Posted to dev@ambari.apache.org by "Robert Levas (JIRA)" <ji...@apache.org> on 2015/02/26 22:34:04 UTC

[jira] [Updated] (AMBARI-9822) Check Pig failed after ambari only upgrade from 1.6.0 to 2.0.0 and enabling security

     [ https://issues.apache.org/jira/browse/AMBARI-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Levas updated AMBARI-9822:
---------------------------------
    Attachment: AMBARI-9822_01.patch

> Check Pig failed after ambari only upgrade from 1.6.0 to 2.0.0 and enabling security
> ------------------------------------------------------------------------------------
>
>                 Key: AMBARI-9822
>                 URL: https://issues.apache.org/jira/browse/AMBARI-9822
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.0.0
>            Reporter: Robert Levas
>            Assignee: Robert Levas
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-9822_01.patch
>
>
> *Cluster*
> http://172.18.145.45:8080/#/main/hosts
> STR:
> 1) Set up old version 1.6.0 with all services
> 2) Perform an Ambari-only upgrade to 2.0.0
> 3) Enable security
> 4) Run the Pig service check
> Actual result:
> The Pig service check failed.
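> For reference, a minimal sketch of the service_check logic that the traceback below points at (/var/lib/ambari-agent/cache/common-services/PIG/0.12.0.2.0/package/scripts/service_check.py), reconstructed from the resource parameters printed in the stdout log further down; this is not the actual package source, and only params.smokeuser appears in the traceback itself -- the literal paths and kwargs are copied from the log:
> {code}
> # Sketch only -- reconstructed from the stdout log in this report,
> # not the shipped 0.12.0.2.0 package source.
> from resource_management import *
>
> class PigServiceCheck(Script):
>   def service_check(self, env):
>     import params
>     env.set_params(params)
>
>     # kinit as the smoke user (security is enabled) and stage /etc/passwd
>     # as the input file, cleaning up any previous smoke-test output.
>     ExecuteHadoop('dfs -rmr pigsmoke.out passwd; '
>                   'hadoop --config /etc/hadoop/conf dfs -put /etc/passwd passwd ',
>                   user=params.smokeuser,
>                   security_enabled=True,
>                   keytab='/etc/security/keytabs/smokeuser.headless.keytab',
>                   kinit_path_local='/usr/bin/kinit',
>                   principal='ambari-qa@EXAMPLE.COM',
>                   tries=3, try_sleep=5,
>                   conf_dir='/etc/hadoop/conf',
>                   bin_dir='/usr/bin')
>
>     # Stage the smoke script and run it as the smoke user; this Execute
>     # is what raises Fail after three tries once job submission to the
>     # secured cluster starts failing.
>     File('/var/lib/ambari-agent/data/tmp/pigSmoke.sh',
>          content=StaticFile('pigSmoke.sh'),
>          mode=0755)
>     Execute('pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh',
>             tries=3, try_sleep=5,
>             user=params.smokeuser)
> {code}
> The failing run: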
> {code}
> stderr:   /var/lib/ambari-agent/data/errors-663.txt
> 2015-02-15 14:31:04,754 - Error while executing command 'service_check':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 208, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/PIG/0.12.0.2.0/package/scripts/service_check.py", line 61, in service_check
>     user      = params.smokeuser
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 276, in action_run
>     raise ex
> Fail: Execution of 'pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh' returned 2. 2015-02-15 14:30:47,099 [main] INFO  org.apache.pig.Main - Apache Pig version 0.12.1.2.1.7.0-784 (rexported) compiled Oct 25 2014, 13:23:09
> 2015-02-15 14:30:47,100 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1424010647097.log
> 2015-02-15 14:30:48,592 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
> 2015-02-15 14:30:48,916 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> 2015-02-15 14:30:48,926 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:48,926 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020
> 2015-02-15 14:30:50,772 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> 2015-02-15 14:30:50,818 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
> 2015-02-15 14:30:50,845 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
> 2015-02-15 14:30:50,879 [main] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 43 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:50,906 [main] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 43 for ambari-qa)
> 2015-02-15 14:30:51,037 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
> 2015-02-15 14:30:51,077 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
> 2015-02-15 14:30:51,078 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
> 2015-02-15 14:30:52,120 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:52,124 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:52,438 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
> 2015-02-15 14:30:52,452 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job1741629516120112972.jar
> 2015-02-15 14:30:56,461 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job1741629516120112972.jar created
> 2015-02-15 14:30:56,461 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 2015-02-15 14:30:56,508 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> 2015-02-15 14:30:56,525 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
> 2015-02-15 14:30:56,525 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
> 2015-02-15 14:30:56,527 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
> 2015-02-15 14:30:56,579 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
> 2015-02-15 14:30:56,580 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
> 2015-02-15 14:30:56,819 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:56,820 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:56,871 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:56,892 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 44 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:56,893 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 44 for ambari-qa)
> 2015-02-15 14:30:57,800 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 45 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:57,800 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 45 for ambari-qa)
> 2015-02-15 14:30:57,804 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
> 2015-02-15 14:30:57,804 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
> 2015-02-15 14:30:57,829 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
> 2015-02-15 14:30:58,025 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
> 2015-02-15 14:30:58,063 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:58,491 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1424006341901_0003
> 2015-02-15 14:30:58,492 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 44 for ambari-qa)
> 2015-02-15 14:30:58,741 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/ambari-qa/.staging/job_1424006341901_0003
> 2015-02-15 14:30:58,753 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:pigSmoke.sh got an error while submitting 
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1424006341901_0003
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[16,4],B[17,4] C:  R: 
> 2015-02-15 14:30:58,761 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> 2015-02-15 14:31:03,770 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
> 2015-02-15 14:31:03,770 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1424006341901_0003 has failed! Stop running all dependent jobs
> 2015-02-15 14:31:03,770 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> 2015-02-15 14:31:04,117 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR: org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0003' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> 2015-02-15 14:31:04,118 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> 2015-02-15 14:31:04,120 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
> HadoopVersion	PigVersion	UserId	StartedAt	FinishedAt	Features
> 2.4.0.2.1.7.0-784	0.12.1.2.1.7.0-784	ambari-qa	2015-02-15 14:30:52	2015-02-15 14:31:04	UNKNOWN
> Failed!
> Failed Jobs:
> JobId	Alias	Feature	Message	Outputs
> job_1424006341901_0003	A,B	MAP_ONLY	Message: java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 	hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out,
> Input(s):
> Failed to read data from "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/passwd"
> Output(s):
> Failed to produce result in "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out"
> Counters:
> Total records written : 0
> Total bytes written : 0
> Spillable Memory Manager spill count : 0
> Total bags proactively spilled: 0
> Total records proactively spilled: 0
> Job DAG:
> job_1424006341901_0003
> 2015-02-15 14:31:04,120 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2015-02-15 14:31:04,131 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Encountered IOException. org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0003' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Details at logfile: /home/ambari-qa/pig_1424010647097.log
> stdout:   /var/lib/ambari-agent/data/output-663.txt
> 2015-02-15 14:29:49,189 - u"ExecuteHadoop['dfs -rmr pigsmoke.out passwd; hadoop --config /etc/hadoop/conf dfs -put /etc/passwd passwd ']" {'security_enabled': True, 'keytab': '/etc/security/keytabs/smokeuser.headless.keytab', 'conf_dir': '/etc/hadoop/conf', 'try_sleep': 5, 'kinit_path_local': '/usr/bin/kinit', 'tries': 3, 'user': 'ambari-qa', 'bin_dir': '/usr/bin', 'principal': 'ambari-qa@EXAMPLE.COM'}
> 2015-02-15 14:29:49,190 - u"Execute['/usr/bin/kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa@EXAMPLE.COM']" {'path': ['/bin'], 'user': 'ambari-qa'}
> 2015-02-15 14:29:49,286 - u"Execute['hadoop --config /etc/hadoop/conf dfs -rmr pigsmoke.out passwd; hadoop --config /etc/hadoop/conf dfs -put /etc/passwd passwd ']" {'logoutput': None, 'try_sleep': 5, 'environment': {}, 'tries': 3, 'user': 'ambari-qa', 'path': ['/usr/bin']}
> 2015-02-15 14:29:56,834 - u"File['/var/lib/ambari-agent/data/tmp/pigSmoke.sh']" {'content': StaticFile('pigSmoke.sh'), 'mode': 0755}
> 2015-02-15 14:29:56,846 - u"Execute['pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh']" {'path': [':/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'user': 'ambari-qa', 'try_sleep': 5}
> 2015-02-15 14:30:16,722 - Retrying after 5 seconds. Reason: Execution of 'pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh' returned 2. 2015-02-15 14:29:58,893 [main] INFO  org.apache.pig.Main - Apache Pig version 0.12.1.2.1.7.0-784 (rexported) compiled Oct 25 2014, 13:23:09
> 2015-02-15 14:29:58,893 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1424010598891.log
> 2015-02-15 14:30:00,096 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
> 2015-02-15 14:30:00,328 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> 2015-02-15 14:30:00,329 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:00,329 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020
> 2015-02-15 14:30:02,926 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> 2015-02-15 14:30:02,988 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
> 2015-02-15 14:30:03,023 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
> 2015-02-15 14:30:03,065 [main] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 37 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:03,086 [main] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 37 for ambari-qa)
> 2015-02-15 14:30:03,210 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
> 2015-02-15 14:30:03,264 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
> 2015-02-15 14:30:03,264 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
> 2015-02-15 14:30:04,242 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:04,245 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:04,541 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
> 2015-02-15 14:30:04,549 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
> 2015-02-15 14:30:04,549 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
> 2015-02-15 14:30:04,549 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
> 2015-02-15 14:30:04,552 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job2700202488205496570.jar
> 2015-02-15 14:30:08,187 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job2700202488205496570.jar created
> 2015-02-15 14:30:08,187 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 2015-02-15 14:30:08,225 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> 2015-02-15 14:30:08,239 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
> 2015-02-15 14:30:08,239 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
> 2015-02-15 14:30:08,241 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
> 2015-02-15 14:30:08,301 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
> 2015-02-15 14:30:08,303 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
> 2015-02-15 14:30:08,505 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:08,505 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:08,572 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:08,622 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 38 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:08,622 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 38 for ambari-qa)
> 2015-02-15 14:30:09,888 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 39 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:09,889 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 39 for ambari-qa)
> 2015-02-15 14:30:09,895 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
> 2015-02-15 14:30:09,896 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
> 2015-02-15 14:30:09,926 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
> 2015-02-15 14:30:10,155 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
> 2015-02-15 14:30:10,203 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:10,529 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1424006341901_0001
> 2015-02-15 14:30:10,530 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 38 for ambari-qa)
> 2015-02-15 14:30:10,746 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/ambari-qa/.staging/job_1424006341901_0001
> 2015-02-15 14:30:10,768 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:pigSmoke.sh got an error while submitting 
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 2015-02-15 14:30:10,772 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1424006341901_0001
> 2015-02-15 14:30:10,772 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
> 2015-02-15 14:30:10,772 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[16,4],B[17,4] C:  R: 
> 2015-02-15 14:30:10,777 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> 2015-02-15 14:30:15,789 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
> 2015-02-15 14:30:15,789 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1424006341901_0001 has failed! Stop running all dependent jobs
> 2015-02-15 14:30:15,789 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> 2015-02-15 14:30:16,141 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR: org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0001' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> 2015-02-15 14:30:16,142 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> 2015-02-15 14:30:16,144 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
> HadoopVersion	PigVersion	UserId	StartedAt	FinishedAt	Features
> 2.4.0.2.1.7.0-784	0.12.1.2.1.7.0-784	ambari-qa	2015-02-15 14:30:04	2015-02-15 14:30:16	UNKNOWN
> Failed!
> Failed Jobs:
> JobId	Alias	Feature	Message	Outputs
> job_1424006341901_0001	A,B	MAP_ONLY	Message: java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 	hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out,
> Input(s):
> Failed to read data from "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/passwd"
> Output(s):
> Failed to produce result in "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out"
> Counters:
> Total records written : 0
> Total bytes written : 0
> Spillable Memory Manager spill count : 0
> Total bags proactively spilled: 0
> Total records proactively spilled: 0
> Job DAG:
> job_1424006341901_0001
> 2015-02-15 14:30:16,144 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2015-02-15 14:30:16,152 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Encountered IOException. org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0001' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Details at logfile: /home/ambari-qa/pig_1424010598891.log
> 2015-02-15 14:30:39,929 - Retrying after 5 seconds. Reason: Execution of 'pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh' returned 2. 2015-02-15 14:30:23,797 [main] INFO  org.apache.pig.Main - Apache Pig version 0.12.1.2.1.7.0-784 (rexported) compiled Oct 25 2014, 13:23:09
> 2015-02-15 14:30:23,798 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1424010623794.log
> 2015-02-15 14:30:25,075 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
> 2015-02-15 14:30:25,318 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> 2015-02-15 14:30:25,319 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:25,319 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020
> 2015-02-15 14:30:27,144 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> 2015-02-15 14:30:27,190 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
> 2015-02-15 14:30:27,220 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
> 2015-02-15 14:30:27,286 [main] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 40 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:27,313 [main] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 40 for ambari-qa)
> 2015-02-15 14:30:27,443 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
> 2015-02-15 14:30:27,481 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
> 2015-02-15 14:30:27,481 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
> 2015-02-15 14:30:28,333 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:28,336 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:28,578 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
> 2015-02-15 14:30:28,591 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
> 2015-02-15 14:30:28,591 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
> 2015-02-15 14:30:28,591 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
> 2015-02-15 14:30:28,593 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job5157789178969746140.jar
> 2015-02-15 14:30:31,766 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job5157789178969746140.jar created
> 2015-02-15 14:30:31,766 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 2015-02-15 14:30:31,804 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> 2015-02-15 14:30:31,813 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
> 2015-02-15 14:30:31,813 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
> 2015-02-15 14:30:31,815 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
> 2015-02-15 14:30:31,858 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
> 2015-02-15 14:30:31,859 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
> 2015-02-15 14:30:32,048 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:32,048 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:32,109 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:32,126 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 41 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:32,126 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 41 for ambari-qa)
> 2015-02-15 14:30:33,001 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 42 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:33,001 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 42 for ambari-qa)
> 2015-02-15 14:30:33,006 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
> 2015-02-15 14:30:33,006 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
> 2015-02-15 14:30:33,030 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
> 2015-02-15 14:30:33,183 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
> 2015-02-15 14:30:33,214 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:33,584 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1424006341901_0002
> 2015-02-15 14:30:33,585 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 41 for ambari-qa)
> 2015-02-15 14:30:33,822 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/ambari-qa/.staging/job_1424006341901_0002
> 2015-02-15 14:30:33,837 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:pigSmoke.sh got an error while submitting 
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 2015-02-15 14:30:33,842 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1424006341901_0002
> 2015-02-15 14:30:33,842 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
> 2015-02-15 14:30:33,842 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[16,4],B[17,4] C:  R: 
> 2015-02-15 14:30:33,846 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> 2015-02-15 14:30:38,856 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
> 2015-02-15 14:30:38,857 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1424006341901_0002 has failed! Stop running all dependent jobs
> 2015-02-15 14:30:38,857 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> 2015-02-15 14:30:39,206 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR: org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0002' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> 2015-02-15 14:30:39,207 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> 2015-02-15 14:30:39,210 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
> HadoopVersion	PigVersion	UserId	StartedAt	FinishedAt	Features
> 2.4.0.2.1.7.0-784	0.12.1.2.1.7.0-784	ambari-qa	2015-02-15 14:30:28	2015-02-15 14:30:39	UNKNOWN
> Failed!
> Failed Jobs:
> JobId	Alias	Feature	Message	Outputs
> job_1424006341901_0002	A,B	MAP_ONLY	Message: java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 	hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out,
> Input(s):
> Failed to read data from "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/passwd"
> Output(s):
> Failed to produce result in "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out"
> Counters:
> Total records written : 0
> Total bytes written : 0
> Spillable Memory Manager spill count : 0
> Total bags proactively spilled: 0
> Total records proactively spilled: 0
> Job DAG:
> job_1424006341901_0002
> 2015-02-15 14:30:39,212 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2015-02-15 14:30:39,226 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Encountered IOException. org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0002' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Details at logfile: /home/ambari-qa/pig_1424010623794.log
> 2015-02-15 14:31:04,754 - Error while executing command 'service_check':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 208, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/common-services/PIG/0.12.0.2.0/package/scripts/service_check.py", line 61, in service_check
>     user      = params.smokeuser
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 152, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 118, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 276, in action_run
>     raise ex
> Fail: Execution of 'pig /var/lib/ambari-agent/data/tmp/pigSmoke.sh' returned 2. 2015-02-15 14:30:47,099 [main] INFO  org.apache.pig.Main - Apache Pig version 0.12.1.2.1.7.0-784 (rexported) compiled Oct 25 2014, 13:23:09
> 2015-02-15 14:30:47,100 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/ambari-qa/pig_1424010647097.log
> 2015-02-15 14:30:48,592 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /home/ambari-qa/.pigbootup not found
> 2015-02-15 14:30:48,916 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
> 2015-02-15 14:30:48,926 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:48,926 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020
> 2015-02-15 14:30:50,772 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
> 2015-02-15 14:30:50,818 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
> 2015-02-15 14:30:50,845 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
> 2015-02-15 14:30:50,879 [main] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 43 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:50,906 [main] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 43 for ambari-qa)
> 2015-02-15 14:30:51,037 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
> 2015-02-15 14:30:51,077 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
> 2015-02-15 14:30:51,078 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
> 2015-02-15 14:30:52,120 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:52,124 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:52,438 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
> 2015-02-15 14:30:52,449 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
> 2015-02-15 14:30:52,452 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job1741629516120112972.jar
> 2015-02-15 14:30:56,461 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job1741629516120112972.jar created
> 2015-02-15 14:30:56,461 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 2015-02-15 14:30:56,508 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
> 2015-02-15 14:30:56,525 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
> 2015-02-15 14:30:56,525 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
> 2015-02-15 14:30:56,527 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
> 2015-02-15 14:30:56,579 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
> 2015-02-15 14:30:56,580 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
> 2015-02-15 14:30:56,819 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188/ws/v1/timeline/
> 2015-02-15 14:30:56,820 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at amb-upg160-sles113postgres1423994333-4.cs1cloud.internal/172.18.145.37:8050
> 2015-02-15 14:30:56,871 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:56,892 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 44 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:56,893 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 44 for ambari-qa)
> 2015-02-15 14:30:57,800 [JobControl] INFO  org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 45 for ambari-qa on 172.18.145.154:8020
> 2015-02-15 14:30:57,800 [JobControl] INFO  org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 45 for ambari-qa)
> 2015-02-15 14:30:57,804 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
> 2015-02-15 14:30:57,804 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
> 2015-02-15 14:30:57,829 [JobControl] INFO  org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
> 2015-02-15 14:30:58,025 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
> 2015-02-15 14:30:58,063 [JobControl] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
> 2015-02-15 14:30:58,491 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1424006341901_0003
> 2015-02-15 14:30:58,492 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Kind: HDFS_DELEGATION_TOKEN, Service: 172.18.145.154:8020, Ident: (HDFS_DELEGATION_TOKEN token 44 for ambari-qa)
> 2015-02-15 14:30:58,741 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/ambari-qa/.staging/job_1424006341901_0003
> 2015-02-15 14:30:58,753 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:pigSmoke.sh got an error while submitting 
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1424006341901_0003
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
> 2015-02-15 14:30:58,757 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[16,4],B[17,4] C:  R: 
> 2015-02-15 14:30:58,761 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
> 2015-02-15 14:31:03,770 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
> 2015-02-15 14:31:03,770 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1424006341901_0003 has failed! Stop running all dependent jobs
> 2015-02-15 14:31:03,770 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
> 2015-02-15 14:31:04,117 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR: org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0003' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> 2015-02-15 14:31:04,118 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
> 2015-02-15 14:31:04,120 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
> HadoopVersion	PigVersion	UserId	StartedAt	FinishedAt	Features
> 2.4.0.2.1.7.0-784	0.12.1.2.1.7.0-784	ambari-qa	2015-02-15 14:30:52	2015-02-15 14:31:04	UNKNOWN
> Failed!
> Failed Jobs:
> JobId	Alias	Feature	Message	Outputs
> job_1424006341901_0003	A,B	MAP_ONLY	Message: java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
> 	at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.authenticate(TimelineAuthenticator.java:97)
> 	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineAuthenticator.getDelegationToken(TimelineAuthenticator.java:112)
> 	at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.getDelegationToken(TimelineClientImpl.java:167)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addTimelineDelegationToken(YarnClientImpl.java:275)
> 	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:221)
> 	at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:282)
> 	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:289)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> 	at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> 	at java.lang.Thread.run(Thread.java:744)
> 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
> 	hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out,
> Input(s):
> Failed to read data from "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/passwd"
> Output(s):
> Failed to produce result in "hdfs://amb-upg160-sles113postgres1423994333-1.cs1cloud.internal:8020/user/ambari-qa/pigsmoke.out"
> Counters:
> Total records written : 0
> Total bytes written : 0
> Spillable Memory Manager spill count : 0
> Total bags proactively spilled: 0
> Total records proactively spilled: 0
> Job DAG:
> job_1424006341901_0003
> 2015-02-15 14:31:04,120 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
> 2015-02-15 14:31:04,131 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2997: Encountered IOException. org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application with id 'application_1424006341901_0003' doesn't exist in RM.
> 	at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:288)
> 	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:145)
> 	at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:321)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> Details at logfile: /home/ambari-qa/pig_1424010647097.log
> {code}
> Expected result:
> The Pig service check passes.
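> 
> Analysis (one reading of the traces above, not a confirmed root cause): the {{java.net.ConnectException: Connection refused}} is raised from {{KerberosAuthenticator.authenticate}} while {{YarnClientImpl.addTimelineDelegationToken}} tries to fetch a delegation token from the Application Timeline Server at {{amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188}}. Submission therefore never reaches the ResourceManager, which is why the RM later answers status polls with {{ApplicationNotFoundException}} for the same application ids. On a Kerberized cluster this token is only requested when the timeline service is enabled, so the yarn-site.xml properties sketched below sit on the failing code path. This is a sketch of the relevant configuration only; whether the correct fix is to start/repair the ATS or to disable the timeline service is an assumption, not something this report establishes.
> {code}
> <!-- Sketch: yarn-site.xml properties on the failing code path
>      (hostname and port taken from the log output above). -->
> <property>
>   <!-- When true on a secure cluster, job submission first fetches
>        a timeline delegation token before contacting the RM. -->
>   <name>yarn.timeline-service.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <!-- "Connection refused" in the trace means nothing is listening
>        at this address/port. -->
>   <name>yarn.timeline-service.webapp.address</name>
>   <value>amb-upg160-sles113postgres1423994333-4.cs1cloud.internal:8188</value>
> </property>
> {code}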


