Posted to dev@lucene.apache.org by "SimonDZhu (JIRA)" <ji...@apache.org> on 2014/09/27 12:12:34 UTC
[jira] [Commented] (SOLR-5551) Error while updating replicas
[ https://issues.apache.org/jira/browse/SOLR-5551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150518#comment-14150518 ]
SimonDZhu commented on SOLR-5551:
---------------------------------
Hi All,
I ran into a similar issue on SolrCloud, which always fails with the following error message.
Could anyone help, please?
Thanks in advance,
Simon
14/09/27 17:27:05 INFO mapreduce.Job: map 100% reduce 68%
14/09/27 17:27:05 INFO mapreduce.Job: Task Id : attempt_1411790808211_0012_r_000000_2, Status : FAILED
Error: org.apache.solr.common.SolrException: Bad Request
Bad Request
request: http://master:8899/solr/update?wt=javabin&version=2
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:135)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:88)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:493)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:422)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:323)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:53)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
14/09/27 17:27:06 INFO mapreduce.Job: map 100% reduce 0%
14/09/27 17:27:28 INFO mapreduce.Job: map 100% reduce 15%
14/09/27 17:27:51 INFO mapreduce.Job: map 100% reduce 32%
14/09/27 17:27:54 INFO mapreduce.Job: map 100% reduce 33%
14/09/27 17:29:26 INFO mapreduce.Job: map 100% reduce 67%
14/09/27 17:29:59 INFO mapreduce.Job: map 100% reduce 68%
14/09/27 17:30:09 INFO mapreduce.Job: map 100% reduce 100%
14/09/27 17:30:15 INFO mapreduce.Job: Job job_1411790808211_0012 failed with state FAILED due to: Task failed task_1411790808211_0012_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
14/09/27 17:30:17 INFO mapreduce.Job: Counters: 40
File System Counters
FILE: Number of bytes read=177143751
FILE: Number of bytes written=1245091385
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=593913729
HDFS: Number of bytes written=0
HDFS: Number of read operations=136
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=5
Failed reduce tasks=4
Launched map tasks=39
Launched reduce tasks=4
Other local map tasks=5
Data-local map tasks=33
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=1871174
Total time spent by all reduces in occupied slots (ms)=486005
Total time spent by all map tasks (ms)=1871174
Total time spent by all reduce tasks (ms)=486005
Total vcore-seconds taken by all map tasks=1871174
Total vcore-seconds taken by all reduce tasks=486005
Total megabyte-seconds taken by all map tasks=1916082176
Total megabyte-seconds taken by all reduce tasks=497669120
Map-Reduce Framework
Map input records=3751130
Map output records=3751130
Map output bytes=1055861232
Map output materialized bytes=1064042642
Input split bytes=4799
Combine input records=0
Spilled Records=3795415
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=3251
CPU time spent (ms)=127480
Physical memory (bytes) snapshot=14463537152
Virtual memory (bytes) snapshot=173441761280
Total committed heap usage (bytes)=14498136064
File Input Format Counters
Bytes Read=593908930
14/09/27 17:30:17 ERROR indexer.IndexingJob: Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:114)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:176)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
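For anyone hitting the same trace: the SolrJ client surfaces only the terse "Bad Request", but the HTTP 400 response body (and the Solr server log) usually contains the real cause, often a schema/field type mismatch. Below is a minimal, hypothetical Python sketch for probing the same /update endpoint by hand; the URL, the field names, and the helper names are illustrative assumptions, not part of the original report.

```python
# Hedged diagnostic sketch: POST one document to Solr's XML /update handler
# and keep the response body even on HTTP 400, since that body (unlike the
# short SolrJ exception) states the underlying reason for the Bad Request.
import urllib.request
import urllib.error

def build_doc(fields):
    """Build a minimal XML <add> update document from a field -> value dict."""
    parts = ["<add><doc>"]
    for name, value in fields.items():
        parts.append('<field name="%s">%s</field>' % (name, value))
    parts.append("</doc></add>")
    return "".join(parts)

def probe_update(solr_url, doc_xml):
    """POST doc_xml to Solr and return (status, body), including on HTTP 400."""
    req = urllib.request.Request(
        solr_url + "/update?commit=false",
        data=doc_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode("utf-8")
    except urllib.error.HTTPError as e:
        # A 400 carries the real error message in its body -- do not discard it.
        return e.code, e.read().decode("utf-8")

# Example (assumed endpoint from the log above; run against your own core):
#   status, body = probe_update("http://master:8899/solr",
#                               build_doc({"id": "probe-1", "title": "hello"}))
#   print(status, body)
```

Sending one hand-built document like this narrows the failure to a specific field, which is much faster than re-running the whole Nutch indexing job.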
> Error while updating replicas
> -----------------------------
>
> Key: SOLR-5551
> URL: https://issues.apache.org/jira/browse/SOLR-5551
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Affects Versions: 4.6
> Reporter: David Boychuck
> Attachments: 24.log, 25.log, 26.log
>
>
> There is an error with PeerSync in SolrCloud mode with decimal values. I have described the issue in detail here: http://lucene.472066.n3.nabble.com/Solr-Cloud-error-with-shard-update-td4106260.html
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org