Posted to dev@lucene.apache.org by "Mark Miller (JIRA)" <ji...@apache.org> on 2013/11/22 07:00:47 UTC

[jira] [Comment Edited] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

    [ https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13829678#comment-13829678 ] 

Mark Miller edited comment on SOLR-1301 at 11/22/13 5:59 AM:
-------------------------------------------------------------

New Patch.

* Updated to trunk

* A pass at putting dependencies in the correct modules

* A script for running the MapReduceIndexTool - putting the classpath in the jar manifest doesn't seem very nice.

* Updated to CDK 0.8.1

I'm sure there are a variety of other things to polish, fix, decide on, and finalize, as well as code to sync up - but nothing that needs to be done before this is committed. I need to get this in ASAP, as it's a large burden to maintain over time.

Except for the test policy issue. That is the only remaining blocker I know of for committing.

I also have to do a bit of manual testing.

You can run the tool by running Solr's 'ant package' and then expanding one of the release zip/tgz files. Try something like:

cd solr/example/scripts/solr-mr
sh solr-mr.sh --help



> Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.
> ---------------------------------------------------------------------------------
>
>                 Key: SOLR-1301
>                 URL: https://issues.apache.org/jira/browse/SOLR-1301
>             Project: Solr
>          Issue Type: New Feature
>            Reporter: Andrzej Bialecki 
>            Assignee: Mark Miller
>             Fix For: 5.0, 4.7
>
>         Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, SOLR-1301-hadoop-0-20.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, log4j-1.2.15.jar
>
>
> This patch contains a contrib module that provides distributed indexing (using Hadoop) to Solr's EmbeddedSolrServer. The idea behind this module is twofold:
> * provide an API that is familiar to Hadoop developers, i.e. that of OutputFormat
> * avoid unnecessary export and (de)serialization of data maintained on HDFS. SolrOutputFormat consumes data produced by reduce tasks directly, without storing it in intermediate files. Furthermore, by using an EmbeddedSolrServer, the indexing task is split into as many parts as there are reducers, and the data to be indexed is not sent over the network.
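> As a rough sketch of how a client job might wire this up (illustrative, not from the patch - the job class, paths, and reducer count below are made up, and SolrOutputFormat may require additional configuration such as the solr.home location):
>
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.mapred.FileInputFormat;
> import org.apache.hadoop.mapred.FileOutputFormat;
> import org.apache.hadoop.mapred.JobClient;
> import org.apache.hadoop.mapred.JobConf;
>
> public class CsvIndexJob {
>   public static void main(String[] args) throws Exception {
>     JobConf conf = new JobConf(CsvIndexJob.class);
>     conf.setJobName("solr-index");
>     // Reduce output is written straight into embedded Solr instances,
>     // one per reducer, instead of into intermediate files.
>     conf.setOutputFormat(SolrOutputFormat.class);
>     conf.setNumReduceTasks(4); // 4 reducers -> 4 output shards (part-00000..part-00003)
>     FileInputFormat.setInputPaths(conf, new Path(args[0]));
>     FileOutputFormat.setOutputPath(conf, new Path(args[1])); // shard homes land here
>     JobClient.runJob(conf);
>   }
> }
>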
> Design
> ----------
> Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, which in turn uses SolrRecordWriter to write the data. SolrRecordWriter instantiates an EmbeddedSolrServer, along with an implementation of SolrDocumentConverter, which is responsible for turning a Hadoop (key, value) pair into a SolrInputDocument. Documents are added to a batch, which is periodically submitted to the EmbeddedSolrServer. When a reduce task completes and the OutputFormat is closed, SolrRecordWriter calls commit() and optimize() on the EmbeddedSolrServer.
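> For illustration, a converter for simple line-oriented input might look like the following (the exact SolrDocumentConverter contract is not shown in this issue, so the method signature and the "id"/"text" field names are assumptions):
>
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.solr.common.SolrInputDocument;
>
> public class LineDocumentConverter {
>   // Turn one Hadoop (key, value) pair into one Solr document.
>   public SolrInputDocument convert(LongWritable key, Text value) {
>     SolrInputDocument doc = new SolrInputDocument();
>     doc.addField("id", key.toString());     // assumed unique-key field
>     doc.addField("text", value.toString()); // assumed body field
>     return doc;
>   }
> }
>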
> The API provides facilities to specify an arbitrary existing solr.home directory, from which the conf/ and lib/ files will be taken.
> This process results in the creation of as many partial Solr home directories as there were reduce tasks. The output shards are placed in the output directory on the default filesystem (e.g. HDFS). Such part-NNNNN directories can be used to run N shard servers. Additionally, users can specify the number of reduce tasks, in particular 1 reduce task, in which case the output will consist of a single shard.
> An example application is provided that processes large CSV files using this API. It uses custom CSV processing to avoid (de)serialization overhead.
> This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this issue; you should put it in contrib/hadoop/lib.
> Note: the development of this patch was sponsored by an anonymous contributor and approved for release under Apache License.



