Posted to solr-user@lucene.apache.org by Gora Mohanty <go...@srijan.in> on 2010/07/29 11:51:39 UTC

Implementing lookups while importing data

Hi,

We have a database that has numeric values for some columns, which
correspond to text values in drop-downs on a website. We need to
index both the numeric and text equivalents into Solr, and can do
that via a lookup on a different table from the one holding the
main data. We are currently doing this via a JOIN on the numeric
field, between the main data table and the lookup table, but this
dramatically slows down indexing.

We could try using CachedSqlEntityProcessor, but there are some
issues in doing that, as our data import handler configuration is
quite complicated.

As the lookups need to be done only once, I was planning the
following:
(a) Do the lookups in a custom data source that extends
    JDBCDataSource, and store them in arrays.
(b) Implement a custom transformer that uses the array data
    to convert numeric values read from the database to text.
Comments on this approach, or suggestions for simpler ones would be
much appreciated.
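
For concreteness, a rough sketch of the transformer in step (b) might
look like the following (class, column, and field names here are
invented for illustration; the lookup map would be filled once by the
custom data source from step (a) before the import starts):

import java.util.Map;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

public class LookupTransformer extends Transformer {
    // Filled once, e.g. by the custom data source, before indexing begins.
    private static Map<Object, String> statusLookup;

    public static void setLookup(Map<Object, String> lookup) {
        statusLookup = lookup;
    }

    @Override
    public Object transformRow(Map<String, Object> row, Context context) {
        Object code = row.get("status");          // numeric value from the main table
        if (code != null && statusLookup != null) {
            String text = statusLookup.get(code); // resolve to the drop-down label
            if (text != null) {
                row.put("status_text", text);     // text equivalent, indexed alongside the code
            }
        }
        return row;
    }
}

The entity in data-config.xml would then reference it via
transformer="LookupTransformer" (use the fully qualified class name if
the class is not in the dataimport package).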

Regards,
Gora

RE: PDF file

Posted by "Ma, Xiaohui (NIH/NLM/LHC) [C]" <xi...@mail.nlm.nih.gov>.
Thanks, I knew how to enable streaming, but I got another error: ERROR:unknown field 'metadata_trapped'.

Does anyone know how to match the schema fields up with the SolrCell metadata? I found the following comment in schema.xml, but I don't know what changes to make for PDF.

<!-- Common metadata fields, named specifically to match up with
     SolrCell metadata when parsing rich documents such as Word, PDF.
     Some fields are multiValued only because Tika currently may return
     multiple values for them. -->
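
One common way to handle metadata fields that are not declared in the
schema (a sketch, assuming the stock Solr 1.4 example schema, which
already ships an "ignored" field type) is to catch them with a dynamic
field:

<fieldtype name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
<!-- any Tika metadata field without an explicit <field> entry is swallowed here -->
<dynamicField name="ignored_*" type="ignored" multiValued="true"/>

and add &uprefix=ignored_ to the /update/extract URL, so a field such
as metadata_trapped becomes ignored_metadata_trapped instead of
causing an "unknown field" error. Alternatively, declare
metadata_trapped (and any other metadata you care about) explicitly in
schema.xml.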

I really appreciate your help!
Thanks,


RE: PDF file

Posted by "Ma, Xiaohui (NIH/NLM/LHC) [C]" <xi...@mail.nlm.nih.gov>.
Thanks so much for your help! I got a "Remote Streaming is disabled" error. Would you please tell me if I am missing something?

Thanks, 
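
(For reference: stream.file and stream.url only work when remote
streaming is switched on in solrconfig.xml. A minimal sketch, assuming
the stock 1.4 requestDispatcher section, is:

<requestDispatcher handleSelect="true">
  <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048" />
</requestDispatcher>

Restart Solr, or reload the core, after changing it.)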


Re: PDF file

Posted by Jayendra Patil <ja...@gmail.com>.
Try ...

curl "http://lhcinternal.nlm.nih.gov:8989/solr/lhc/update/extract?stream.file=<Full_Path_of_File>/pub2009001.pdf&literal.id=777045&commit=true"

stream.file - specify full path
literal.<extra params> - specify any extra params if needed

Regards,
Jayendra


RE: PDF file

Posted by "Ma, Xiaohui (NIH/NLM/LHC) [C]" <xi...@mail.nlm.nih.gov>.
Thanks so much for your help! I tried to index a pdf file and got the following. The command I used is 

curl 'http://lhcinternal.nlm.nih.gov:8989/solr/lhc/update/extract?map.content=text&map.stream_name=id&commit=true' -F "file=@pub2009001.pdf"

Did I do something wrong? Do I need to modify anything in schema.xml or another configuration file?

********************************************
[xiaohui@lhcinternal lhc]$ curl 'http://lhcinternal.nlm.nih.gov:8989/solr/lhc/update/extract?map.content=text&map.stream_name=id&commit=true' -F "file=@pub2009001.pdf"
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 404 </title>
</head>
<body><h2>HTTP ERROR: 404</h2><pre>NOT_FOUND</pre>
<p>RequestURI=/solr/lhc/update/extract</p><p><i><small><a href="http://jetty.mortbay.org/">Powered by Jetty://</a></small></i></p><br/>                                                

</body>
</html>
*******************************************



RE: PDF file

Posted by "Sharp, Jonathan" <JS...@coh.org>.
Xiaohui,

You need to add the following jars to the lib subdirectory of the solr config directory on your server. 

(path inside the solr 1.4.1 download)

/dist/apache-solr-cell-1.4.1.jar
plus all the jars in 
/contrib/extraction/lib
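
A sketch of the copy step, assuming the 1.4.1 distribution was
unpacked next to a Solr home whose core lives in <solr.home>/lhc
(adjust the paths to your own layout):

mkdir -p <solr.home>/lhc/lib
cp apache-solr-1.4.1/dist/apache-solr-cell-1.4.1.jar <solr.home>/lhc/lib/
cp apache-solr-1.4.1/contrib/extraction/lib/*.jar    <solr.home>/lhc/lib/

(The lib directory sits next to conf, under the core's instance
directory. Alternatively, <lib dir="..."/> directives in solrconfig.xml
can point at the jars where they already are.)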

HTH 

-Jon


RE: PDF file

Posted by "Ma, Xiaohui (NIH/NLM/LHC) [C]" <xi...@mail.nlm.nih.gov>.
Does anyone have any experience with PDF files? I really appreciate your help!
Thanks so much in advance.



Re: PDF file

Posted by Chris Hostetter <ho...@fucit.org>.
: Subject: PDF file
: References: <20...@ibis>
:  <AA...@mail.gmail.com>
: In-Reply-To: <AA...@mail.gmail.com>

http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists

When starting a new discussion on a mailing list, please do not reply to 
an existing message, instead start a fresh email.  Even if you change the 
subject line of your email, other mail headers still track which thread 
you replied to and your question is "hidden" in that thread and gets less 
attention.   It makes following discussions in the mailing list archives 
particularly difficult.
See Also:  http://en.wikipedia.org/wiki/User:DonDiego/Thread_hijacking




-Hoss


PDF file

Posted by "Ma, Xiaohui (NIH/NLM/LHC) [C]" <xi...@mail.nlm.nih.gov>.
I have a lot of pdf files. I am trying to import pdf files into Solr and index them. I added ExtractingRequestHandler to solrconfig.xml.

Please tell me if I need to download some jar files.

In the Solr 1.4 Enterprise Search Server book, the following command is used to import mccm.pdf.

curl 'http://localhost:8983/solr/solr-home/update/extract?map.content=text&map.stream_name=id&commit=true' -F "file=@mccm.pdf"

Please tell me if there is a way to import pdf files from a directory.

Thanks so much for your help!
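
As for indexing a whole directory: the extract handler has no built-in
directory crawler, but a small shell loop works. A sketch (paths and
the id scheme are invented, adjust the URL to your core, e.g.
/solr/lhc/; stream.file needs remote streaming enabled in
solrconfig.xml):

for f in /path/to/pdfs/*.pdf; do
  curl "http://localhost:8983/solr/solr-home/update/extract?stream.file=$f&literal.id=$(basename "$f" .pdf)"
done
curl "http://localhost:8983/solr/solr-home/update?stream.body=<commit/>"

Committing once at the end is usually faster than adding commit=true
to every request.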


Re: Implementing lookups while importing data

Posted by Alexey Serba <as...@gmail.com>.
> We are currently doing this via a JOIN on the numeric
> field, between the main data table and the lookup table, but this
> dramatically slows down indexing.
I believe a SQL JOIN is the fastest and easiest way in your case (in
comparison with a nested entity, even using CachedSqlEntityProcessor).
You probably don't have proper indexes in your database - check the
SQL query plan.
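
For example (table and column names invented; EXPLAIN syntax varies a
little between databases):

CREATE INDEX idx_lookup_code ON lookup_table (code);

EXPLAIN
SELECT m.*, l.label
FROM main_table m
JOIN lookup_table l ON l.code = m.status_code;

If the plan shows the lookup side being scanned without an index, that
usually explains the slowdown.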

Re: Implementing lookups while importing data

Posted by Chris Hostetter <ho...@fucit.org>.
: We have a database that has numeric values for some columns, which
: correspond to text values in drop-downs on a website. We need to
: index both the numeric and text equivalents into Solr, and can do
: that via a lookup on a different table from the one holding the
: main data. We are currently doing this via a JOIN on the numeric
: field, between the main data table and the lookup table, but this
: dramatically slows down indexing.
: 
: We could try using the CachedSqlEntity processor, but there are
: some issues in doing that, as the data import handler is quite
: complicated.

What you are describing is pretty much the exact use case of
CachedSqlEntityProcessor (as I understand it), so perhaps you should
elaborate on what issues you had.

Showing your DIH config is the best way to get assistance.
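
For instance, a minimal sketch of such a config (table, column and
Solr field names are invented; with CachedSqlEntityProcessor the
lookup query runs once and later rows are resolved from the in-memory
cache):

<entity name="main" query="SELECT id, title, status_code FROM main_table">
  <field column="id" name="id"/>
  <field column="title" name="title"/>
  <field column="status_code" name="status_code"/>
  <entity name="status" processor="CachedSqlEntityProcessor"
          query="SELECT code, label FROM lookup_table"
          where="code=main.status_code">
    <field column="label" name="status_text"/>
  </entity>
</entity>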



-Hoss


Re: Implementing lookups while importing data

Posted by Gora Mohanty <go...@srijan.in>.
On Thu, 29 Jul 2010 12:30:50 +0200
Chantal Ackermann <ch...@btelligent.de> wrote:

> Hi Gora,
> 
> your suggestion is good.
> 
> Two thoughts:
> 1. if both of the tables you are joining are in the same database
> under the same user you might want to check why the join is so
> slow. Maybe you just need to add an index on a column that is
> used in your WHERE clauses. Joins should not be slow.

Hmm, that is a very good point. You can probably tell that I am
a novice at databases :-) Currently, I am probably doing the joins
in a way that is naive, and it slows things down by about an order
of magnitude.

> 2. if the tables are in different databases and you are joining
> them via DIH I tend to agree that this can get too slow (I think
> the connections might not get pooled and the jdbc driver adds too
> much overhead - ATTENTION ASSUMPTION).

They are in the same database.

> If it's not a possibility for you to create a temporary table that
> aggregates the required data before indexing, then your proposal
> is indeed a good solution.

Unfortunately, it is not easily doable for us to recreate the
database. Forgot to mention that.

> Another way I can think of right now, that would only reduce your
> coding effort and change it to a configuration task:
> In your indexing procedure do:
> a) create a temporary solr core on your solr server (see the page
> on core admin in the wiki)
> b) index this tmp core with the text data
> c) index your main core with the data by joining it to the already
>    existing solr index in the tmp core (this is fast, I can assure
>    you, use URLDataSource with XPathEntityProcessor if you are on 1.4)
> d) delete the tmp core (well, or keep it for next time)
[...]

Another great idea, and one which should be less work than a custom
datasource, plus a custom transformer. Thank you very much.

Regards,
Gora

Re: Implementing lookups while importing data

Posted by Chantal Ackermann <ch...@btelligent.de>.
Hi Gora,

your suggestion is good.

Two thoughts:
1. if both of the tables you are joining are in the same database under
the same user you might want to check why the join is so slow. Maybe you
just need to add an index on a column that is used in your WHERE
clauses. Joins should not be slow.

2. if the tables are in different databases and you are joining them via
DIH I tend to agree that this can get too slow (I think the connections
might not get pooled and the jdbc driver adds too much overhead -
ATTENTION ASSUMPTION).
If it's not a possibility for you to create a temporary table that
aggregates the required data before indexing, then your proposal is
indeed a good solution.
Another way I can think of right now, that would only reduce your
coding effort and change it to a configuration task:
In your indexing procedure do:
a) create a temporary solr core on your solr server (see the page on
core admin in the wiki)
b) index this tmp core with the text data
c) index your main core with the data by joining it to the already
existing solr index in the tmp core (this is fast, I can assure you, use
URLDataSource with XPathEntityProcessor if you are on 1.4)
d) delete the tmp core (well, or keep it for next time)
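
A sketch of steps a) and d) with the CoreAdmin HTTP API (assuming
cores are enabled in solr.xml and the tmp core's instanceDir already
holds a conf/ directory):

curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=tmp&instanceDir=tmp"
# ... run imports b) and c) ...
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=tmp"

Step c) can then pull the text values out of the tmp core over HTTP
(URLDataSource plus XPathEntityProcessor against its select handler).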

Chantal


On Thu, 2010-07-29 at 11:51 +0200, Gora Mohanty wrote:
> Hi,
> 
> We have a database that has numeric values for some columns, which
> correspond to text values in drop-downs on a website. We need to
> index both the numeric and text equivalents into Solr, and can do
> that via a lookup on a different table from the one holding the
> main data. We are currently doing this via a JOIN on the numeric
> field, between the main data table and the lookup table, but this
> dramatically slows down indexing.
> 
> We could try using the CachedSqlEntity processor, but there are
> some issues in doing that, as the data import handler is quite
> complicated.
> 
> As the lookups need to be done only once, I was planning the
> following:
> (a) Do the lookups in a custom data source that extends
>     JDBCDataSource, and store them in arrays.
> (b) Implement a custom transformer that uses the array data
>     to convert numeric values read from the database to text.
> Comments on this approach, or suggestions for simpler ones would be
> much appreciated.
> 
> Regards,
> Gora