Posted to solr-user@lucene.apache.org by sunnyfr <jo...@gmail.com> on 2008/12/11 11:38:50 UTC

Make it more performant - Solr 1.3 - 1200 ms response time.

Hi,

I'm running a stress test on Solr.
I have around 8.5M documents, and my data directory is 5.6 GB.

I have reindexed my data to make it faster and applied all the latest
patches.
My index stores just two fields: id and text (which is a copyField of three
source fields).
But I still think the response times are very long. What do you think?
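
For reference, a minimal schema.xml sketch of that kind of copyField setup
(the three source field names below are invented for illustration; they are
not from the original message):

  <field name="id" type="string" indexed="true" stored="true" required="true"/>
  <field name="title" type="text" indexed="true" stored="false"/>
  <field name="description" type="text" indexed="true" stored="false"/>
  <field name="tags" type="text" indexed="true" stored="false"/>
  <!-- catch-all search field; the three source fields are copied into it at index time -->
  <field name="text" type="text" indexed="true" stored="false" multiValued="true"/>

  <copyField source="title" dest="text"/>
  <copyField source="description" dest="text"/>
  <copyField source="tags" dest="text"/>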

At 50 requests/sec for 40 minutes, my average response time is 1235 ms over
49,430 requests.

When I run the test at 100 requests/sec for 10 minutes and then 50
requests/sec for another 10 minutes, my average response time is 1600 ms.
Don't you think that's a bit long?

Should I partition this index further, or what else should I do to make it
faster?
I have read posts from people who get just 300 ms per request on 300 GB of
partitioned index.
The query that collects all this content is quite complex, with a lot of
joined tables; maybe it would be faster if I created a CSV file (see the
sketch below)?
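
As a hedged aside on the CSV idea: Solr 1.3 can bulk-load CSV through a CSV
update handler. If I remember the example solrconfig.xml correctly, it is
registered along these lines (the handler name and class are from memory,
so treat them as an assumption and check your own solrconfig.xml):

  <!-- assumed registration of the CSV loader in solrconfig.xml; verify the class name in your install -->
  <requestHandler name="/update/csv" class="solr.CSVRequestHandler" startup="lazy" />

The idea would be to dump the MySQL join once into a CSV file with id and
text columns and POST that file to /update/csv, instead of running the
complex query per document.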

The server I'm using for the test has 8 GB of memory.
4 CPUs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
Tomcat 5.5: -Xms2000m -Xmx4000m
Solr 1.3.

What can I change to make it more performant? Memory, indexing ...?
Could the problem come from my query to the MySQL database, which joins too
many tables?

Thanks a lot for your help,
Johanna




Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by sunnyfr <jo...@gmail.com>.
Hi,

At around 50 threads/sec the requests come back with "No read Solr server
available". The GC seems to be quite busy, but I didn't get an OOM error.
I would love some advice.

Thanks a lot 

Details:
8 GB of memory
4 CPUs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
Solr 1.3
# Arguments to pass to the Java virtual machine (JVM).
JAVA_OPTS="-Xms1000m -Xmx4000m -XX:+UseParallelGC
-XX:+HeapDumpOnOutOfMemoryError -Xloggc:gc.log -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps"
5.3 GB of data with 8.2M documents.

Thanks a lot for the help





Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by sunnyfr <jo...@gmail.com>.
Actually, I still get this error: "No read Solr server available".





Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by sunnyfr <jo...@gmail.com>.
OK, sorry, I just added the -XX:+UseParallelGC parameter and it no longer
seems to go OOM.






Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by sunnyfr <jo...@gmail.com>.
Actually, I just noticed that a lot of requests didn't bring back a correct
answer but "No read Solr server available", so my JMeter didn't count that
as an error. It is obviously out of memory, and a gc.log file is created with:
0.054: [GC [PSYoungGen: 5121K->256K(298688K)] 5121K->256K(981376K),
0.0020630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
0.056: [Full GC (System) [PSYoungGen: 256K->0K(298688K)] [PSOldGen:
0K->180K(682688K)] 256K->180K(981376K) [PSPermGen: 3002K->3002K(21248K)],
0.0055170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]

So far my Tomcat 5.5 startup file is configured like this:
JAVA_OPTS="-Xms1000m -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError
-Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"


Thanks for your help





Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by Shalin Shekhar Mangar <sh...@gmail.com>.
On Thu, Dec 11, 2008 at 5:56 PM, sunnyfr <jo...@gmail.com> wrote:

>
> So according to you and everything explained in my post, I did my best to
> optimize it?
> Yes, these are unique queries. I will try it again with caching activated.
>

If you run only unique queries, it is not a very realistic test. Turn on
caching and try running queries from an old access log.


>
> What do you mean by "hit the file system"?


Solr/Lucene has to access the file system to load the results. To avoid this
access and processing, there are many caches in Solr. This makes Solr
faster.
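
For context, Solr's caches are configured in solrconfig.xml. A minimal
sketch of that section follows; the sizes below are illustrative defaults,
not tuned recommendations:

  <!-- caches results of filter queries, query result windows, and stored documents -->
  <filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
  <documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>

Hit ratios for these caches show up on the admin statistics page, which is a
quick way to check whether a test run is actually exercising them.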

-- 
Regards,
Shalin Shekhar Mangar.

Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by sunnyfr <jo...@gmail.com>.
So according to you and everything explained in my post, I did my best to
optimize it?
Yes, these are unique queries. I will try it again with caching activated.

What do you mean by "hit the file system"?
Thanks a lot






Re: Make it more performant - Solr 1.3 - 1200 ms response time.

Posted by Shalin Shekhar Mangar <sh...@gmail.com>.
Is each of those queries unique?

First-time queries are slower. They are cached by Solr, and the same query
run again will return results very quickly because it won't need to hit the
file system.
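
A related note: first-query slowness can also be reduced by warming the
searcher in solrconfig.xml. A minimal sketch, where the warming query itself
is only a placeholder and should be replaced by something representative of
real traffic:

  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <!-- placeholder warming query; pick common queries from your access log -->
      <lst> <str name="q">some common terms</str> <str name="start">0</str> <str name="rows">10</str> </lst>
    </arr>
  </listener>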



-- 
Regards,
Shalin Shekhar Mangar.