Posted to solr-user@lucene.apache.org by Gian Maria Ricci - aka Alkampfer <al...@nablasoft.com> on 2016/01/15 10:43:32 UTC

Speculation on Memory needed to efficiently run a Solr Instance.

Hi, 

 

When it comes to calculating how much RAM a Solr instance needs to run with
good performance, I know that it is something of an art, but I'm looking for
a general "formula" to get at least a good starting point.

 

Apart from the RAM devoted to the Java heap, which depends strongly on how I
configure caches and on the distribution of queries in my system, I'm
particularly interested in the amount of RAM to leave to the operating system
for its file cache.

 

Suppose I have an index of 51 GB. Clearly, leaving that amount of RAM to the
OS is the best approach, so that all index files can be cached in memory by
the OS and I can achieve maximum speed.

 

But if I look at the details of the index, in this particular example I see
that the biggest file has the .fdt extension; it holds the stored fields for
the documents, so it affects retrieval of document data, not the actual
search process. Since this file is 24 GB, it is almost half of the whole
index.

 

My question is: is it safe to assume that a good starting point for the
amount of RAM to leave to the OS is the size of the index less the size of
the .fdt file, since that file matters less for the search process?
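
To make that concrete, here is a minimal sketch of how I measured the
per-extension breakdown myself (the index path is just a placeholder and
needs to point at the core's data/index directory):

    import os
    from collections import defaultdict

    # Placeholder location of the core's Lucene index; adjust to your install.
    INDEX_DIR = "/var/solr/data/mycore/data/index"

    sizes = defaultdict(int)
    for name in os.listdir(INDEX_DIR):
        path = os.path.join(INDEX_DIR, name)
        if os.path.isfile(path):
            # Group by extension (.fdt, .tim, .doc, ...); segments_N has none.
            ext = os.path.splitext(name)[1] or name
            sizes[ext] += os.path.getsize(path)

    gb = 1024.0 ** 3
    total = sum(sizes.values())
    for ext, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print("%-12s %8.2f GB" % (ext, size / gb))
    print("%-12s %8.2f GB" % ("total", total / gb))
    print("%-12s %8.2f GB" % ("total - .fdt", (total - sizes.get(".fdt", 0)) / gb))

With my numbers this prints roughly 51 GB in total and about 27 GB once the
24 GB .fdt file is subtracted, which is exactly the starting point I am
asking about.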

 

Are there any particular settings at OS level (CentOS Linux) to get maximum
benefit from the OS file cache? (The documentation at
https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings
does not have any information related to OS configuration.) Elasticsearch
(https://www.elastic.co/guide/en/elasticsearch/reference/1.4/setup-configuration.html)
generally has some suggestions such as using mlockall, disabling swap, etc.;
I wonder if there are similar suggestions for Solr.
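
For what it's worth, the only things I could come up with myself are borrowed
from that Elasticsearch guidance, so please treat them as guesses rather than
Solr recommendations:

    # keep the kernel from swapping the JVM out under memory pressure
    sysctl -w vm.swappiness=1      # or disable swap entirely with: swapoff -a

    # /etc/security/limits.conf - raise open file limits for the user running Solr
    solr    soft    nofile    65536
    solr    hard    nofile    65536

Is this the right direction, or is there an "official" list somewhere?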

 

Many thanks for all the great help you are giving me in this mailing list. 

 

--
Gian Maria Ricci
Cell: +39 320 0136949

 <http://mvp.microsoft.com/en-us/mvp/Gian%20Maria%20Ricci-4025635>
<http://www.linkedin.com/in/gianmariaricci>
<https://twitter.com/alkampfer>   <http://feeds.feedburner.com/AlkampferEng>


 


Re: Speculation on Memory needed to efficiently run a Solr Instance.

Posted by Erick Erickson <er...@gmail.com>.
And to make matters worse, much worse (actually, better)...

See: https://issues.apache.org/jira/browse/SOLR-8220

That ticket (and there will be related ones) is about returning
data from DocValues fields rather than from the stored data
in some situations. Which means it will soon (I hope) be
entirely possible to not have an .fdt file at all. There are some
caveats to that approach, but it can completely bypass the
read-from-disk, decompress, return the data process.

Do note, however, that analyzed text can't be docValues, so this will be
suitable only for string, numeric, and similar fields.
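
Once that lands, in schema terms it would just mean declaring the fields you
want returned as docValues rather than stored, something along these lines
(field names are made up, types are the string/Trie types from the default
schema):

    <!-- works for string/numeric/date fields, not for analyzed text -->
    <field name="price"    type="long"   indexed="true" stored="false" docValues="true"/>
    <field name="category" type="string" indexed="true" stored="false" docValues="true"/>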

Best,
Erick

On Fri, Jan 15, 2016 at 2:56 AM, Gian Maria Ricci - aka Alkampfer
<al...@nablasoft.com> wrote:
> THanks a lot I'll have a look to Sematext SPM.
>
> Actually the index is not static, but the number of new documents will be
> small and probably they will be indexed during the night, so I'm not
> expecting too much problem from merge factor. We can index new document
> during the night and then optimize the index. (during night there are no
> searches).
>
> --
> Gian Maria Ricci
> Cell: +39 320 0136949
>
>
>
> -----Original Message-----
> From: Emir Arnautovic [mailto:emir.arnautovic@sematext.com]
> Sent: venerdì 15 gennaio 2016 11:06
> To: solr-user@lucene.apache.org
> Subject: Re: Speculation on Memory needed to efficiently run a Solr Instance.
>
> Hi,
> OS does not care much about search v.s. retrieve so amount of RAM needed for
> file caches would depend on your index usage patterns. If you are not
> retrieving stored fields much and most/all results are only
> id+score, than it can be assumed that you can go with less RAM than
> actual index size. In such case you can question if you need stored fields
> in index. Also if your index/usage pattern is such that only small subset of
> documents is retrieved with stored fields, than it can also be assumed it
> will never need to cache entire fdt file.
> One thing that you forgot (unless you index is static) is segments merging -
> in worst case system will have two "copies" of index and having extra memory
> can help in such cases.
> The best approach is to use some tool and monitor IO and memory metrics.
> One such tool is Sematext's SPM (http://sematext.com/spm) where you can see
> metrics for both system and SOLR.
>
> Thanks,
> Emir
>
> On 15.01.2016 10:43, Gian Maria Ricci - aka Alkampfer wrote:
>>
>> Hi,
>>
>> When it is time to calculate how much RAM a solr instance needs to run
>> with good performance, I know that it is some form of art, but I’m
>> looking at a general “formula” to have at least one good starting point.
>>
>> Apart the RAM devoted to Java HEAP, that is strongly dependant on how
>> I configure caches, and the distribution of queries in my system, I’m
>> particularly interested in the amount of RAM to leave to operating
>> system to use File Cache.
>>
>> Suppose I have an index of 51 Gb of dimension, clearly having that
>> amount of ram devoted to the OS is the best approach, so all index
>> files can be cached into memory by the OS, thus I can achieve maximum
>> speed.
>>
>> But if I look at the detail of the index, in this particular example I
>> see that the bigger file has .fdt extension, it is the stored field
>> for the documents, so it affects retrieval of document data, not the
>> real search process. Since this file is 24 GB of size, it is almost
>> half of the space of the index.
>>
>> My question is: it could be safe to assume that a good starting point
>> for the amount of RAM to leave to the OS is the dimension of the index
>> less the dimension of the .fdt file because it has less importance in
>> the search process?
>>
>> Are there any particular setting at OS level (CentOS linux) to have
>> maximum benefit from OS file cache? (documentation at
>> https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings
>> does not have any information related to OS configuration). Elasticsearch
>> (https://www.elastic.co/guide/en/elasticsearch/reference/1.4/setup-con
>> figuration.html) generally have some suggestions such as using
>> mlockall, disable swap etc etc, I wonder if there are similar
>> suggestions for solr.
>>
>> Many thanks for all the great help you are giving me in this mailing
>> list.
>>
>> --
>> Gian Maria Ricci
>> Cell: +39 320 0136949
>>
>>
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management Solr
> & Elasticsearch Support * http://sematext.com/
>

RE: Speculation on Memory needed to efficiently run a Solr Instance.

Posted by Gian Maria Ricci - aka Alkampfer <al...@nablasoft.com>.
Thanks a lot, I'll have a look at Sematext SPM.

Actually the index is not static, but the number of new documents will be
small and they will probably be indexed during the night, so I'm not
expecting too many problems from merging. We can index new documents
during the night and then optimize the index (during the night there are no
searches).
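
We will probably just trigger the optimize with a plain update call at the
end of the nightly batch, something like this (host and core name are
placeholders):

    curl "http://localhost:8983/solr/mycore/update?optimize=true&waitSearcher=true"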

--
Gian Maria Ricci
Cell: +39 320 0136949
    


-----Original Message-----
From: Emir Arnautovic [mailto:emir.arnautovic@sematext.com] 
Sent: venerdì 15 gennaio 2016 11:06
To: solr-user@lucene.apache.org
Subject: Re: Speculation on Memory needed to efficiently run a Solr Instance.

Hi,
OS does not care much about search v.s. retrieve so amount of RAM needed for
file caches would depend on your index usage patterns. If you are not
retrieving stored fields much and most/all results are only 
id+score, than it can be assumed that you can go with less RAM than
actual index size. In such case you can question if you need stored fields
in index. Also if your index/usage pattern is such that only small subset of
documents is retrieved with stored fields, than it can also be assumed it
will never need to cache entire fdt file.
One thing that you forgot (unless you index is static) is segments merging -
in worst case system will have two "copies" of index and having extra memory
can help in such cases.
The best approach is to use some tool and monitor IO and memory metrics. 
One such tool is Sematext's SPM (http://sematext.com/spm) where you can see
metrics for both system and SOLR.

Thanks,
Emir

On 15.01.2016 10:43, Gian Maria Ricci - aka Alkampfer wrote:
>
> Hi,
>
> When it is time to calculate how much RAM a solr instance needs to run 
> with good performance, I know that it is some form of art, but I’m 
> looking at a general “formula” to have at least one good starting point.
>
> Apart the RAM devoted to Java HEAP, that is strongly dependant on how 
> I configure caches, and the distribution of queries in my system, I’m 
> particularly interested in the amount of RAM to leave to operating 
> system to use File Cache.
>
> Suppose I have an index of 51 Gb of dimension, clearly having that 
> amount of ram devoted to the OS is the best approach, so all index 
> files can be cached into memory by the OS, thus I can achieve maximum 
> speed.
>
> But if I look at the detail of the index, in this particular example I 
> see that the bigger file has .fdt extension, it is the stored field 
> for the documents, so it affects retrieval of document data, not the
> real search process. Since this file is 24 GB of size, it is almost
> half of the space of the index.
>
> My question is: it could be safe to assume that a good starting point 
> for the amount of RAM to leave to the OS is the dimension of the index 
> less the dimension of the .fdt file because it has less importance in 
> the search process?
>
> Are there any particular setting at OS level (CentOS linux) to have 
> maximum benefit from OS file cache? (documentation at 
> https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings
> does not have any information related to OS configuration). Elasticsearch
> (https://www.elastic.co/guide/en/elasticsearch/reference/1.4/setup-con
> figuration.html) generally have some suggestions such as using 
> mlockall, disable swap etc etc, I wonder if there are similar 
> suggestions for solr.
>
> Many thanks for all the great help you are giving me in this mailing 
> list.
>
> --
> Gian Maria Ricci
> Cell: +39 320 0136949
>
>

--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management Solr
& Elasticsearch Support * http://sematext.com/


Re: Speculation on Memory needed to efficiently run a Solr Instance.

Posted by Emir Arnautovic <em...@sematext.com>.
Hi,
The OS does not care much about search vs. retrieve, so the amount of RAM
needed for file caches depends on your index usage patterns. If you are not
retrieving stored fields much and most/all results are only id+score, then
it can be assumed that you can go with less RAM than the actual index size.
In such a case you can question whether you need stored fields in the index
at all. Also, if your index/usage pattern is such that only a small subset
of documents is retrieved with stored fields, then it can also be assumed
the OS will never need to cache the entire .fdt file.
One thing that you forgot (unless your index is static) is segment merging -
in the worst case the system will have two "copies" of the index, and having
extra memory can help in such cases.
The best approach is to use some tool to monitor IO and memory metrics.
One such tool is Sematext's SPM (http://sematext.com/spm), where you can
see metrics for both the system and Solr.
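
Even without a dedicated tool you can get a first impression from the
standard Linux utilities (a rough sketch; on CentOS iostat comes from the
sysstat package):

    free -m        # how much memory is actually being used as page cache
    vmstat 5       # watch the si/so columns - sustained swapping is a bad sign
    iostat -x 5    # high %util/await on the index disk usually means the
                   # index does not fit in the file cache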

Thanks,
Emir

On 15.01.2016 10:43, Gian Maria Ricci - aka Alkampfer wrote:
>
> Hi,
>
> When it is time to calculate how much RAM a solr instance needs to run 
> with good performance, I know that it is some form of art, but I’m 
> looking at a general “formula” to have at least one good starting point.
>
> Apart the RAM devoted to Java HEAP, that is strongly dependant on how 
> I configure caches, and the distribution of queries in my system, I’m 
> particularly interested in the amount of RAM to leave to operating 
> system to use File Cache.
>
> Suppose I have an index of 51 Gb of dimension, clearly having that 
> amount of ram devoted to the OS is the best approach, so all index 
> files can be cached into memory by the OS, thus I can achieve maximum 
> speed.
>
> But if I look at the detail of the index, in this particular example I 
> see that the bigger file has .fdt extension, it is the stored field 
> for the documents, so it affects retrieval of document data, not the 
> real search process. Since this file is 24 GB of size, it is almost 
> half of the space of the index.
>
> My question is: it could be safe to assume that a good starting point 
> for the amount of RAM to leave to the OS is the dimension of the index 
> less the dimension of the .fdt file because it has less importance in 
> the search process?
>
> Are there any particular setting at OS level (CentOS linux) to have 
> maximum benefit from OS file cache? (documentation at 
> https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings
> does not have any information related to OS configuration). Elasticsearch 
> (https://www.elastic.co/guide/en/elasticsearch/reference/1.4/setup-configuration.html) 
> generally have some suggestions such as using mlockall, disable swap 
> etc etc, I wonder if there are similar suggestions for solr.
>
> Many thanks for all the great help you are giving me in this mailing 
> list.
>
> --
> Gian Maria Ricci
> Cell: +39 320 0136949
>
>

-- 
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


Re: Speculation on Memory needed to efficiently run a Solr Instance.

Posted by Toke Eskildsen <te...@statsbiblioteket.dk>.
Jack Krupansky <ja...@gmail.com> wrote:

> Again to be clear, if you really do need the best/minimal overall query
> latency, your best bet is to have sufficient system memory to fully cache
> the entire index. If you don't actually need minimal latency, then of
> course you can feel free to give up some of that RAM and accept higher
> latency.

This bears repeating and I wish it would be added each time someone presents the "free cache = index size" rule of thumb. Thank you for stating it so clearly, Jack.

- Toke Eskildsen

Re: Speculation on Memory needed to efficiently run a Solr Instance.

Posted by Jack Krupansky <ja...@gmail.com>.
Personally, I'll continue to recommend that the ideal goal is to fully
cache the entire Lucene index in system memory, as well as doing a proof of
concept implementation to validate actual performance for your actual data.
You can do a POC with a small fraction of your full data, like 15% or even
10%, and then it's fairly safe to simply multiply those numbers to get the
RAM needed for the full 100% of your data (or even 120% to allow for modest
growth).
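
To put some made-up numbers on that (purely illustrative, using the 51 GB
index from the original mail):

    index a 15% sample        -> about 0.15 * 51 GB = ~7.7 GB of index
    measure what it needs     -> say it runs well with 8 GB of OS cache (made up)
    scale to 100%             -> 8 GB / 0.15 = ~53 GB of cache
    allow ~20% growth         -> 53 GB * 1.2 = ~64 GB of cache, plus the JVM heap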

Be careful about distinguishing search and query - sure, only a subset of
the data is needed to find the matching documents, but then the stored data
must be fetched to return the query results (search/lookup vs. query
results.) If the stored values are not also cached, you will increase the
latency of your overall query (returning results) even if the
search/match/lookup was reasonably fast.

So, the model is to prototype with a measured subset of your data, see how
the latency and system memory usage work out, and then scale that number up
for total memory requirement.

Again to be clear, if you really do need the best/minimal overall query
latency, your best bet is to have sufficient system memory to fully cache
the entire index. If you don't actually need minimal latency, then of
course you can feel free to give up some of that RAM and accept higher
latency.



-- Jack Krupansky

On Fri, Jan 15, 2016 at 4:43 AM, Gian Maria Ricci - aka Alkampfer <
alkampfer@nablasoft.com> wrote:

> Hi,
>
>
>
> When it is time to calculate how much RAM a solr instance needs to run
> with good performance, I know that it is some form of art, but I’m looking
> at a general “formula” to have at least one good starting point.
>
>
>
> Apart the RAM devoted to Java HEAP, that is strongly dependant on how I
> configure caches, and the distribution of queries in my system, I’m
> particularly interested in the amount of RAM to leave to operating system
> to use File Cache.
>
>
>
> Suppose I have an index of 51 Gb of dimension, clearly having that amount
> of ram devoted to the OS is the best approach, so all index files can be
> cached into memory by the OS, thus I can achieve maximum speed.
>
>
>
> But if I look at the detail of the index, in this particular example I see
> that the bigger file has .fdt extension, it is the stored field for the
> documents, so it affects retrieval of document data, not the real search
> process. Since this file is 24 GB of size, it is almost half of the space
> of the index.
>
>
>
> My question is: it could be safe to assume that a good starting point for
> the amount of RAM to leave to the OS is the dimension of the index less the
> dimension of the .fdt file because it has less importance in the search
> process?
>
>
>
> Are there any particular setting at OS level (CentOS linux) to have
> maximum benefit from OS file cache? (documentation at
> https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings
> does not have any information related to OS configuration). Elasticsearch (
> https://www.elastic.co/guide/en/elasticsearch/reference/1.4/setup-configuration.html)
> generally have some suggestions such as using mlockall, disable swap etc
> etc, I wonder if there are similar suggestions for solr.
>
>
>
> Many thanks for all the great help you are giving me in this mailing list.
>
>
>
> --
> Gian Maria Ricci
> Cell: +39 320 0136949
>
>
>
>