Posted to commits@sdap.apache.org by le...@apache.org on 2017/10/27 22:41:38 UTC

[01/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Repository: incubator-sdap-edge
Updated Branches:
  refs/heads/master [created] 53351bf3a


http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.1.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.1.0.apt b/src/site/apt/release/index-3.1.0.apt
new file mode 100644
index 0000000..e242ba0
--- /dev/null
+++ b/src/site/apt/release/index-3.1.0.apt
@@ -0,0 +1,81 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.1.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.1.0
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * ISO 19115-2 support for dataset.
+
+   * Spatial search for granules (bounding box).
+
+   * Atom feed for OpenSearch response.
+
+   * Prettify support for XML response.
+
+* Modified Capabilities
+
+   * None.
+
+* Corrected Capabilities
+
+   * None.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.1.0.html}release}} document for specific version numbers.
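+
+    As an illustration of how the Tornado and Jinja2 components above fit
+    together, here is a minimal sketch of a handler that renders a template.
+    The handler path and template name are hypothetical, not the actual
+    OCSI endpoints:
+
++---------------------------------------------------------------+
+import tornado.httpserver
+import tornado.ioloop
+import tornado.web
+from jinja2 import Environment, FileSystemLoader
+
+# Hypothetical template directory and file name.
+env = Environment(loader=FileSystemLoader('templates'))
+
+class DatasetHandler(tornado.web.RequestHandler):
+    def get(self):
+        short_name = self.get_argument('shortName')
+        self.set_header('Content-Type', 'application/xml')
+        self.write(env.get_template('dataset.xml').render(shortName=short_name))
+
+application = tornado.web.Application([(r'/ws/search/dataset', DatasetHandler)])
+
+if __name__ == '__main__':
+    tornado.httpserver.HTTPServer(application).listen(8080)
+    tornado.ioloop.IOLoop.instance().start()
++---------------------------------------------------------------+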
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.1.1.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.1.1.apt b/src/site/apt/release/index-3.1.1.apt
new file mode 100644
index 0000000..f9797fa
--- /dev/null
+++ b/src/site/apt/release/index-3.1.1.apt
@@ -0,0 +1,89 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.1.1 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.1.1
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * ISO 19115-2 support for granule.
+   
+   * GCMD DIF support for dataset.
+   
+   * Added new search parameters for dataset (instrument, satellite, format, status, processingLevel).
+
+* Modified Capabilities
+
+   * Updated ISO 19115-2 response for dataset.
+   
+   * Changed search parameters (keyword, startTime, endTime, numberOfResults, boundingBox).
+   
+   * Date-time parameters take ISO 8601 format (see the example after this list).
+
+   * Added Connection: close header to HTTP request.
+
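+  The search parameters above can be exercised with a query string such as
+  <<<?keyword=sst&startTime=2011-01-01T00:00:00Z&endTime=2011-01-31T23:59:59Z&numberOfResults=10>>>
+  (hypothetical values; the parameter names are those listed above). A minimal
+  sketch of parsing ISO 8601 date-time values with the python-dateutil
+  dependency listed under System Requirements:
+
++---------------------------------------------------------------+
+from dateutil import parser
+
+# ISO 8601 values of the kind accepted by startTime/endTime.
+start = parser.parse('2011-01-01T00:00:00Z')
+end = parser.parse('2011-01-31T23:59:59Z')
+assert start < end
+print start.isoformat()
++---------------------------------------------------------------+
+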
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1025}1025}} - Granule search result shows online link.
+   
+   * Search returns only OPEN, PREVIEW, RETIRED, and SIMULATED datasets and granules. This fix has been merged into the 3.1.0 release.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.1.1.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.2.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.2.0.apt b/src/site/apt/release/index-3.2.0.apt
new file mode 100644
index 0000000..2b1558d
--- /dev/null
+++ b/src/site/apt/release/index-3.2.0.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.2.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.2.0
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1083}Trac-1083}} - FGDC creation support.
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1084}Trac-1084}} - Support for persistent ID.
+
+   * {{{https://podaac-cm/trac/ticket/1094}Trac-1094}} - Case-insensitive search on the format, status, and processingLevel parameters.
+
+* Corrected Capabilities
+
+   * None
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.2.0.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.2.1.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.2.1.apt b/src/site/apt/release/index-3.2.1.apt
new file mode 100644
index 0000000..d274df0
--- /dev/null
+++ b/src/site/apt/release/index-3.2.1.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.2.1 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.2.1
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1144}Trac-1144}} - Return only granule name and link to granule metadata in OpenSearch response for bounding box search.
+
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1145}Trac-1145}} - Sort granules in FGDC and ISO response by start time to prevent a Solr out-of-memory error.
+
+   * {{{https://podaac-cm/trac/ticket/1146}Trac-1146}} - Update DTD link in FGDC response to fix Internet Explorer error.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.2.1.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.2.2.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.2.2.apt b/src/site/apt/release/index-3.2.2.apt
new file mode 100644
index 0000000..12584b1
--- /dev/null
+++ b/src/site/apt/release/index-3.2.2.apt
@@ -0,0 +1,81 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.2.2 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.2.2
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1214}Trac-1214}} - Capped the number of results in the OCSI response.
+
+   * {{{https://podaac-cm/trac/ticket/1231}Trac-1231}} - Modified OpenSearch response.
+
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1174}Trac-1174}} - Fixed dataset temporal search to return all datasets that have total or partial overlap (see the sketch after this list).
+
+   * {{{https://podaac-cm/trac/ticket/1233}Trac-1233}} - Fixed handling of datasets missing an optional GCMD DIF field so that they no longer return a 404 error page.
+
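+  A dataset matches a temporal query when the two time ranges overlap
+  totally or partially; a minimal sketch of that test (illustrative only,
+  not the shipped implementation):
+
++---------------------------------------------------------------+
+def overlaps(ds_start, ds_stop, q_start, q_stop):
+    # Total or partial overlap: each range starts before the other ends.
+    return ds_start <= q_stop and q_start <= ds_stop
++---------------------------------------------------------------+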
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.2.2.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.3.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.3.0.apt b/src/site/apt/release/index-3.3.0.apt
new file mode 100644
index 0000000..9b76618
--- /dev/null
+++ b/src/site/apt/release/index-3.3.0.apt
@@ -0,0 +1,81 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.3.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.3.0
+
+  This release of the OCSI Program Set is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1245}Trac-1245}} - Revised OpenSearch response and interface.
+
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1268}Trac-1268}} - Fixed evaluation of OpenSearch temporal query.
+
+   * {{{https://podaac-cm/trac/ticket/1283}Trac-1283}} - Fixed blocking issue when making requests to external data services.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.3.0.html}release}} document for specific version numbers.
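+
+    PycURL is new to this release's environment and supports the requests to
+    external data services corrected above. A minimal sketch of fetching an
+    external resource through libcurl with explicit timeouts (the URL is
+    hypothetical; this is illustrative, not the shipped code):
+
++---------------------------------------------------------------+
+import StringIO  # Python 2, per the Python 2.6.x/2.7.x requirement above
+
+import pycurl
+
+buf = StringIO.StringIO()
+curl = pycurl.Curl()
+curl.setopt(pycurl.URL, 'http://example.org/external/service')  # hypothetical
+curl.setopt(pycurl.WRITEFUNCTION, buf.write)
+curl.setopt(pycurl.CONNECTTIMEOUT, 5)  # bound connection time
+curl.setopt(pycurl.TIMEOUT, 30)        # bound total request time
+curl.perform()
+curl.close()
+print buf.getvalue()
++---------------------------------------------------------------+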
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.0.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.0.0.apt b/src/site/apt/release/index-4.0.0.apt
new file mode 100644
index 0000000..f22e6bd
--- /dev/null
+++ b/src/site/apt/release/index-4.0.0.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.0.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.0.0
+
+  This release of the OCSI Program Set is a component of the integrated release ({{{../../cm/release/index-4.0.0.html}4.0.0}}) of the PO.DAAC System. This release is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * None
+
+* Corrected Capabilities
+
+   * None
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+    See the system-level {{{../../cm/release/index-4.0.0.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.1.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.1.0.apt b/src/site/apt/release/index-4.1.0.apt
new file mode 100644
index 0000000..441ed8f
--- /dev/null
+++ b/src/site/apt/release/index-4.1.0.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.1.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.1.0
+
+  This release of the OCSI Program Set is a component of the integrated release ({{{../../cm/release/index-4.1.0.html}4.1.0}}) of the PO.DAAC System. This release is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1353}Trac-1353}} - Modified the OpenSearch response to return a Granule Search link only if the dataset has granules, and a Dataset Information link only if the dataset can be viewed via the web portal.
+
+* Corrected Capabilities
+
+   * None
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+    See the system-level {{{../../cm/release/index-4.1.0.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.2.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.2.0.apt b/src/site/apt/release/index-4.2.0.apt
new file mode 100644
index 0000000..8b5c3a4
--- /dev/null
+++ b/src/site/apt/release/index-4.2.0.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.2.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.2.0
+
+  This release of the OCSI Program Set is a component of the integrated release ({{{../../cm/release/index-4.2.0.html}4.2.0}}) of the PO.DAAC System. This release is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * None
+
+* Corrected Capabilities
+
+   * None
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+    See the system-level {{{../../cm/release/index-4.2.0.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.3.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.3.0.apt b/src/site/apt/release/index-4.3.0.apt
new file mode 100644
index 0000000..3acbc4a
--- /dev/null
+++ b/src/site/apt/release/index-4.3.0.apt
@@ -0,0 +1,77 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.3.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.3.0
+
+  This release of the OCSI Program Set is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1508}Trac-1508}} - Remove "N/A" and "none" entries from OCSI GCMD interface.
+
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1590}Trac-1590}} - Fixed an output error on the ampersand symbol.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.4.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.4.0.apt b/src/site/apt/release/index-4.4.0.apt
new file mode 100644
index 0000000..c713a97
--- /dev/null
+++ b/src/site/apt/release/index-4.4.0.apt
@@ -0,0 +1,78 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.4.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.4.0
+
+  This release of the OCSI Program Set is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1550}Trac-1550}} - OCSI Datacasting Feed Creation
+   * {{{https://podaac-cm/trac/ticket/1614}Trac-1614}} - OCSI datacasting feed shall include FTP and OPeNDAP links as custom elements
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1572}Trac-1572}} - Web services should return the same results as the PO.DAAC portal
+
+* Corrected Capabilities
+
+   * None
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-4.4.1.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-4.4.1.apt b/src/site/apt/release/index-4.4.1.apt
new file mode 100644
index 0000000..c940ffc
--- /dev/null
+++ b/src/site/apt/release/index-4.4.1.apt
@@ -0,0 +1,79 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 4.4.1 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 4.4.1
+
+  This release of the OCSI Program Set is intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified, and corrected capabilities that comprise this release of the OCSI Program Set. Rather than listing the capabilities by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * None
+
+* Modified Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1869}Trac-1869}} - Modified dataset temporal range search to return true if there is any overlap
+
+* Corrected Capabilities
+
+   * {{{https://podaac-cm/trac/ticket/1790}Trac-1790}} - Fixed mapping for GCMD DIF Data_Center
+
+   * {{{https://podaac-cm/trac/ticket/1827}Trac-1827}} - Added Personnel to GCMD DIF with DIF AUTHOR Role
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. Rather than listing the liens by requirement, this document summarizes them. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has been tested only with those versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.x
+
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.2.1
+
+    * Jinja2 2.5.5 (Template engine for Python)
+
+    * cx_Oracle 5.1 (Python DB API implementation for Oracle)
+
+    * python-dateutil 1.5 (Extensions to the standard Python 2.3+ datetime module)
+
+    * PycURL 7.19.0 (Python interface to libcurl)
+
+    []
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index.apt b/src/site/apt/release/index.apt
new file mode 100644
index 0000000..afff6fa
--- /dev/null
+++ b/src/site/apt/release/index.apt
@@ -0,0 +1,101 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id: $
+
+  ---
+  Release Description
+  ---
+  Atsuya Takagi
+  ---
+  
+Release Description
+
+  The Release Description Documents (RDDs) detail the new and modified capabilities that comprise a particular OCSI Program Set release.
+
+* Release History
+
+  * Version: {{{./index-4.4.1.html}4.4.1}}
+
+    * Date: 2013-10-24
+
+    * Description: This release of the OCSI Program Set is intended as an operational release.
+
+  * Version: {{{./index-4.4.0.html}4.4.0}}
+
+    * Date: 2013-02-13
+
+    * Description: This release of the OCSI Program Set is intended as an operational release.
+
+  * Version: {{{./index-4.3.0.html}4.3.0}}
+
+    * Date: 2012-12-03
+
+    * Description: This release of the OCSI Program Set is intended as an operational release.
+
+  * Version: {{{./index-4.2.0.html}4.2.0}}
+
+    * Date: 2012-08-31
+
+    * Description: This release of the OCSI Program Set is a component of the integrated release ({{{./index-4.2.0.html}4.2.0}}) of the PO.DAAC System.
+
+  * Version: {{{./index-4.1.0.html}4.1.0}}
+
+    * Date: 2012-06-08
+
+    * Description: This release of the OCSI Program Set is a component of the integrated release ({{{./index-4.1.0.html}4.1.0}}) of the PO.DAAC System.
+
+  * Version: {{{./index-4.0.0.html}4.0.0}}
+
+    * Date: 2012-03-14
+
+    * Description: This release of the OCSI Program Set is a component of the integrated release ({{{./index-4.0.0.html}4.0.0}}) of the PO.DAAC System.
+
+  * Version: {{{./index-3.3.0.html}3.3.0}}
+
+    * Date: 2012-01-20
+
+    * Description: This release modifies the OpenSearch response and interface to enable integration with the L2 Granule Search Service.
+
+  * Version: {{{./index-3.2.2.html}3.2.2}}
+
+    * Date: 2011-11-23
+
+    * Description: This release modifies the OpenSearch response, changes how the dataset temporal query is evaluated, and caps the number of results.
+
+  * Version: {{{./index-3.2.1.html}3.2.1}}
+
+    * Date: 2011-09-30
+
+    * Description: This release modifies the OpenSearch response for bounding box search, updates the FGDC response to conform to its DTD, and changes the granule sort order in the FGDC and ISO responses.
+
+  * Version: {{{./index-3.2.0.html}3.2.0}}
+
+    * Date: 2011-08-19
+
+    * Description: This release adds FGDC creation support for exporting granules, support for persistent IDs, and case-insensitive search on the format, status, and processingLevel parameters.
+
+  * Version: {{{./index-3.1.1.html}3.1.1}}
+
+    * Date: 2011-06-02
+
+    * Description: This release adds ISO support for granules, GCMD DIF support for datasets, and new dataset search parameters.
+    
+  * Version: {{{./index-3.1.0.html}3.1.0}}
+
+    * Date: 2011-04-05
+
+    * Description: This release adds ISO and spatial search support.
+
+  * Version: {{{./index-3.0.0.html}3.0.0}}
+
+    * Date: 2011-01-27
+
+    * Description: This release adds OpenSearch support for granules.
+
+  * Version: {{{./index-2.2.1.html}2.2.1}}
+
+    * Date: 2010-11-15
+
+    * Description: This release is the initial release of the OCSI Program Set. It is NOT intended as an operational release.
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/resources/images/podaac_logo.jpg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/podaac_logo.jpg b/src/site/resources/images/podaac_logo.jpg
new file mode 100644
index 0000000..abb1063
Binary files /dev/null and b/src/site/resources/images/podaac_logo.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
new file mode 100644
index 0000000..b8ca679
--- /dev/null
+++ b/src/site/site.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+
+<!-- 
+  Copyright 2009, by the California Institute of Technology.
+  ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+
+  $Id: $
+-->
+
+<project name="OCSI Program Set">
+  <bannerLeft>
+    <src>images/podaac_logo.jpg</src>
+    <href>http://podaac-cm.jpl.nasa.gov/</href>
+  </bannerLeft>
+
+  <bannerRight>
+    <name>Oceanographic Common Search Interface Program Set&nbsp;&nbsp;</name>
+  </bannerRight>
+
+  <body>
+    <links>
+      <item name="OCSI" href="http://podaac-cm.jpl.nasa.gov/docs/ocsi/index.html" />
+    </links>
+
+    <menu name="Software Documentation">
+      <item name="About OCSI" href="index.html"/>
+      <item name="Release Description" href="release/index.html"/>
+      <item name="Installation" href="install/index.html"/>
+      <item name="Operation" href="operate/index.html"/>
+    </menu>
+
+    <menu ref="modules"/>
+
+    <menu ref="reports"/>
+  </body>
+</project>



[10/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
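
The file below, iso_template.xml, is a Jinja2 template. The context it
expects carries list-valued fields (hence the pervasive [0] indexing,
consistent with multivalued Solr fields). A minimal, self-contained sketch
of that rendering convention, using a one-line excerpt of the template with
a placeholder value:

    from jinja2 import Template

    # The template below reads list-valued fields, e.g.
    # granules[0]['Granule-Name'][0]; this excerpt renders the same way.
    excerpt = Template(
        '<gco:CharacterString>{{ granules[0]["Granule-Name"][0] }}'
        '</gco:CharacterString>')
    print excerpt.render(granules=[{'Granule-Name': ['GRANULE_NAME.nc']}])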
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/iso/iso_template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/iso/iso_template.xml b/src/main/python/plugins/granule/iso/iso_template.xml
new file mode 100644
index 0000000..f5cd340
--- /dev/null
+++ b/src/main/python/plugins/granule/iso/iso_template.xml
@@ -0,0 +1,674 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<gmd:DS_Series xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.isotc211.org/2005/gmd http://www.ngdc.noaa.gov/metadata/published/xsd/schema.xsd" xmlns:gmd="http://www.isotc211.org/2005/gmd" xmlns:gco="http://www.isotc211.org/2005/gco" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:gml="http://www.opengis.net/gml" xmlns:gsr="http://www.isotc211.org/2005/gsr" xmlns:gss="http://www.isotc211.org/2005/gss" xmlns:gts="http://www.isotc211.org/2005/gts" xmlns:gmx="http://www.isotc211.org/2005/gmx" xmlns:gmi="http://www.isotc211.org/2005/gmi">
+{% if granules %}
+<gmd:composedOf>
+<gmd:DS_DataSet>
+<gmd:has>
+<gmi:MI_Metadata>
+<gmd:fileIdentifier>
+<gco:CharacterString>{{ granules[0]['Granule-Name'][0] }}</gco:CharacterString>
+</gmd:fileIdentifier>
+<gmd:contact xlink:href="#seriesMetadataContact"/>
+<gmd:dateStamp>
+<gco:DateTime>{{ granules[0]['Granule-CreateTimeLong'][0] }}</gco:DateTime>
+</gmd:dateStamp>
+<gmd:dataSetURI>
+<gco:CharacterString>{{ granules[0]['link'] }}</gco:CharacterString>
+</gmd:dataSetURI>
+<gmd:identificationInfo>
+<gmd:MD_DataIdentification>
+<gmd:citation>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>{{ granules[0]['Granule-Name'][0] }}</gco:CharacterString>
+</gmd:title>
+<gmd:date>
+<gmd:CI_Date>
+<gmd:date>
+<gco:DateTime>{{ granules[0]['Granule-ArchiveTimeLong'][0] }}</gco:DateTime>
+</gmd:date>
+<gmd:dateType>
+<gmd:CI_DateTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode" codeListValue="publication">publication</gmd:CI_DateTypeCode>
+</gmd:dateType>
+</gmd:CI_Date>
+</gmd:date>
+</gmd:CI_Citation>
+</gmd:citation>
+<gmd:abstract/>
+<gmd:language/>
+<gmd:extent>
+<gmd:EX_Extent>
+<gmd:geographicElement>
+<gmd:EX_GeographicBoundingBox id="swathBoundingBox">
+{% if GranuleBoundingBox %}
+<gmd:westBoundLongitude>
+<gco:Decimal>{{ GranuleBoundingBox['westernmostLongitude'] }}</gco:Decimal>
+</gmd:westBoundLongitude>
+<gmd:eastBoundLongitude>
+<gco:Decimal>{{ GranuleBoundingBox['easternmostLongitude'] }}</gco:Decimal>
+</gmd:eastBoundLongitude>
+<gmd:southBoundLatitude>
+<gco:Decimal>{{ GranuleBoundingBox['southernmostLatitude'] }}</gco:Decimal>
+</gmd:southBoundLatitude>
+<gmd:northBoundLatitude>
+<gco:Decimal>{{ GranuleBoundingBox['northernmostLatitude'] }}</gco:Decimal>
+</gmd:northBoundLatitude>
+{% else %}
+<gmd:westBoundLongitude>
+<gco:Decimal>-180.00</gco:Decimal>
+</gmd:westBoundLongitude>
+<gmd:eastBoundLongitude>
+<gco:Decimal>180.00</gco:Decimal>
+</gmd:eastBoundLongitude>
+<gmd:southBoundLatitude>
+<gco:Decimal>-90.00</gco:Decimal>
+</gmd:southBoundLatitude>
+<gmd:northBoundLatitude>
+<gco:Decimal>90.00</gco:Decimal>
+</gmd:northBoundLatitude>
+{% endif %}
+</gmd:EX_GeographicBoundingBox>
+</gmd:geographicElement>
+<gmd:temporalElement>
+<gmd:EX_TemporalExtent>
+<gmd:extent>
+<TimePeriod xmlns="http://www.opengis.net/gml/3.2" xmlns:ns1="http://www.opengis.net/gml/3.2" ns1:id="swathTemporalExtent">
+<beginPosition>{{ granules[0]['Granule-StartTimeLong'][0] }}</beginPosition>
+<endPosition>{{ granules[0]['Granule-StopTimeLong'][0] }}</endPosition>
+</TimePeriod>
+</gmd:extent>
+</gmd:EX_TemporalExtent>
+</gmd:temporalElement>
+</gmd:EX_Extent>
+</gmd:extent>
+</gmd:MD_DataIdentification>
+</gmd:identificationInfo>
+</gmi:MI_Metadata>
+</gmd:has>
+</gmd:DS_DataSet>
+</gmd:composedOf>
+{% endif %}
+{% if doc %}
+<gmd:seriesMetadata>
+<gmi:MI_Metadata id="{{ doc['Dataset-ShortName'][0] }}">
+<gmd:fileIdentifier>
+<gco:CharacterString>{{ doc['Dataset-ShortName'][0] }}</gco:CharacterString>
+</gmd:fileIdentifier>
+<gmd:language>
+<gco:CharacterString>eng</gco:CharacterString>
+</gmd:language>
+<gmd:characterSet>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterSet>
+<gmd:hierarchyLevel>
+<gmd:MD_ScopeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode" codeListValue="series">series</gmd:MD_ScopeCode>
+</gmd:hierarchyLevel>
+<gmd:contact>
+<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-FirstName'][0] }}{% if doc['DatasetContact-Contact-MiddleName'][0] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][0] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Provider-ShortName'][0] }} &gt; {{ doc['DatasetContact-Contact-Provider-LongName'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:positionName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Role'][0] }}</gco:CharacterString>
+</gmd:positionName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:phone>
+<gmd:CI_Telephone>
+<gmd:voice>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Phone'][0] }}</gco:CharacterString>
+</gmd:voice>
+<gmd:facsimile>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Fax'][0] }}</gco:CharacterString>
+</gmd:facsimile>
+</gmd:CI_Telephone>
+</gmd:phone>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:electronicMailAddress>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Email'][0] }}</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+<gmd:contactInstructions>
+<gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+</gmd:contactInstructions>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="pointOfContact">pointOfContact</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:contact>
+<gmd:dateStamp>
+<gco:Date>{{ DateStamp }}</gco:Date>
+</gmd:dateStamp>
+<gmd:metadataStandardName>
+<gco:CharacterString>ISO 19115-2 Geographic information — Metadata — Part 2: Extensions for imagery and gridded data</gco:CharacterString>
+</gmd:metadataStandardName>
+<gmd:metadataStandardVersion>
+<gco:CharacterString>ISO 19115-2:2009-02-15</gco:CharacterString>
+</gmd:metadataStandardVersion>
+<gmd:locale>
+<gmd:PT_Locale>
+<gmd:languageCode>
+<gmd:LanguageCode codeList="http://www.loc.gov/standards/iso639-2/php/English_list.php" codeListValue="eng">eng</gmd:LanguageCode>
+</gmd:languageCode>
+<gmd:country>
+<gmd:Country codeList="http://www.iso.org/iso/iso_3166-1_list_en.zip" codeListValue="US">US</gmd:Country>
+</gmd:country>
+<gmd:characterEncoding>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterEncoding>
+</gmd:PT_Locale>
+</gmd:locale>
+<gmd:metadataExtensionInfo>
+<gmd:MD_MetadataExtensionInformation>
+<gmd:extensionOnLineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://www.ngdc.noaa.gov/metadata/published/19115/GHRSST/ISO/CoverageExtensions.xml</gmd:URL>
+</gmd:linkage>
+<gmd:applicationProfile>
+<gco:CharacterString>Web Browser</gco:CharacterString>
+</gmd:applicationProfile>
+<gmd:description>
+<gco:CharacterString>A description of extensions developed at NGDC to classify coverages.</gco:CharacterString>
+</gmd:description>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode" codeListValue="information">information</gmd:CI_OnLineFunctionCode>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:extensionOnLineResource>
+</gmd:MD_MetadataExtensionInformation>
+</gmd:metadataExtensionInfo>
+<gmd:identificationInfo>
+<gmd:MD_DataIdentification id="seriesIdentification">
+<gmd:citation>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>{{ doc['Dataset-LongName'][0] }}</gco:CharacterString>
+</gmd:title>
+<gmd:alternateTitle>
+<gco:CharacterString>{{ doc['DatasetCitation-Title'][0] }}</gco:CharacterString>
+</gmd:alternateTitle>
+<gmd:date>
+<gmd:CI_Date>
+<gmd:date>
+<gco:Date>{{ DatasetCitation_ReleaseDate }}</gco:Date>
+</gmd:date>
+<gmd:dateType>
+<gmd:CI_DateTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode" codeListValue="creation">creation</gmd:CI_DateTypeCode>
+</gmd:dateType>
+</gmd:CI_Date>
+</gmd:date>
+<gmd:edition>
+<gco:CharacterString>{{ doc['DatasetCitation-Version'][0] }}</gco:CharacterString>
+</gmd:edition>
+<gmd:citedResponsibleParty>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetCitation-Creator'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="originator">originator</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:citedResponsibleParty>
+<gmd:citedResponsibleParty>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetCitation-Publisher'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:city>
+<gco:CharacterString>{{ doc['DatasetCitation-ReleasePlace'][0] }}</gco:CharacterString>
+</gmd:city>
+</gmd:CI_Address>
+</gmd:address>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="publisher">publisher</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:citedResponsibleParty>
+</gmd:CI_Citation>
+</gmd:citation>
+<gmd:abstract>
+<gco:CharacterString>{{ doc['Dataset-Description'][0] }}</gco:CharacterString>
+</gmd:abstract>
+<gmd:credit>
+<gco:CharacterString>{{ doc['DatasetCitation-CitationDetail'][0] }}</gco:CharacterString>
+</gmd:credit>
+<gmd:status>
+<gmd:MD_ProgressCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode" codeListValue="onGoing">onGoing</gmd:MD_ProgressCode>
+</gmd:status>
+<gmd:pointOfContact>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-FirstName'][0] }}{% if doc['DatasetContact-Contact-MiddleName'][0] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][0] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Provider-ShortName'][0] }} &gt; {{ doc['DatasetContact-Contact-Provider-LongName'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:phone>
+<gmd:CI_Telephone>
+<gmd:voice>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Phone'][0] }}</gco:CharacterString>
+</gmd:voice>
+<gmd:facsimile>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Fax'][0] }}</gco:CharacterString>
+</gmd:facsimile>
+</gmd:CI_Telephone>
+</gmd:phone>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:electronicMailAddress>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Email'][0] }}</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="pointOfContact">pointOfContact</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:pointOfContact>
+<gmd:resourceFormat>
+<gmd:MD_Format id="resourceFormat">
+<gmd:name>
+<gco:CharacterString>{{ doc['DatasetPolicy-DataFormat'][0] }}</gco:CharacterString>
+</gmd:name>
+<gmd:version>
+<gco:CharacterString>{{ DatasetPolicy_DataFormat_Version }}</gco:CharacterString>
+</gmd:version>
+<gmd:fileDecompressionTechnique>
+<gco:CharacterString>{{ doc['DatasetPolicy-CompressType'][0] }}</gco:CharacterString>
+</gmd:fileDecompressionTechnique>
+</gmd:MD_Format>
+</gmd:resourceFormat>
+{% for i in range(doc['DatasetParameter-Category']|count) %}
+<gmd:descriptiveKeywords>
+<gmd:MD_Keywords>
+<gmd:keyword>
+<gco:CharacterString>{{ doc['DatasetParameter-Category'][i] }} &gt; {{ doc['DatasetParameter-Topic'][i] }} &gt; {{ doc['DatasetParameter-Term'][i] }} &gt; {{ doc['DatasetParameter-Variable'][i] }}{% if doc['DatasetParameter-VariableDetail'][i] != ''  %} &gt; {{ doc['DatasetParameter-VariableDetail'][i] }}{% endif %}</gco:CharacterString>
+</gmd:keyword>
+<gmd:type>
+<gmd:MD_KeywordTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode" codeListValue="theme">theme</gmd:MD_KeywordTypeCode>
+</gmd:type>
+<gmd:thesaurusName>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>NASA/GCMD Earth Science Keywords</gco:CharacterString>
+</gmd:title>
+<gmd:date gco:nilReason="unknown"/>
+</gmd:CI_Citation>
+</gmd:thesaurusName>
+</gmd:MD_Keywords>
+</gmd:descriptiveKeywords>
+{% endfor %}
+<gmd:descriptiveKeywords>
+<gmd:MD_Keywords>
+<gmd:keyword>
+<gco:CharacterString>{{ doc['DatasetRegion-Region'][0] }}</gco:CharacterString>
+</gmd:keyword>
+<gmd:type>
+<gmd:MD_KeywordTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode" codeListValue="place">place</gmd:MD_KeywordTypeCode>
+</gmd:type>
+<gmd:thesaurusName>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+</gmd:title>
+<gmd:date gco:nilReason="unknown"/>
+</gmd:CI_Citation>
+</gmd:thesaurusName>
+</gmd:MD_Keywords>
+</gmd:descriptiveKeywords>
+<gmd:resourceConstraints>
+<gmd:MD_LegalConstraints>
+<gmd:useLimitation>
+<gco:CharacterString>{{ doc['DatasetPolicy-UseConstraint'][0] }}</gco:CharacterString>
+</gmd:useLimitation>
+<gmd:otherConstraints>
+<gco:CharacterString>{{ doc['DatasetPolicy-AccessConstraint'][0] }}</gco:CharacterString>
+</gmd:otherConstraints>
+</gmd:MD_LegalConstraints>
+</gmd:resourceConstraints>
+<gmd:spatialRepresentationType>
+<gmd:MD_SpatialRepresentationTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_SpatialRepresentationTypeCode" codeListValue="grid">grid</gmd:MD_SpatialRepresentationTypeCode>
+</gmd:spatialRepresentationType>
+<gmd:language>
+<gco:CharacterString>eng</gco:CharacterString>
+</gmd:language>
+<gmd:characterSet>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterSet>
+<gmd:extent>
+<gmd:EX_Extent id="boundingExtent">
+<gmd:geographicElement>
+<gmd:EX_GeographicBoundingBox id="boundingBox">
+<gmd:extentTypeCode>
+<gco:Boolean>true</gco:Boolean>
+</gmd:extentTypeCode>
+<gmd:westBoundLongitude>
+<gco:Decimal>{{ doc['DatasetCoverage-WestLon'][0] }}</gco:Decimal>
+</gmd:westBoundLongitude>
+<gmd:eastBoundLongitude>
+<gco:Decimal>{{ doc['DatasetCoverage-EastLon'][0] }}</gco:Decimal>
+</gmd:eastBoundLongitude>
+<gmd:southBoundLatitude>
+<gco:Decimal>{{ doc['DatasetCoverage-SouthLat'][0] }}</gco:Decimal>
+</gmd:southBoundLatitude>
+<gmd:northBoundLatitude>
+<gco:Decimal>{{ doc['DatasetCoverage-NorthLat'][0] }}</gco:Decimal>
+</gmd:northBoundLatitude>
+</gmd:EX_GeographicBoundingBox>
+</gmd:geographicElement>
+<gmd:geographicElement>
+<gmd:EX_GeographicDescription>
+<gmd:extentTypeCode>
+<gco:Boolean>true</gco:Boolean>
+</gmd:extentTypeCode>
+<gmd:geographicIdentifier>
+<gmd:MD_Identifier>
+<gmd:code/>
+</gmd:MD_Identifier>
+</gmd:geographicIdentifier>
+</gmd:EX_GeographicDescription>
+</gmd:geographicElement>
+<gmd:temporalElement>
+<gmd:EX_TemporalExtent id="temporalExtent">
+<gmd:extent>
+<TimePeriod xmlns="http://www.opengis.net/gml/3.2" xmlns:ns1="http://www.opengis.net/gml/3.2" ns1:id="timePeriod">
+<beginPosition>{{ DatasetCoverage_StartTime }}</beginPosition>
+<endPosition>{{ DatasetCoverage_StopTime }}</endPosition>
+</TimePeriod>
+</gmd:extent>
+</gmd:EX_TemporalExtent>
+</gmd:temporalElement>
+<gmd:verticalElement gco:nilReason="inapplicable"/>
+</gmd:EX_Extent>
+</gmd:extent>
+</gmd:MD_DataIdentification>
+</gmd:identificationInfo>
+<gmd:contentInfo>
+<gmi:MI_CoverageDescription id="referenceInformation">
+<gmd:attributeDescription>
+<gco:RecordType xlink:href="http://www.ghrsst.org/documents.htm?parent=475"/>
+</gmd:attributeDescription>
+<gmd:contentType>
+<gmd:MD_CoverageContentTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CoverageContentTypeCode" codeListValue="referenceInformation">referenceInformation</gmd:MD_CoverageContentTypeCode>
+</gmd:contentType>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>lat</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>float</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>lon</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>float</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>time</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>int</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+</gmi:MI_CoverageDescription>
+</gmd:contentInfo>
+<gmd:distributionInfo>
+<gmd:MD_Distribution>
+<gmd:distributionFormat xlink:href="#resourceFormat"/>
+<gmd:distributor>
+<gmd:MD_Distributor>
+<gmd:distributorContact>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>PO.DAAC User Services</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>NASA/JPL/PODAAC &gt; Physical Oceanography Distributed Active Archive Center, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:deliveryPoint>
+<gco:CharacterString>4800 Oak Grove Drive</gco:CharacterString>
+</gmd:deliveryPoint>
+<gmd:city>
+<gco:CharacterString>Pasadena</gco:CharacterString>
+</gmd:city>
+<gmd:administrativeArea>
+<gco:CharacterString>CA</gco:CharacterString>
+</gmd:administrativeArea>
+<gmd:postalCode>
+<gco:CharacterString>91109-8099</gco:CharacterString>
+</gmd:postalCode>
+<gmd:country>
+<gco:CharacterString>USA</gco:CharacterString>
+</gmd:country>
+<gmd:electronicMailAddress>
+<gco:CharacterString>podaac@podaac.jpl.nasa.gov</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://podaac.jpl.nasa.gov</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="distributor">distributor</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:distributorContact>
+</gmd:MD_Distributor>
+</gmd:distributor>
+{% for i in range(doc['DatasetResource-Path']|count) if doc['DatasetResource-Type'][i] != 'Thumbnail' %}
+<gmd:transferOptions>
+<gmd:MD_DigitalTransferOptions>
+<gmd:onLine>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetResource-Path'][i] }}</gmd:URL>
+</gmd:linkage>
+<gmd:name>
+<gco:CharacterString>{{ doc['DatasetResource-Name'][i] }}</gco:CharacterString>
+</gmd:name>
+<gmd:description>
+<gco:CharacterString>{{ doc['DatasetResource-Description'][i] }}</gco:CharacterString>
+</gmd:description>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode" codeListValue="information">information</gmd:CI_OnLineFunctionCode>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:onLine>
+</gmd:MD_DigitalTransferOptions>
+</gmd:transferOptions>
+{% endfor %}
+</gmd:MD_Distribution>
+</gmd:distributionInfo>
+<gmd:metadataMaintenance>
+<gmd:MD_MaintenanceInformation>
+<gmd:maintenanceAndUpdateFrequency>
+<gmd:MD_MaintenanceFrequencyCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_MaintenanceFrequencyCode" codeListValue="asNeeded">asNeeded</gmd:MD_MaintenanceFrequencyCode>
+</gmd:maintenanceAndUpdateFrequency>
+<gmd:maintenanceNote>
+<gco:CharacterString>Translated from GCMD DIF</gco:CharacterString>
+</gmd:maintenanceNote>
+</gmd:MD_MaintenanceInformation>
+</gmd:metadataMaintenance>
+<gmi:acquisitionInformation>
+<gmi:MI_AcquisitionInformation>
+{% for i in UniqueDatasetSensor %}
+<gmi:instrument>
+<gmi:MI_Instrument>
+<gmi:identifier>
+<gmd:MD_Identifier>
+<gmd:code>
+<gco:CharacterString>{{ doc['DatasetSource-Sensor-ShortName'][i] }} &gt; {{ doc['DatasetSource-Sensor-LongName'][i] }}</gco:CharacterString>
+</gmd:code>
+</gmd:MD_Identifier>
+</gmi:identifier>
+<gmi:type>
+<gco:CharacterString>sensor</gco:CharacterString>
+</gmi:type>
+<gmi:description>
+<gco:CharacterString>{{ doc['DatasetSource-Sensor-Description'][i] }}</gco:CharacterString>
+</gmi:description>
+</gmi:MI_Instrument>
+</gmi:instrument>
+{% endfor %}
+{% for i in UniqueDatasetSource %}
+<gmi:platform>
+<gmi:MI_Platform>
+<gmi:identifier>
+<gmd:MD_Identifier>
+<gmd:code>
+<gco:CharacterString>{{ doc['DatasetSource-Source-ShortName'][i] }} &gt; {{ doc['DatasetSource-Source-LongName'][i] }}</gco:CharacterString>
+</gmd:code>
+</gmd:MD_Identifier>
+</gmi:identifier>
+<gmi:description>
+<gco:CharacterString>{{ doc['DatasetSource-Source-Description'][i] }}</gco:CharacterString>
+</gmi:description>
+<gmi:sponsor>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetCitation-Creator'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="sponsor"/>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmi:sponsor>
+<gmi:sponsor>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>NASA/JPL/PODAAC &gt; Physical Oceanography Distributed Active Archive Center, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://podaac.jpl.nasa.gov</gmd:URL>
+</gmd:linkage>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode" codeListValue="information"/>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="sponsor"/>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmi:sponsor>
+<gmi:instrument xlink:href="{{ doc['DatasetSource-Source-ShortName'][i] }}"/>
+</gmi:MI_Platform>
+</gmi:platform>
+{% endfor %}
+</gmi:MI_AcquisitionInformation>
+</gmi:acquisitionInformation>
+</gmi:MI_Metadata>
+</gmd:seriesMetadata>
+{% endif %}
+</gmd:DS_Series>
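
The acquisition loops above iterate over UniqueDatasetSensor and UniqueDatasetSource, which are lists of indexes rather than the multi-valued fields themselves, so the writer presumably de-duplicates sensors and platforms before rendering. A minimal sketch of that preparation, assuming de-duplication keys on the short name (the writer is not part of this hunk, so the helper below is hypothetical):

    # Hypothetical pre-processing that yields the template's loop variables.
    def uniqueIndexes(values):
        # Keep the index of the first occurrence of each distinct value.
        seen = set()
        indexes = []
        for i, value in enumerate(values):
            if value not in seen:
                seen.add(value)
                indexes.append(i)
        return indexes

    # e.g. uniqueIndexes(['MODIS', 'MODIS', 'AMSR-E']) == [0, 2], so the
    # template renders one gmi:instrument per distinct sensor short name.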

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/iso/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/iso/plugin.conf b/src/main/python/plugins/granule/iso/plugin.conf
new file mode 100644
index 0000000..760017f
--- /dev/null
+++ b/src/main/python/plugins/granule/iso/plugin.conf
@@ -0,0 +1,10 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+database=podaac_dev/podaac$dev@DAACDEV
+template=iso_template.xml
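
Each plugin pairs its writer with a plugin.conf like the one above. The [section] key=value layout is standard Python ConfigParser syntax, which the writers read through self._configuration; a stand-alone read of the same file needs nothing beyond the Python 2 standard library:

    # Reading plugin.conf values directly (Python 2 standard library).
    from ConfigParser import ConfigParser

    configuration = ConfigParser()
    configuration.read('src/main/python/plugins/granule/iso/plugin.conf')
    print configuration.get('solr', 'granuleUrl')   # http://localhost:8983/solr.war/granule
    print configuration.get('service', 'template')  # iso_template.xml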

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/rss/RssWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/rss/RssWriter.py b/src/main/python/plugins/granule/rss/RssWriter.py
new file mode 100644
index 0000000..6d2cb25
--- /dev/null
+++ b/src/main/python/plugins/granule/rss/RssWriter.py
@@ -0,0 +1,21 @@
+import logging
+
+from edge.opensearch.granulerssresponse import GranuleRssResponse
+from edge.opensearch.granulewriter import GranuleWriter
+
+class RssWriter(GranuleWriter):
+    
+    def __init__(self, configFilePath):
+        super(RssWriter, self).__init__(configFilePath, [['datasetId', 'shortName']])
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = GranuleRssResponse(self._configuration.get('service', 'linkToGranule'), 
+                                      self._configuration.get('service', 'host'),
+                                      self._configuration.get('service', 'url'))
+
+        response.title = 'PO.DAAC Granule Search Results'
+        response.description = 'Search result for "'+searchText+'"'
+        response.link = searchUrl
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty) 

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/rss/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/rss/__init__.py b/src/main/python/plugins/granule/rss/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/rss/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/rss/plugin.conf b/src/main/python/plugins/granule/rss/plugin.conf
new file mode 100644
index 0000000..3f14b5f
--- /dev/null
+++ b/src/main/python/plugins/granule/rss/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+host=localhost:8890
+database=podaac_dev/podaac$dev@DAACDEV
+l2=http://biaxin.jpl.nasa.gov/ws/search/granule
+bbox=l2

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/heartbeat/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/heartbeat/__init__.py b/src/main/python/plugins/heartbeat/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/heartbeat/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/heartbeat/json/Writer.py b/src/main/python/plugins/heartbeat/json/Writer.py
new file mode 100644
index 0000000..8628f8a
--- /dev/null
+++ b/src/main/python/plugins/heartbeat/json/Writer.py
@@ -0,0 +1,25 @@
+import logging
+import json
+
+import requestresponder
+from edge.httputility import HttpUtility
+
+class Writer(requestresponder.RequestResponder):
+    url = None
+
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        self.url = self._configuration.get('solr', 'url') + "/admin/ping"
+
+    def get(self, requestHandler):
+        super(Writer, self).get(requestHandler)
+        # Ping the Solr core; onResponse renders the status as JSON.
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(self.url, self.onResponse)
+
+    def onResponse(self, response):
+        self.requestHandler.set_header("Content-Type", "application/json")
+        self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+        # Solr is considered online when the ping returned without error.
+        self.requestHandler.write(json.dumps({"online": not response.error}))
+        self.requestHandler.finish()

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/heartbeat/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/heartbeat/json/__init__.py b/src/main/python/plugins/heartbeat/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/heartbeat/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/heartbeat/json/plugin.conf b/src/main/python/plugins/heartbeat/json/plugin.conf
new file mode 100644
index 0000000..98b11c8
--- /dev/null
+++ b/src/main/python/plugins/heartbeat/json/plugin.conf
@@ -0,0 +1,2 @@
+[solr]
+url=http://localhost:8983/solr/[core]
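
The heartbeat plugin proxies Solr's ping handler and reports a single boolean. A client-side check might look like the following; the request path is an assumption, since plugin routing is configured outside these files, and [core] in the conf above must be replaced with an actual Solr core name:

    # Hypothetical heartbeat check (the /heartbeat/json path is assumed).
    import json
    import urllib2

    body = urllib2.urlopen('http://localhost:8890/heartbeat/json').read()
    print json.loads(body)['online']  # True when the Solr ping succeeds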

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/icoads/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/icoads/__init__.py b/src/main/python/plugins/icoads/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/icoads/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/icoads/json/Writer.py b/src/main/python/plugins/icoads/json/Writer.py
new file mode 100644
index 0000000..c3bf33a
--- /dev/null
+++ b/src/main/python/plugins/icoads/json/Writer.py
@@ -0,0 +1,89 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse(searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    filterQueries.append('time:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    filterQueries.append('time:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('loc:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'variable':
+                    if value.lower() == 'sss':
+                        filterQueries.append('sss:[*%20TO%20*]')
+                    elif value.lower() == 'sst':
+                        filterQueries.append('sst:[*%20TO%20*]')
+                    elif value.lower() == 'wind':
+                        filterQueries.append('wind_speed:[*%20TO%20*]')
+                elif key == "minDepth":
+                    if 'variable' in parameters:
+                        if parameters['variable'].lower() == 'sss':
+                            filterQueries.append('(sss_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20sss_depth:*))')
+                        elif parameters['variable'].lower() == 'sst':
+                            filterQueries.append('(sst_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20sst_depth:*))')
+                        elif parameters['variable'].lower() == 'wind':
+                            filterQueries.append('(wind_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20wind_depth:*))')
+                elif key == "maxDepth":
+                    if 'variable' in parameters:
+                        if parameters['variable'].lower() == 'sss':
+                            filterQueries.append('(sss_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20sss_depth:*))')
+                        elif parameters['variable'].lower() == 'sst':
+                            filterQueries.append('(sst_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20sst_depth:*))')
+                        elif parameters['variable'].lower() == 'wind':
+                            filterQueries.append('(wind_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20wind_depth:*))')
+                elif key == 'platform':
+                    if type(value) is list:
+                        filterQueries.append('platform:(' + '+OR+'.join(value) + ')')
+                    else:
+                        filterQueries.append('platform:'+value)
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        if 'stats' in parameters and parameters['stats'].lower() == 'true':
+            query += '&stats=true&stats.field={!min=true%20max=true}sss_depth&stats.field={!min=true%20max=true}sst_depth&stats.field={!min=true%20max=true}wind_depth'
+
+        if 'facet' in parameters and parameters['facet'].lower() == 'true':
+            query += '&facet=true&facet.field=platform&facet.field=device&facet.limit=-1&facet.mincount=1'
+
+        logging.debug('solr query: '+query)
+
+        return query
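
To make _constructSolrQuery concrete: a request filtered by variable and bounding box produces a query string like the one below. The values are embedded pre-encoded (%20 for spaces) because the writer assembles the raw query string itself, and the fq clause order follows Python 2 dict iteration order, so it may vary:

    # parameters = {'variable': 'sst', 'bbox': '-180,-90,180,90'}
    # _constructSolrQuery(0, 10, parameters, {}) returns:
    # q=*:*&wt=json&start=0&rows=10&fq=sst:[*%20TO%20*]+AND+loc:[-90,-180%20TO%2090,180]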

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/icoads/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/icoads/json/__init__.py b/src/main/python/plugins/icoads/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/icoads/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/icoads/json/plugin.conf b/src/main/python/plugins/icoads/json/plugin.conf
new file mode 100644
index 0000000..04438e6
--- /dev/null
+++ b/src/main/python/plugins/icoads/json/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/icoads
+entriesPerPage=10
+maxEntriesPerPage=100000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,minDepth,maxDepth,variable,stats,platform,facet
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/icoads/json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/icoads/json/template.json b/src/main/python/plugins/icoads/json/template.json
new file mode 100755
index 0000000..6133e52
--- /dev/null
+++ b/src/main/python/plugins/icoads/json/template.json
@@ -0,0 +1,64 @@
+{
+{% if last %}"last": "{{ last }}",{% endif %}
+{% if prev %}"prev": "{{ prev }}",{% endif %}
+{% if next %}"next": "{{ next }}",{% endif %}
+{% if first %}"first": "{{ first }}",{% endif %}
+"results":[
+{% for doc in docs %}
+{
+"id": "{{ doc['id'] }}",
+"time": "{{ doc['time'] }}",
+"point": "{{ doc['loc'] }}",
+"sea_water_temperature": {{ doc['sst'] | jsonify }},
+"sea_water_temperature_depth": {{ doc['sst_depth'] | jsonify }},
+"sea_water_temperature_quality": {{ doc['sst_qc_flag'] | jsonify }},
+"wind_speed": {{ doc['wind_speed'] | jsonify }},
+"eastward_wind": {{ doc['wind_u'] | jsonify }},
+"northward_wind": {{ doc['wind_v'] | jsonify }},
+"wind_depth": {{ doc['wind_depth'] | jsonify }},
+"wind_quality": {{ doc['wind_qc_flag'] | jsonify }},
+"sea_water_salinity": {{ doc['sss'] | jsonify }},
+"sea_water_salinity_depth": {{ doc['sss_depth'] | jsonify }},
+"sea_water_salinity_quality": {{ doc['sss_qc_flag'] | jsonify }},
+"mission": {{ doc['mission'] | jsonify }},
+"platform": {{ doc['platform'] | jsonify }},
+"device": {{ doc['device'] | jsonify }},
+"metadata": {{ doc['meta'] }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+,"totalResults":{{ numFound }}
+,"startIndex":{{ startIndex  }}
+,"itemsPerPage":{{ itemsPerPage }}
+{% if stats %}
+,
+"stats_fields": {{ stats['stats_fields'] | jsonify }}
+{% endif %}
+{% if facets %}
+,
+"facets":[
+{% for key, facet in facets['facet_fields'].iteritems() %}
+{
+"field": "{{ key }}",
+"values":[
+{% for i in range(0, facet|count, 2) %}
+{
+"count":{{facet[i+1] }},
+"value": "{{ facet[i] }}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/__init__.py b/src/main/python/plugins/nexus/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/climatology/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/climatology/Writer.py b/src/main/python/plugins/nexus/climatology/Writer.py
new file mode 100644
index 0000000..0d78465
--- /dev/null
+++ b/src/main/python/plugins/nexus/climatology/Writer.py
@@ -0,0 +1,8 @@
+import logging
+import urllib
+
+from edge.writer.genericproxywriter import GenericProxyWriter
+
+class Writer(GenericProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/climatology/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/climatology/__init__.py b/src/main/python/plugins/nexus/climatology/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/climatology/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/climatology/plugin.conf b/src/main/python/plugins/nexus/climatology/plugin.conf
new file mode 100644
index 0000000..8372843
--- /dev/null
+++ b/src/main/python/plugins/nexus/climatology/plugin.conf
@@ -0,0 +1,2 @@
+[proxy]
+url=http://127.0.0.1:8080/climatology

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/solr/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/solr/Writer.py b/src/main/python/plugins/nexus/solr/Writer.py
new file mode 100644
index 0000000..0d78465
--- /dev/null
+++ b/src/main/python/plugins/nexus/solr/Writer.py
@@ -0,0 +1,8 @@
+import logging
+import urllib
+
+from edge.writer.genericproxywriter import GenericProxyWriter
+
+class Writer(GenericProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/solr/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/solr/__init__.py b/src/main/python/plugins/nexus/solr/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/solr/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/solr/plugin.conf b/src/main/python/plugins/nexus/solr/plugin.conf
new file mode 100644
index 0000000..e97553c
--- /dev/null
+++ b/src/main/python/plugins/nexus/solr/plugin.conf
@@ -0,0 +1,2 @@
+[proxy]
+url=http://127.0.0.1:8080/solr

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/subsetter/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/subsetter/Writer.py b/src/main/python/plugins/nexus/subsetter/Writer.py
new file mode 100644
index 0000000..0d78465
--- /dev/null
+++ b/src/main/python/plugins/nexus/subsetter/Writer.py
@@ -0,0 +1,8 @@
+import logging
+import urllib
+
+from edge.writer.genericproxywriter import GenericProxyWriter
+
+class Writer(GenericProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/subsetter/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/subsetter/__init__.py b/src/main/python/plugins/nexus/subsetter/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/nexus/subsetter/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/nexus/subsetter/plugin.conf b/src/main/python/plugins/nexus/subsetter/plugin.conf
new file mode 100644
index 0000000..0222e15
--- /dev/null
+++ b/src/main/python/plugins/nexus/subsetter/plugin.conf
@@ -0,0 +1,2 @@
+[proxy]
+url=http://127.0.0.1:8082/subsetter
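
The three nexus plugins (climatology, solr, subsetter) are identical thin wrappers: each inherits GenericProxyWriter and forwards the incoming request to its configured [proxy] url. Adding another proxied backend needs only the same boilerplate plus a two-line conf; for example (the endpoint name is hypothetical):

    # Hypothetical plugins/nexus/matchup/Writer.py, mirroring the files above.
    from edge.writer.genericproxywriter import GenericProxyWriter

    class Writer(GenericProxyWriter):
        def __init__(self, configFilePath):
            super(Writer, self).__init__(configFilePath)

together with a plugin.conf whose [proxy] url points at the matchup service.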

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/__init__.py b/src/main/python/plugins/oceanxtremes/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/datacasting/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/datacasting/Writer.py b/src/main/python/plugins/oceanxtremes/datacasting/Writer.py
new file mode 100644
index 0000000..e3a83b6
--- /dev/null
+++ b/src/main/python/plugins/oceanxtremes/datacasting/Writer.py
@@ -0,0 +1,62 @@
+import logging
+import os
+import os.path
+import urllib
+import json
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+        response.variables['serviceUrl'] = self._configuration.get('service', 'url')
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        sortKeys = json.loads(self._configuration.get('solr', 'sortKeys'))
+
+        queries = []
+        filterQueries = []
+        sort = None
+        sortDir = 'asc'
+        start = '*'
+        end = '*'
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'shortName':
+                    queries.append("primary_dataset_short_name:" + urllib.quote(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+        
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&facet.mincount=1&'
+            query += '&'.join(['facet.field=' + facet for facet in self.facetDefs.values()])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+            if sort is not None:
+                query += '&sort=' + urllib.quote(sort + ' ' + sortDir + ",InternalVersion desc")
+            else:
+                query += '&sort=' + urllib.quote("submit_date desc")
+
+        logging.debug('solr query: '+query)
+
+        return query
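
For a feed restricted to one source dataset, the query builder above yields the following (facet mode off, the 2000-row page size from plugin.conf, and an illustrative short name):

    # parameters = {'shortName': 'MUR-JPL-L4-GLOB-v4.1'}
    # _constructSolrQuery(0, 2000, parameters, {}) returns:
    # q=primary_dataset_short_name:MUR-JPL-L4-GLOB-v4.1&version=2.2&indent=on&wt=json&start=0&rows=2000&sort=submit_date%20desc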

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/datacasting/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/datacasting/__init__.py b/src/main/python/plugins/oceanxtremes/datacasting/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/datacasting/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/datacasting/plugin.conf b/src/main/python/plugins/oceanxtremes/datacasting/plugin.conf
new file mode 100644
index 0000000..42ab19d
--- /dev/null
+++ b/src/main/python/plugins/oceanxtremes/datacasting/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/anomaly
+entriesPerPage=2000
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,shortName
+facets={}
+sortKeys={}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/datacasting/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/datacasting/template.xml b/src/main/python/plugins/oceanxtremes/datacasting/template.xml
new file mode 100755
index 0000000..fd7a2a9
--- /dev/null
+++ b/src/main/python/plugins/oceanxtremes/datacasting/template.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<rss xmlns:datacasting="http://datacasting.jpl.nasa.gov/datacasting"
+    xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml" version="2.0">
+    <channel>
+       <title>{{ parameters['shortName'] }}</title>
+       <link>{{ serviceUrl }}/anomaly?shortName={{ parameters['shortName'] }}</link>
+       <description>Anomalies flagged using dataset {{ parameters['shortName'] }}</description>
+       <datacasting:channelUID>{{ parameters['shortName'] }}</datacasting:channelUID>
+       <datacasting:dataSource>{{ parameters['shortName'] }}</datacasting:dataSource>
+       <datacasting:customEltDef displayName="Keyword" type="string" name="keyword"/>
+       <datacasting:customEltDef displayName="Lower Threshold" type="float" name="lowerThreshold"/>
+       <datacasting:customEltDef displayName="Upper Threshold" type="float" name="upperThreshold"/>
+       <datacasting:customEltDef displayName="Dataset Short Name" type="string" name="datasetShortName"/>
+       <datacasting:customEltDef displayName="Climatology Short Name" type="string" name="climatologyShortName"/>
+       <pubDate>{{ updated | convertISOTime('%a, %d %b %Y %H:%M:%S GMT') }}</pubDate>
+{% for doc in docs %}
+       <item>
+           <title>{{ doc['description'] }}</title>
+           <description>{{ doc['description'] }}</description>
+           <link>{{ doc['anomaly_url'] }}</link>
+           <datacasting:acquisitionStartDate>{{ doc['anomaly_begin_date'] | convertISOTime('%a, %d %b %Y %H:%M:%S GMT') }}</datacasting:acquisitionStartDate>
+           <datacasting:acquisitionEndDate>{{ doc['anomaly_end_date'] | convertISOTime('%a, %d %b %Y %H:%M:%S GMT') }}</datacasting:acquisitionEndDate>
+           <georss:where>
+               <gml:Envelope>
+{% set bbox = doc['bbox'][9:-1].split(',') %}
+                   <gml:lowerCorner>{{ bbox[3] }} {{ bbox[0] }}</gml:lowerCorner>
+                   <gml:upperCorner>{{ bbox[2] }} {{ bbox[1] }}</gml:upperCorner>
+               </gml:Envelope>
+           </georss:where>
+           <enclosure url="{{ doc['anomaly_url'] }}" length="0" type="application/x-gzip"/>
+           <datacasting:customElement name="keyword" value="{{ ','.join(doc['keyword']) }}"/>
+           <datacasting:customElement name="datasetShortName" value="{{ doc['primary_dataset_short_name'] }}"/>
+           <datacasting:customElement name="climatologyShortName" value="{{ doc['climatology_dataset_short_name'] }}"/>
+           <datacasting:customElement name="lowerThreshold" value="{{ doc['below_threshold'] }}"/>
+           <datacasting:customElement name="upperThreshold" value="{{ doc['above_threshold'] }}"/>
+           <datacasting:preview>{{ doc['anomaly_url'] }}</datacasting:preview>
+           <guid isPermaLink="false">{{ doc['id'] }}</guid>
+           <pubDate>{{ doc['submit_date'] | convertISOTime('%a, %d %b %Y %H:%M:%S GMT') }}</pubDate>
+           <source url="{{ doc['algorithm_url'] }}">{{ doc['primary_dataset_short_name'] }}</source>
+       </item>
+{% endfor %}
+    </channel>
+</rss>
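
The doc['bbox'][9:-1] slice in the template assumes the stored value is a Solr envelope literal of the form ENVELOPE(minX, maxX, maxY, minY); stripping the nine-character prefix and the closing parenthesis leaves the four numbers, which matches the corner ordering used above (an assumption; the indexing side is not part of this excerpt):

    # Assuming the Solr envelope form:
    bbox = 'ENVELOPE(-180.0,180.0,90.0,-90.0)'[9:-1].split(',')
    # bbox == ['-180.0', '180.0', '90.0', '-90.0']  i.e. minX, maxX, maxY, minY
    # lowerCorner: bbox[3] bbox[0] -> '-90.0 -180.0'  (south-west, lat lon)
    # upperCorner: bbox[2] bbox[1] -> '90.0 180.0'    (north-east, lat lon)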

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/post/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/post/Writer.py b/src/main/python/plugins/oceanxtremes/post/Writer.py
new file mode 100644
index 0000000..fd5fad8
--- /dev/null
+++ b/src/main/python/plugins/oceanxtremes/post/Writer.py
@@ -0,0 +1,44 @@
+import logging
+import urllib2
+import urlparse
+import uuid
+import json
+from datetime import datetime
+
+import requestresponder
+from edge.httputility import HttpUtility
+
+class Writer(requestresponder.RequestResponder):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+    def options(self, requestHandler):
+        super(Writer, self).options(requestHandler)
+        self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+        self.requestHandler.set_header('Access-Control-Allow-Methods', 'POST, GET, OPTIONS')
+        self.requestHandler.set_header('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With')
+        self.requestHandler.set_header('Allow', 'OPTIONS, GET, HEAD, POST')
+        self.requestHandler.set_header('Accept', 'application/json')
+        self.requestHandler.set_status(200)
+        self.requestHandler.finish()
+
+    def post(self, requestHandler):
+        super(Writer, self).post(requestHandler)
+        data = json.loads(requestHandler.request.body)
+
+        data["id"] = str(uuid.uuid4())
+        data["submit_date"] = datetime.utcnow().isoformat() + "Z"
+
+        httpUtility = HttpUtility()
+        solrUrl = self._configuration.get('solr', 'url') + "/update/json/docs?commit=true"
+        httpUtility.getResponse(solrUrl, self.onResponse, body=json.dumps(data), headers={'Content-Type': 'application/json'})
+
+    def onResponse(self, response):
+        self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+        if response.error:
+            self.requestHandler.set_status(404)
+            self.requestHandler.write(str(response.error))
+            self.requestHandler.finish()
+        else:
+            self.requestHandler.write(response.body)
+            self.requestHandler.finish()
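
The POST handler assigns each submitted record a UUID and a UTC submit_date before forwarding it to Solr's JSON document handler. A client-side sketch follows; the request path is an assumption, since routing is configured outside this file:

    # Hypothetical submission of an anomaly record (path assumed).
    import json
    import urllib2

    record = {'description': 'example anomaly'}  # illustrative payload
    request = urllib2.Request('http://localhost:8890/oceanxtremes/post',
                              json.dumps(record),
                              {'Content-Type': 'application/json'})
    print urllib2.urlopen(request).read()  # id and submit_date are added server-side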

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/post/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/post/__init__.py b/src/main/python/plugins/oceanxtremes/post/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oceanxtremes/post/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oceanxtremes/post/plugin.conf b/src/main/python/plugins/oceanxtremes/post/plugin.conf
new file mode 100644
index 0000000..d5dae25
--- /dev/null
+++ b/src/main/python/plugins/oceanxtremes/post/plugin.conf
@@ -0,0 +1,3 @@
+[solr]
+url=http://localhost:8983/solr/anomaly
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/__init__.py b/src/main/python/plugins/oiip/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/json/Writer.py b/src/main/python/plugins/oiip/json/Writer.py
new file mode 100644
index 0000000..58d84e6
--- /dev/null
+++ b/src/main/python/plugins/oiip/json/Writer.py
@@ -0,0 +1,46 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'necessity':
+                    queries.append("necessity:" + urllib.quote(value))
+                elif key == 'source':
+                    queries.append("source_ss:\"" + urllib.quote(value) + "\"")
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        query += '&sort=category+asc'
+
+        logging.debug('solr query: '+query)
+
+        return query
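
As a worked example, a search for required attributes produces:

    # parameters = {'necessity': 'required'}
    # _constructSolrQuery(0, 2000, parameters, {}) returns:
    # q=necessity:required&version=2.2&indent=on&wt=json&start=0&rows=2000&sort=category+asc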

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/json/__init__.py b/src/main/python/plugins/oiip/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/json/plugin.conf b/src/main/python/plugins/oiip/json/plugin.conf
new file mode 100644
index 0000000..882e3f1
--- /dev/null
+++ b/src/main/python/plugins/oiip/json/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/attribute
+entriesPerPage=2000
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,necessity,source
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/json/template.json b/src/main/python/plugins/oiip/json/template.json
new file mode 100755
index 0000000..a5f3819
--- /dev/null
+++ b/src/main/python/plugins/oiip/json/template.json
@@ -0,0 +1,22 @@
+{
+   "attributes":
+[
+{% for doc in docs %}
+{
+"name": {{ doc['name'] | jsonify }},
+"category": {{ doc['category'] | jsonify }},
+"type": {{ doc['type'] | jsonify }},
+"description": {{ doc['description'] | jsonify }},
+"necessity": {{ doc['necessity'] | jsonify }},
+"source":
+{
+"name": {{ doc["source_name"] | jsonify }},
+"version": {{ doc["source_version"] | jsonify }}
+}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/xml/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/xml/Writer.py b/src/main/python/plugins/oiip/xml/Writer.py
new file mode 100644
index 0000000..c5c9c6c
--- /dev/null
+++ b/src/main/python/plugins/oiip/xml/Writer.py
@@ -0,0 +1,44 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'necessity':
+                    queries.append("necessity:" + urllib.quote(value))
+                elif key == 'source':
+                    queries.append("source_ss:\"" + urllib.quote(value) + "\"")
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        query += '&sort=category+asc'
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/xml/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/xml/__init__.py b/src/main/python/plugins/oiip/xml/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/xml/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/xml/plugin.conf b/src/main/python/plugins/oiip/xml/plugin.conf
new file mode 100644
index 0000000..71df193
--- /dev/null
+++ b/src/main/python/plugins/oiip/xml/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/attribute
+entriesPerPage=2000
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,necessity,source
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/oiip/xml/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/oiip/xml/template.xml b/src/main/python/plugins/oiip/xml/template.xml
new file mode 100755
index 0000000..9fd1ef7
--- /dev/null
+++ b/src/main/python/plugins/oiip/xml/template.xml
@@ -0,0 +1,14 @@
+<?xml version='1.0' encoding='utf-8'?>
+<globalMetadataItems>
+{% for doc in docs %}
+<globalMetadataItem>
+<displayName>{{ ' '.join(doc['name'].split('_')).title() }}</displayName>
+<tagName>{{ doc['name'] }}</tagName>
+{% if doc['necessity'] == 'required' %}
+<isRequired>true</isRequired>
+{% endif %}
+<description>{{ doc['description'] }}</description>
+<metadataType>{{ doc['category'] }}</metadataType>
+</globalMetadataItem>
+{% endfor %}
+</globalMetadataItems>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/passthrough/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/passthrough/__init__.py b/src/main/python/plugins/passthrough/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/passthrough/pt/PassThroughWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/passthrough/pt/PassThroughWriter.py b/src/main/python/plugins/passthrough/pt/PassThroughWriter.py
new file mode 100644
index 0000000..f4785b7
--- /dev/null
+++ b/src/main/python/plugins/passthrough/pt/PassThroughWriter.py
@@ -0,0 +1,52 @@
+import logging
+import urlparse
+
+import requestresponder
+from edge.httputility import HttpUtility
+
+class PassThroughWriter(requestresponder.RequestResponder):
+    def __init__(self, configFilePath):
+        super(PassThroughWriter, self).__init__(configFilePath)
+
+    def get(self, requestHandler):
+        super(PassThroughWriter, self).get(requestHandler)
+        try:
+            url = requestHandler.get_argument('url')
+        except Exception:
+            raise Exception('Missing url.')
+
+        if not self._isAllowed(url):
+            raise Exception('Not allowed to connect to that url: '+url)
+
+        # Fetch the remote resource asynchronously; onResponse relays it.
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url, self.onResponse)
+
+    def onResponse(self, response):
+        if response.error:
+            self.requestHandler.set_status(404)
+            self.requestHandler.write(str(response.error))
+            self.requestHandler.finish()
+        else:
+            # Relay the remote headers and body to the caller.
+            for name, value in response.headers.iteritems():
+                logging.debug('header: '+name+':'+value)
+                self.requestHandler.set_header(name, value)
+            self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+            self.requestHandler.write(response.body)
+            self.requestHandler.finish()
+
+    def _isAllowed(self, url):
+        # The allow list is a comma-separated set of host[:port] values.
+        allows = [element.strip() for element in self._configuration.get('service', 'allow').split(',')]
+
+        netlocation = urlparse.urlparse(url).netloc
+
+        # Match the full host:port; a URL with an explicit port 80 also
+        # matches a bare hostname entry.
+        targets = [netlocation]
+        netlocations = netlocation.split(':')
+        if len(netlocations) == 2 and netlocations[1] == '80':
+            targets.append(netlocations[0])
+
+        return any(target in allows for target in targets)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/passthrough/pt/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/passthrough/pt/__init__.py b/src/main/python/plugins/passthrough/pt/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/passthrough/pt/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/passthrough/pt/plugin.conf b/src/main/python/plugins/passthrough/pt/plugin.conf
new file mode 100644
index 0000000..181a0f2
--- /dev/null
+++ b/src/main/python/plugins/passthrough/pt/plugin.conf
@@ -0,0 +1,2 @@
+[service]
+allow=localhost:9200, msas-es.jpl.nasa.gov
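
_isAllowed compares only the URL's network location against this comma-separated allow list, and a URL with an explicit :80 port is also tried as a bare hostname. With the list above:

    # http://localhost:9200/_search   -> netloc 'localhost:9200'        -> allowed
    # http://msas-es.jpl.nasa.gov/x   -> netloc 'msas-es.jpl.nasa.gov'  -> allowed
    # http://example.com:9200/x       -> netloc 'example.com:9200'      -> rejected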

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/__init__.py b/src/main/python/plugins/product/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/atom/AtomWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/atom/AtomWriter.py b/src/main/python/plugins/product/atom/AtomWriter.py
new file mode 100644
index 0000000..babef98
--- /dev/null
+++ b/src/main/python/plugins/product/atom/AtomWriter.py
@@ -0,0 +1,27 @@
+import logging
+import datetime
+
+from edge.elasticsearch.opensearch.granuleatomresponse import GranuleAtomResponse
+from edge.elasticsearch.granulewriter import GranuleWriter
+
+class AtomWriter(GranuleWriter):
+    
+    def __init__(self, configFilePath):
+        super(AtomWriter, self).__init__(configFilePath, [['identifier']])
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = GranuleAtomResponse(
+            self._configuration.get('service', 'linkToGranule'),
+            self._configuration.get('service', 'host'),
+            self._configuration.get('service', 'url')
+        )
+
+        response.title = 'GIBS Product Search Results'
+        #response.description = 'Search result for "'+searchText+'"'
+        response.link = searchUrl
+        response.authors.append('GIBS Product Search Service')
+        response.updated = datetime.datetime.utcnow().isoformat()+'Z'
+        response.id = 'tag:'+self._configuration.get('service', 'host')+','+datetime.datetime.utcnow().date().isoformat()
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty) 

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/atom/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/atom/Writer.py b/src/main/python/plugins/product/atom/Writer.py
new file mode 100644
index 0000000..f4e085f
--- /dev/null
+++ b/src/main/python/plugins/product/atom/Writer.py
@@ -0,0 +1,71 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrcmrtemplateresponse import SolrCmrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+
+        response = SolrCmrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+        response.variables['serviceUrl'] = self._configuration.get('service', 'url')
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+
+        for key, value in parameters.iteritems():
+            if key == 'keyword':
+                logging.debug('product/atom/Writer.py: keyword='+value)
+                queries.append(urllib.quote(value))
+            elif key == 'product_pt_id':
+                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'startTime':
+                # overlap semantics: a granule matches if it ends after startTime...
+                queries.append('EndingDateTime:['+value+'%20TO%20*]')
+            elif key == 'endTime':
+                # ...and begins before endTime
+                queries.append('BeginningDateTime:[*%20TO%20'+value+']')
+            elif key == 'bbox':
+                # bbox arrives as west,south,east,north; Solr expects lat,lon pairs
+                coordinates = value.split(",")
+                filterQueries.append('Spatial-Geometry:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+            elif key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+
+        for key, value in facets.iteritems():
+            if type(value) is list:
+                if (len(value) == 1):
+                    filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value[0]))
+                else:
+                    filterQueries.append(key + ':(' + '+OR+'.join([ self._urlEncodeSolrQueryValue(x) for x in value ]) + ")")
+            else:    
+                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+        
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&facet.mincount=1&'
+            query += '&'.join(['facet.field=' + facet for facet in self._configuration.get('solr', 'facets').split(',')])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+            query += '&sort=' + self._configuration.get('solr', 'sort')
+
+        logging.debug('solr query: '+query)
+
+        return query
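
As a worked illustration (hypothetical values), a request with keyword=MODIS
and startTime=2013-01-01T00:00:00Z at startIndex 0 with 10 entries per page
would produce a query string along the lines of (clause order may vary with
dict iteration order):

    q=MODIS+AND+EndingDateTime:[2013-01-01T00:00:00Z%20TO%20*]&version=2.2&indent=on&wt=json&start=0&rows=10&sort=product_name+asc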

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/atom/__init__.py b/src/main/python/plugins/product/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/atom/plugin.conf b/src/main/python/plugins/product/atom/plugin.conf
new file mode 100644
index 0000000..4644a51
--- /dev/null
+++ b/src/main/python/plugins/product/atom/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/product
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,product_pt_id,startTime,endTime,bbox,id
+facets={}
+sort=product_name+asc
+
+[service]
+url=http://localhost:8890
+template=template.xml
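
Assuming the service is running at the url configured above, a search against
this plugin could then be issued as, e.g. (mirroring the enclosure links
emitted by template.xml):

    http://localhost:8890/ws/search/product?keyword=MODIS&pretty=true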

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/atom/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/atom/template.xml b/src/main/python/plugins/product/atom/template.xml
new file mode 100644
index 0000000..f92eeca
--- /dev/null
+++ b/src/main/python/plugins/product/atom/template.xml
@@ -0,0 +1,85 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed esipdiscovery:version="1.2"
+      xmlns="http://www.w3.org/2005/Atom"
+      xmlns:dc="http://purl.org/dc/terms/"
+      xmlns:echo="http://www.echo.nasa.gov/esip"
+      xmlns:gibs="http://gibs.jpl.nasa.gov/esip" 
+      xmlns:esipdiscovery="http://commons.esipfed.org/ns/discovery/1.2/"
+      xmlns:georss="http://www.georss.org/georss/10"
+      xmlns:gml="http://www.opengis.net/gml"
+      xmlns:os="http://a9.com/-/spec/opensearch/1.1/"
+      xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
+   <updated>{{ updated }}</updated>
+   <id>https://api.echo.nasa.gov:443/opensearch/datasets.atom</id>
+   <author>
+      <name>GIBS</name>
+      <email>support@echo.nasa.gov</email>
+   </author>
+   <title type="text">GIBS Product Metadata</title>
+   <os:totalResults>{{ numFound }}</os:totalResults>
+   <os:itemsPerPage>{{ itemsPerPage }}</os:itemsPerPage>
+   <os:startIndex>{{ startIndex }}</os:startIndex>
+   <os:Query role="request"
+             xmlns:echo="http://www.echo.nasa.gov/esip"
+             xmlns:geo="http://a9.com/-/opensearch/extensions/geo/1.0/"
+             xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/" />
+   <subtitle type="text">Search parameters: None</subtitle>
+   <link href="{{ myself }}" hreflang="en-US" rel="self" type="application/atom+xml" />
+   {% if last %}<link href="{{ last }}" hreflang="en-US" rel="last" type="application/atom+xml" />{% endif %}
+   {% if prev %}<link href="{{ prev }}" hreflang="en-US" rel="previous" type="application/atom+xml" />{% endif %}
+   {% if next %}<link href="{{ next }}" hreflang="en-US" rel="next" type="application/atom+xml" />{% endif %}
+   {% if first %}<link href="{{ first }}" hreflang="en-US" rel="first" type="application/atom+xml" />{% endif %}
+   <link href="https://wiki.earthdata.nasa.gov/display/echo/Open+Search+API+release+information"
+         hreflang="en-US" rel="describedBy" title="Release Notes" type="text/html" />
+   {% for doc in docs %}
+      <entry>
+         <id>{{ link }}?id={{ doc['product_id'] }}</id>
+         <dc:identifier>{{ doc['product_name'] }}</dc:identifier>
+         <author>
+            <name>GIBS</name>
+            <email>support@echo.nasa.gov</email>
+         </author>
+         <title type="text">{{ doc['product_name'] }}</title>
+         <summary type="text">{{ 'summary: ' + doc['product_name'] }}</summary>
+         <updated>{{ doc['product_meta_history_last_revision_date_string'] }}</updated>
+         <echo:shortName>{{ doc['product_name'] }}</echo:shortName>
+         <echo:longName>{{ doc['product_root_path'] + '/' + doc['product_name'] }}</echo:longName>
+         <echo:versionId>{{ doc['product_version'] }}</echo:versionId>
+         {% for i in range(doc['product_granule_dataset_id_list']|count)  %}
+            <gibs:cmr>
+               {% if doc['cmr_dataset_id'] %}<echo:datasetId>{{ doc['cmr_dataset_id'][i] }}</echo:datasetId>
+               {% else %}<echo:datasetId />{% endif %}
+               {% if doc['cmr_title'] %}<echo:description>{{ doc['cmr_title'][i] + ' : ' + doc['cmr_time_start'][i] + ' to ' +  doc['cmr_time_end'][i]}}</echo:description>
+               {% else %}<echo:description />{% endif %}
+               {% if doc['cmr_updated'] %}<echo:lastUpdate>{{ doc['cmr_updated'][i] }}</echo:lastUpdate>
+               {% else %}<echo:lastUpdate />{% endif %}
+               {% if doc['cmr_data_center'] %}<echo:dataCenter>{{ doc['cmr_data_center'][i] }}</echo:dataCenter>
+               {% else %}<echo:dataCenter />{% endif %}
+               {% if doc['cmr_original_format'] %}<echo:originalFormat>{{ doc['cmr_original_format'][i] }}</echo:originalFormat>
+               {% else %}<echo:originalFormat />{% endif %}
+               {% if doc['cmr_coordinate_system'] %}<echo:coordinateSystem>{{ doc['cmr_coordinate_system'][i] }}</echo:coordinateSystem>
+               {% else %}<echo:coordinateSystem />{% endif %}
+               {% if doc['cmr_online_access_flag'] %}<echo:onlineAccessFlag>{{ doc['cmr_online_access_flag'][i] }}</echo:onlineAccessFlag>
+               {% else %}<echo:onlineAccessFlag />{% endif %}
+               {% if doc['cmr_browse_flag'] %}<echo:browseFlag>{{ doc['cmr_browse_flag'][i] }}</echo:browseFlag>
+               {% else %}<echo:browseFlag />{% endif %}
+               {% if doc['cmr_boxes'] %}
+                  {% for box in doc['cmr_boxes'][i] %}
+                     <georss:box>{{ box }}</georss:box>
+                  {% endfor %}
+               {% else %}<georss:box />{% endif %}
+               {% if doc['cmr_links'] %}
+                  {% for cmr_link in doc['cmr_links'][i] %}
+                     <link href="{{ cmr_link['href'] }}" hreflang="en-US" rel="enclosure" title="{{ cmr_link['title'] }}" />
+                  {% endfor %}
+               {% endif %} 
+             </gibs:cmr>
+         {% endfor %}
+         <link href="{{ serviceUrl }}/ws/search/product?id={{ doc['id'] }}&amp;pretty=true"
+               hreflang="en-US" rel="enclosure" title="Product Search" type="application/xml" />
+         <link href="{{ serviceUrl }}/ws/metadata/product?id={{ doc['id'] }}&amp;pretty=true"
+               hreflang="en-US" rel="alternate" title="Product metadata" type="application/xml" />
+         <dc:date>{{ doc['product_start_time_string'] }}/{{ doc['product_end_time_string'] }}</dc:date>
+      </entry>
+   {% endfor %}
+</feed>
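
The file above is a plain Jinja2 template; a minimal standalone rendering
sketch (hypothetical values, using the Jinja2 dependency this program set
already requires):

    from jinja2 import Template

    with open('template.xml') as f:
        template = Template(f.read())

    # docs would normally carry one dict of Solr fields per granule entry.
    print template.render(updated='2017-10-27T00:00:00Z', numFound=0,
                          itemsPerPage=10, startIndex=0,
                          myself='http://localhost:8890/ws/search/product',
                          docs=[])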

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/iso/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/iso/Writer.py b/src/main/python/plugins/product/iso/Writer.py
new file mode 100644
index 0000000..a90552e
--- /dev/null
+++ b/src/main/python/plugins/product/iso/Writer.py
@@ -0,0 +1,38 @@
+import logging
+import os
+import os.path
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+
+        logging.debug('iso product: searchParams = [%s]' % searchParams)
+
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        for key, value in parameters.iteritems():
+            if key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'title':
+                queries.append('product_type_title:' + self._urlEncodeSolrQueryValue(value))
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query
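
For example (hypothetical identifier, and assuming _urlEncodeSolrQueryValue
passes a simple id through unchanged), a lookup by id with the configured
entriesPerPage of 1 would yield:

    q=id:PODAAC-EXAMPLE&version=2.2&indent=on&wt=json&rows=1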

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/iso/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/iso/__init__.py b/src/main/python/plugins/product/iso/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/iso/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/iso/plugin.conf b/src/main/python/plugins/product/iso/plugin.conf
new file mode 100644
index 0000000..d40c358
--- /dev/null
+++ b/src/main/python/plugins/product/iso/plugin.conf
@@ -0,0 +1,8 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/product
+entriesPerPage=1
+parameters=id,title
+
+[service]
+url=http://localhost:8890
+template=template.xml


http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/iso/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/iso/template.xml b/src/main/python/plugins/product_type/iso/template.xml
new file mode 100644
index 0000000..bb0d415
--- /dev/null
+++ b/src/main/python/plugins/product_type/iso/template.xml
@@ -0,0 +1,914 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<gmd:DS_Series xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+	xsi:schemaLocation="http://www.isotc211.org/2005/gmd http://www.ngdc.noaa.gov/metadata/published/xsd/schema.xsd"
+	xmlns:gmd="http://www.isotc211.org/2005/gmd" xmlns:gco="http://www.isotc211.org/2005/gco"
+	xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:gml="http://www.opengis.net/gml"
+	xmlns:gsr="http://www.isotc211.org/2005/gsr" xmlns:gss="http://www.isotc211.org/2005/gss"
+	xmlns:gts="http://www.isotc211.org/2005/gts" xmlns:gmx="http://www.isotc211.org/2005/gmx"
+	xmlns:gmi="http://www.isotc211.org/2005/gmi">
+	<gmd:composedOf gco:nilReason="inapplicable" />
+	<gmd:seriesMetadata>
+	    <gmi:MI_Metadata id="{{ docs[0]['id'] }}">
+		<gmd:fileIdentifier>
+		    <gco:CharacterString>{{ docs[0]['product_type_identifier'] }}</gco:CharacterString>
+		</gmd:fileIdentifier>                
+		<gmd:language>
+		    <gco:CharacterString>eng</gco:CharacterString>
+		</gmd:language>
+		<gmd:characterSet>
+		    <gmd:MD_CharacterSetCode
+			codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+			codeListValue="UTF8">UTF8
+                    </gmd:MD_CharacterSetCode>
+		</gmd:characterSet>
+		<gmd:hierarchyLevel>
+		    <gmd:MD_ScopeCode
+			codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode"
+			codeListValue="series">series
+                    </gmd:MD_ScopeCode>
+		</gmd:hierarchyLevel>
+                {% if docs[0].__contains__('product_type_provider_contact_role_list') and
+                    docs[0]['product_type_provider_contact_role_list'].__contains__('POINT_OF_CONTACT') %}
+                    <gmd:contact>
+                        {% for i in range(docs[0]['product_type_provider_contact_role_list'].__len__()) %}
+                            {% if docs[0]['product_type_provider_contact_role_list'][i] == 'POINT_OF_CONTACT' %}
+                                <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                    <gmd:individualName>                                            
+                                        <gco:CharacterString>
+                                            {% if docs[0]['product_type_provider_contact_first_name_list'][i] != 'null' %} 
+                                                {{ docs[0]['product_type_provider_contact_first_name_list'][i] }}
+                                            {% endif %}
+                                            {% if docs[0]['product_type_provider_contact_middle_name_list'][i] != 'null' %} 
+                                                {{ docs[0]['product_type_provider_contact_middle_name_list'][i] }}
+                                            {% endif %} 
+                                            {{ docs[0]['product_type_provider_contact_last_name_list'][i] }}
+                                        </gco:CharacterString>
+                                    </gmd:individualName>
+                                    <gmd:organisationName>
+                                        <gco:CharacterString>{{ docs[0]['product_type_provider_short_name'] }} &gt; {{ docs[0]['product_type_provider_long_name'] }}</gco:CharacterString>
+                                    </gmd:organisationName>
+                                    <gmd:positionName>
+                                        <gco:CharacterString>{{ docs[0]['product_type_provider_contact_role_list'][i] }}</gco:CharacterString>
+                                    </gmd:positionName>             
+                                    <gmd:contactInfo>
+                                        <gmd:CI_Contact>
+                                            <gmd:phone>
+                                                <gmd:CI_Telephone>
+                                                    {% if docs[0]['product_type_provider_contact_phone_list'][i] != 'null' %}
+                                                        <gmd:voice>
+                                                            <gco:CharacterString>{{ docs[0]['product_type_provider_contact_phone_list'][i] }}</gco:CharacterString>
+                                                        </gmd:voice>
+                                                    {% else %}
+                                                        <gmd:voice gco:nilReason="missing" />
+                                                    {% endif %}
+                                                    {% if docs[0]['product_type_provider_contact_fax_list'][i] != 'null' %}
+                                                        <gmd:facsimile>
+                                                            <gco:CharacterString>{{ docs[0]['product_type_provider_contact_fax_list'][i] }}</gco:CharacterString>
+                                                        </gmd:facsimile>
+                                                    {% else %}
+                                                        <gmd:facsimile gco:nilReason="missing" />
+                                                    {% endif %}
+                                                </gmd:CI_Telephone>
+                                            </gmd:phone>
+                                            <gmd:address>
+                                                <gmd:CI_Address>
+                                                    {% if docs[0]['product_type_provider_contact_address_list'][i] != 'null' %}
+                                                        <gmd:electronicMailAddress>
+                                                            <gco:CharacterString>{{ docs[0]['product_type_provider_contact_address_list'][i] }}</gco:CharacterString>
+                                                        </gmd:electronicMailAddress>
+                                                    {% else %}
+                                                        <gmd:electronicMailAddress gco:nilReason="missing" />
+                                                    {% endif %}
+                                                </gmd:CI_Address>
+                                            </gmd:address>
+                                            <gmd:contactInstructions>
+                                                <gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+                                            </gmd:contactInstructions>
+                                        </gmd:CI_Contact>
+                                    </gmd:contactInfo>
+                                    <gmd:role>
+                                        <gmd:CI_RoleCode
+                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                            codeListValue="pointOfContact">pointOfContact
+                                        </gmd:CI_RoleCode>
+                                    </gmd:role>
+                                </gmd:CI_ResponsibleParty>
+                            {% endif %}
+                        {% endfor %}
+                    </gmd:contact>
+                {% else %}
+                    <gmd:contact gco:nilReason="missing" />
+                {% endif %}
+                <gmd:dateStamp>
+		    <gco:Date>{{ docs[0]['product_type_last_updated_string'] }}</gco:Date>
+		</gmd:dateStamp>
+		<gmd:metadataStandardName>
+		    <gco:CharacterString>ISO 19115-2 Geographic information — Metadata — Part 2: Extensions for imagery and gridded data</gco:CharacterString>
+		</gmd:metadataStandardName>
+		<gmd:metadataStandardVersion>
+		    <gco:CharacterString>ISO 19115-2:2009-02-15</gco:CharacterString>
+		</gmd:metadataStandardVersion>
+		<gmd:locale>
+		    <gmd:PT_Locale>
+			<gmd:languageCode>
+			    <gmd:LanguageCode
+                                codeList="http://www.loc.gov/standards/iso639-2/php/English_list.php"
+                                codeListValue="eng">eng
+                            </gmd:LanguageCode>
+                	</gmd:languageCode>
+			<gmd:country>
+			    <gmd:Country
+                                codeList="http://www.iso.org/iso/iso_3166-1_list_en.zip"
+                                codeListValue="US">US
+                            </gmd:Country>
+			</gmd:country>
+			<gmd:characterEncoding>
+			    <gmd:MD_CharacterSetCode
+				codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+				codeListValue="UTF8">UTF8
+                            </gmd:MD_CharacterSetCode>
+			</gmd:characterEncoding>
+		    </gmd:PT_Locale>
+		</gmd:locale>
+                <gmd:metadataExtensionInfo>
+		    <gmd:MD_MetadataExtensionInformation>
+			<gmd:extensionOnLineResource>
+			    <gmd:CI_OnlineResource>
+				<gmd:linkage>
+				    <gmd:URL>http://www.ngdc.noaa.gov/metadata/published/19115/GHRSST/ISO/CoverageExtensions.xml</gmd:URL>
+				</gmd:linkage>
+                                <gmd:applicationProfile>
+				    <gco:CharacterString>Web Browser</gco:CharacterString>
+				</gmd:applicationProfile>
+				<gmd:description>
+				    <gco:CharacterString>A description of extensions developed at NGDC to classify coverages.</gco:CharacterString>
+                                </gmd:description>
+				<gmd:function>
+				    <gmd:CI_OnLineFunctionCode
+					codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode"
+					codeListValue="information">information
+                                    </gmd:CI_OnLineFunctionCode>
+				</gmd:function>
+			    </gmd:CI_OnlineResource>
+			</gmd:extensionOnLineResource>
+		    </gmd:MD_MetadataExtensionInformation>
+		</gmd:metadataExtensionInfo>
+		<gmd:identificationInfo>
+		    <gmd:MD_DataIdentification id="seriesIdentification">
+			<gmd:citation>
+			    <gmd:CI_Citation>
+				{% if docs[0].__contains__('product_type_title') %}
+                                    <gmd:title>
+                                        <gco:CharacterString>{{ docs[0]['product_type_title'] }}</gco:CharacterString>
+                                    </gmd:title> 
+                                {% else %} 
+                                    <gmd:title gco:nilReason="missing" />
+			        {% endif %}
+                                {% if docs[0].__contains__('product_type_identifier') %}
+                                    <gmd:alternateTitle>
+                                        <gco:CharacterString>{{ docs[0]['product_type_identifier'] }}</gco:CharacterString>
+                                    </gmd:alternateTitle> 
+                                {% else %} 
+                                    <gmd:alternateTitle gco:nilReason="missing" />
+			        {% endif %}
+				<gmd:date>
+				    <gmd:CI_Date>
+					<gmd:date>					
+					    <gco:Date>{{ docs[0]['product_type_last_updated_string'] }}</gco:Date>
+					</gmd:date>
+                                        <gmd:dateType>
+					    <gmd:CI_DateTypeCode
+						codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode"
+						codeListValue="creation">creation
+                                            </gmd:CI_DateTypeCode>
+					</gmd:dateType>
+				    </gmd:CI_Date>
+				</gmd:date>
+				{% if docs[0].__contains__('product_type_version') %}
+                                    <gmd:edition>
+                                        <gco:CharacterString>{{ docs[0]['product_type_version'] }}</gco:CharacterString>
+                                    </gmd:edition> 
+                                {% else %} 
+                                    <gmd:edition gco:nilReason="missing" />
+			        {% endif %}           
+                                {% if docs[0].__contains__('product_type_provider_resource_type_list') and
+                                    docs[0]['product_type_provider_resource_type_list'].__contains__('ORIGINATOR') %}
+                                    <gmd:citedResponsibleParty>
+                                        {% for i in range(docs[0]['product_type_provider_resource_type_list'].__len__()) %}
+                                            {% if docs[0]['product_type_provider_resource_type_list'][i] == 'ORIGINATOR' %}
+                                                <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                                    <gmd:individualName>                                            
+                                                        <gco:CharacterString>{{ docs[0]['product_type_provider_resource_name_list'][i] }}</gco:CharacterString>
+                                                    </gmd:individualName>
+                                                    <gmd:contactInfo>
+                                                        <gmd:CI_Contact>
+                                                            {% if docs[0]['product_type_provider_resource_path_list'][i] != 'null' %}
+                                                                <gmd:onlineResource>
+                                                                    <gmd:CI_OnlineResource>
+                                                                        <gmd:linkage>
+                                                                            <gmd:URL>{{ docs[0]['product_type_provider_resource_path_list'][i] }}</gmd:URL>
+                                                                        </gmd:linkage>
+                                                                    </gmd:CI_OnlineResource>
+                                                                </gmd:onlineResource>
+                                                            {% else %}
+                                                                <gmd:onlineResource gco:nilReason="missing" />
+                                                            {% endif %}
+                                                        </gmd:CI_Contact>
+                                                    </gmd:contactInfo>
+                                                    <gmd:role>
+                                                        <gmd:CI_RoleCode
+                                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                                            codeListValue="originator">originator
+                                                        </gmd:CI_RoleCode>
+                                                    </gmd:role>
+                                                </gmd:CI_ResponsibleParty>
+                                            {% endif %}
+                                        {% endfor %}                                        
+                                    </gmd:citedResponsibleParty>
+                                {% else %}
+                                    <gmd:citedResponsibleParty>
+                                        <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                            <gmd:individualName gco:nilReason="missing" />
+                                            <gmd:contactInfo gco:nilReason="missing" />
+                                            <gmd:role>
+                                                <gmd:CI_RoleCode
+                                                    codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                                    codeListValue="originator">originator
+                                                </gmd:CI_RoleCode>
+                                            </gmd:role>
+                                        </gmd:CI_ResponsibleParty>
+                                    </gmd:citedResponsibleParty>
+                                {% endif %}        
+                                {% if docs[0].__contains__('product_type_provider_resource_type_list') and
+                                    docs[0]['product_type_provider_resource_type_list'].__contains__('PUBLISHER') %}
+                                    <gmd:citedResponsibleParty>
+                                        {% for i in range(docs[0]['product_type_provider_resource_type_list'].__len__()) %}
+                                            {% if docs[0]['product_type_provider_resource_type_list'][i] == 'PUBLISHER' %}
+                                                <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                                    <gmd:individualName>                                            
+                                                        <gco:CharacterString>{{ docs[0]['product_type_provider_resource_name_list'][i] }}</gco:CharacterString>
+                                                    </gmd:individualName>
+                                                    <gmd:organisationName>
+                                                        <gco:CharacterString>{{ docs[0]['product_type_provider_short_name'] }} &gt; {{ docs[0]['product_type_provider_long_name'] }}</gco:CharacterString>
+                                                    </gmd:organisationName>
+                                                    <gmd:contactInfo>                                    
+                                                        <gmd:CI_Contact>
+                                                            <gmd:address>
+                                                                <gmd:CI_Address>
+                                                                    <gmd:city gco:nilReason="missing" />
+                                                                </gmd:CI_Address>
+                                                            </gmd:address>
+                                                        </gmd:CI_Contact>
+                                                    </gmd:contactInfo>
+                                                    <gmd:role>
+                                                        <gmd:CI_RoleCode
+                                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                                            codeListValue="publisher">publisher
+                                                        </gmd:CI_RoleCode>
+                                                    </gmd:role>
+                                                </gmd:CI_ResponsibleParty>
+                                            {% endif %}
+                                        {% endfor %}
+                                    </gmd:citedResponsibleParty>    
+                                {% else %}
+                                    <gmd:citedResponsibleParty>
+                                        <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                            <gmd:individualName gco:nilReason="missing" />
+                                            <gmd:contactInfo gco:nilReason="missing" />
+                                            <gmd:role>
+                                                <gmd:CI_RoleCode
+                                                    codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                                    codeListValue="publisher">publisher
+                                                </gmd:CI_RoleCode>
+                                            </gmd:role>
+                                        </gmd:CI_ResponsibleParty>
+                                    </gmd:citedResponsibleParty>
+                                {% endif %}   
+                            </gmd:CI_Citation>
+                        </gmd:citation>
+                        {% if docs[0].__contains__('product_type_description') %}
+                            <gmd:abstract>
+                                <gco:CharacterString>{{ docs[0]['product_type_description'] }}</gco:CharacterString>
+                            </gmd:abstract>
+                        {% else %}
+                            <gmd:abstract gco:nilReason="missing" />
+                        {% endif %}
+			<gmd:credit gco:nilReason="missing" />
+			<gmd:status>
+			    <gmd:MD_ProgressCode
+			        codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode"
+                                codeListValue="onGoing">onGoing
+                            </gmd:MD_ProgressCode>
+                        </gmd:status>
+                        {% if docs[0].__contains__('product_type_provider_contact_role_list') and
+                            docs[0]['product_type_provider_contact_role_list'].__contains__('POINT_OF_CONTACT') %}
+                            <gmd:pointOfContact>
+                                {% for i in range(docs[0]['product_type_provider_contact_role_list'].__len__()) %}
+                                    {% if docs[0]['product_type_provider_contact_role_list'][i] == 'POINT_OF_CONTACT' %}
+                                        <gmd:CI_ResponsibleParty id="seriesMetadataContact">
+                                            <gmd:individualName>                                            
+                                                <gco:CharacterString>
+                                                    {% if docs[0]['product_type_provider_contact_first_name_list'][i] != 'null' %} 
+                                                        {{ docs[0]['product_type_provider_contact_first_name_list'][i] }}
+                                                    {% endif %}
+                                                    {% if docs[0]['product_type_provider_contact_middle_name_list'][i] != 'null' %} 
+                                                        {{ docs[0]['product_type_provider_contact_middle_name_list'][i] }}
+                                                    {% endif %} 
+                                                    {{ docs[0]['product_type_provider_contact_last_name_list'][i] }}
+                                                </gco:CharacterString>
+                                            </gmd:individualName>
+                                            <gmd:organisationName>
+                                                <gco:CharacterString>{{ docs[0]['product_type_provider_short_name'] }} &gt; {{ docs[0]['product_type_provider_long_name'] }}</gco:CharacterString>
+                                            </gmd:organisationName>     
+                                            <gmd:contactInfo>
+                                                <gmd:CI_Contact>
+                                                    <gmd:phone>
+                                                        <gmd:CI_Telephone>
+                                                            <gmd:voice>
+                                                                {% if docs[0]['product_type_provider_contact_phone_list'][i] != 'null' %}
+                                                                    <gco:CharacterString>{{ docs[0]['product_type_provider_contact_phone_list'][i] }}</gco:CharacterString>
+                                                                {% else %} 
+                                                                    <gco:CharacterString>GIBS: status-pointOfContact-contactInfo-phone-voice not provided</gco:CharacterString>
+                                                                {% endif %} 
+                                                            </gmd:voice>
+                                                            <gmd:facsimile>
+                                                                {% if docs[0]['product_type_provider_contact_fax_list'][i] != 'null' %}
+                                                                    <gco:CharacterString>{{ docs[0]['product_type_provider_contact_fax_list'][i] }}</gco:CharacterString>
+                                                                {% else %} 
+                                                                    <gco:CharacterString>GIBS: status-pointOfContact-contactInfo-phone-facsimile not provided</gco:CharacterString>
+                                                                {% endif %} 
+                                                            </gmd:facsimile>
+                                                        </gmd:CI_Telephone>
+                                                    </gmd:phone>
+                                                    <gmd:address>
+                                                        <gmd:CI_Address>
+                                                            <gmd:electronicMailAddress>
+                                                                {% if docs[0]['product_type_provider_contact_address_list'][i] != 'null' %}
+                                                                    <gco:CharacterString>{{ docs[0]['product_type_provider_contact_address_list'][i] }}</gco:CharacterString>
+                                                                {% else %} 
+                                                                    <gco:CharacterString>GIBS: status-pointOfContact-contactInfo-address-electronicMailAddress not provided</gco:CharacterString>
+                                                                {% endif %}
+                                                            </gmd:electronicMailAddress>
+                                                        </gmd:CI_Address>
+                                                    </gmd:address>
+                                                    <gmd:contactInstructions>
+                                                        <gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+                                                    </gmd:contactInstructions>
+                                                    {% if docs[0]['product_type_provider_resource_path_list'][i] != 'null' %}
+                                                        <gmd:onlineResource>
+                                                            <gmd:CI_OnlineResource>
+                                                                <gmd:linkage>
+                                                                    <gmd:URL>{{ docs[0]['product_type_provider_resource_path_list'][i] }}</gmd:URL>
+                                                                </gmd:linkage>
+                                                            </gmd:CI_OnlineResource>
+                                                        </gmd:onlineResource>
+                                                    {% else %}
+                                                        <gmd:onlineResource gco:nilReason="missing" />
+                                                    {% endif %}
+                                                </gmd:CI_Contact>
+	    				    </gmd:contactInfo>
+					    <gmd:role>
+					        <gmd:CI_RoleCode
+                                                    codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+                                                    codeListValue="pointOfContact">pointOfContact
+                                                </gmd:CI_RoleCode>
+					    </gmd:role>
+					</gmd:CI_ResponsibleParty>
+				    {% endif %}
+                                {% endfor %}
+                            </gmd:pointOfContact>
+                        {% else %}
+                            <gmd:pointOfContact gco:nilReason="missing" />
+                        {% endif %}
+                        {% if docs[0].__contains__('product_type_resource_name_list') %}    
+                            <gmd:resourceFormat>
+                                {% for i in range(docs[0]['product_type_resource_name_list'].__len__()) %}
+                                    <gmd:MD_Format id="resourceFormat">
+                                        {% if docs[0].__contains__('product_type_resource_name_list') %}
+                                            <gmd:name>
+                                                <gco:CharacterString>{{ docs[0]['product_type_resource_name_list'][i] }}</gco:CharacterString>
+                                            </gmd:name>
+                                        {% else %}
+                                            <gmd:name gco:nilReason="missing" />
+                                        {% endif %}
+                                	{% if docs[0].__contains__('product_type_policy_version_list') %}
+                                            <gmd:version>
+                                                <gco:CharacterString>{{ docs[0]['product_type_policy_version_list'][i] }}</gco:CharacterString>
+                                            </gmd:version>
+                                        {% else %}
+                                            <gmd:version gco:nilReason="missing" />
+                                        {% endif %}
+                                        {% if docs[0].__contains__('product_type_policy_compress_type_list') %}
+                                            <gmd:fileDecompressionTechnique>
+                                                <gco:CharacterString>{{ docs[0]['product_type_policy_compress_type_list'][i] }}</gco:CharacterString>
+                                            </gmd:fileDecompressionTechnique>
+                                        {% else %}
+                                            <gmd:fileDecompressionTechnique gco:nilReason="missing" />
+                                        {% endif %}
+                                    </gmd:MD_Format>
+                                {% endfor %}
+                            </gmd:resourceFormat>
+                        {% else %}
+                            <gmd:resourceFormat gco:nilReason="missing" /> 
+                        {% endif %}                                     
+			{% if docs[0].__contains__('product_type_resource_name_list') %}
+                            {% for i in range(docs[0]['product_type_resource_name_list'].__len__()) %}
+                                <gmd:descriptiveKeywords>
+                                    <gmd:MD_Keywords>
+                                        <gmd:keyword>   
+                                            <gco:CharacterString>
+                                                {{ docs[0]['product_type_resource_name_list'][i] }} &gt;
+                                                {{ docs[0]['product_type_resource_description_list'][i] }} &gt;
+                                                {{ docs[0]['product_type_resource_path_list'][i] }} &gt;
+                                                Type: {{ docs[0]['product_type_resource_type_list'][i] }}
+                                            </gco:CharacterString>
+                                        </gmd:keyword>
+                                        <gmd:type>
+                                            <gmd:MD_KeywordTypeCode
+                                                codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+                                                codeListValue="theme">theme
+                                            </gmd:MD_KeywordTypeCode>
+                                        </gmd:type>
+                                        <gmd:thesaurusName>
+                                            <gmd:CI_Citation>
+                                                <gmd:title>
+                                                    <gco:CharacterString>NASA/GCMD Earth Science Keywords</gco:CharacterString>
+                                                </gmd:title>
+                                                <gmd:date gco:nilReason="unknown" />
+                                            </gmd:CI_Citation>
+                                        </gmd:thesaurusName>
+                                    </gmd:MD_Keywords>
+                                </gmd:descriptiveKeywords>
+                            {% endfor %}
+                        {% else %}
+                            <gmd:descriptiveKeywords>
+                                <gmd:MD_Keywords>
+                                    <gmd:keyword gco:nilReason="missing" /> 
+                                    <gmd:type>
+                                        <gmd:MD_KeywordTypeCode
+                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+                                            codeListValue="theme">theme
+                                        </gmd:MD_KeywordTypeCode>
+                                    </gmd:type>
+                                    <gmd:thesaurusName>
+                                        <gmd:CI_Citation>
+                                            <gmd:title>
+                                                <gco:CharacterString>NASA/GCMD Earth Science Keywords</gco:CharacterString>
+                                            </gmd:title>
+                                            <gmd:date gco:nilReason="unknown" />
+                                        </gmd:CI_Citation>
+                                    </gmd:thesaurusName>
+                                </gmd:MD_Keywords>
+                            </gmd:descriptiveKeywords>
+                        {% endif %}                                     
+                        {% if docs[0].__contains__('product_type_metadata_region_coverage') %}
+                            <gmd:descriptiveKeywords>
+                                <gmd:MD_Keywords>
+                                    <gmd:keyword>
+                                        <gco:CharacterString>{{docs[0]['product_type_metadata_region_coverage'] }}</gco:CharacterString>
+                                    </gmd:keyword>
+                                    <gmd:type>
+                                        <gmd:MD_KeywordTypeCode
+                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+                                            codeListValue="place">place
+                                        </gmd:MD_KeywordTypeCode>
+                                    </gmd:type>
+                                    <gmd:thesaurusName>
+                                        <gmd:CI_Citation>
+                                            <gmd:title>
+                                                <gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+                                            </gmd:title>
+                                            <gmd:date gco:nilReason="unknown" />
+                                        </gmd:CI_Citation>
+                                    </gmd:thesaurusName>
+                                </gmd:MD_Keywords>
+                            </gmd:descriptiveKeywords>
+                        {% else %}
+                            <gmd:descriptiveKeywords>
+                                <gmd:MD_Keywords>
+                                    <gmd:keyword gco:nilReason="missing" />
+                                    <gmd:type>
+                                        <gmd:MD_KeywordTypeCode
+                                            codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+                                            codeListValue="place">place
+                                        </gmd:MD_KeywordTypeCode>
+                                    </gmd:type>
+                                    <gmd:thesaurusName>
+                                        <gmd:CI_Citation>
+                                            <gmd:title>
+                                                <gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+                                            </gmd:title>
+                                            <gmd:date gco:nilReason="unknown" />
+                                        </gmd:CI_Citation>
+                                    </gmd:thesaurusName>
+                                </gmd:MD_Keywords>
+                            </gmd:descriptiveKeywords>
+                        {% endif %}                                      
+                        {% if docs[0].__contains__('product_type_policy_use_constraint_list') %}
+                            {% for i in range(docs[0]['product_type_policy_use_constraint_list'].__len__()) %}
+                                <gmd:resourceConstraints>
+                                    <gmd:MD_LegalConstraints>
+                                        <gmd:useLimitation>
+                                            <gco:CharacterString>{{docs[0]['product_type_policy_use_constraint_list'][i] }}</gco:CharacterString>
+                                        </gmd:useLimitation>                                                        
+                                        {% if docs[0]['product_type_policy_use_constraint_list'][i] != 'null' %}
+                                            <gmd:otherConstraints>
+                                                <gco:CharacterString>{{docs[0]['product_type_policy_use_constraint_list'][i] }}</gco:CharacterString>
+                                            </gmd:otherConstraints>
+                                        {% endif %}
+                                    </gmd:MD_LegalConstraints>
+                                </gmd:resourceConstraints>
+                            {% endfor %}
+                        {% else %}
+                            <gmd:resourceConstraints gco:nilReason="unknown" />
+                        {% endif %}                                      
+			<gmd:spatialRepresentationType>
+			    <gmd:MD_SpatialRepresentationTypeCode
+                                codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_SpatialRepresentationTypeCode"
+                                codeListValue="grid">grid
+                            </gmd:MD_SpatialRepresentationTypeCode>
+		        </gmd:spatialRepresentationType>
+					
+                        <gmd:language>
+                            <gco:CharacterString>eng</gco:CharacterString>
+			</gmd:language>
+				    
+                        <gmd:characterSet>
+                            <gmd:MD_CharacterSetCode
+    				codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+				codeListValue="UTF8">UTF8
+                            </gmd:MD_CharacterSetCode>
+			</gmd:characterSet>
+				
+                        {% if docs[0].__contains__('product_type_coverage_west_longitude_list') %}
+                            <gmd:extent>
+                                {% for i in range(docs[0]['product_type_coverage_west_longitude_list'].__len__()) %}
+                                    <gmd:EX_Extent id="boundingExtent">
+                                        <gmd:geographicElement>
+                                            <gmd:EX_GeographicBoundingBox id="boundingBox">
+                                                <gmd:extentTypeCode>
+                                                    <gco:Boolean>true</gco:Boolean>
+                                                </gmd:extentTypeCode>
+                                                {% if docs[0].__contains__('product_type_coverage_west_longitude_list') %}
+                                                    <gmd:westBoundLongitude>
+                                                        <gco:Decimal>{{ docs[0]['product_type_coverage_west_longitude_list'][i] }}</gco:Decimal>
+                                                    </gmd:westBoundLongitude>
+                                                {% else %}
+                                                    <gmd:westBoundLongitude gco:nilReason="missing" />
+                                                {% endif %}
+						{% if docs[0].__contains__('product_type_coverage_east_longitude_list') %}
+                                                    <gmd:eastBoundLongitude>
+                                                        <gco:Decimal>{{ docs[0]['product_type_coverage_east_longitude_list'][i] }}</gco:Decimal>
+                                                    </gmd:eastBoundLongitude>
+                                                {% else %}
+                                                    <gmd:eastBoundLongitude gco:nilReason="missing" />
+                                                {% endif %}   
+                                                {% if docs[0].__contains__('product_type_coverage_south_latitude_list') %}
+                                                    <gmd:southBoundLatitude>
+                                                        <gco:Decimal>{{ docs[0]['product_type_coverage_south_latitude_list'][i] }}</gco:Decimal>
+                                                    </gmd:southBoundLatitude>
+                                                {% else %}
+                                                    <gmd:southBoundLatitude gco:nilReason="missing" />
+                                                {% endif %}
+                                                {% if docs[0].__contains__('product_type_coverage_north_latitude_list') %}
+                                                    <gmd:northBoundLatitude>
+                                                        <gco:Decimal>{{ docs[0]['product_type_coverage_north_latitude_list'][i] }}</gco:Decimal>
+                                                    </gmd:northBoundLatitude>
+                                                {% else %}
+                                                    <gmd:northBoundLatitude gco:nilReason="missing" />
+                                                {% endif %}
+                                            </gmd:EX_GeographicBoundingBox>
+                                        </gmd:geographicElement>
+                                        <gmd:geographicElement>
+                                            <gmd:EX_GeographicDescription>
+                                                <gmd:extentTypeCode>
+                                                    <gco:Boolean>true</gco:Boolean>
+                                                </gmd:extentTypeCode>
+                                                <gmd:geographicIdentifier>
+                                                    <gmd:MD_Identifier>
+                                                        <gmd:code />
+                                                    </gmd:MD_Identifier>
+                                                </gmd:geographicIdentifier>
+                                            </gmd:EX_GeographicDescription>
+                                        </gmd:geographicElement>
+                                        <gmd:temporalElement>
+                                            <gmd:EX_TemporalExtent id="temporalExtent">
+                                                <gmd:extent>
+                                                    <TimePeriod
+                                                        xmlns="http://www.opengis.net/gml/3.2"
+                                                        xmlns:ns1="http://www.opengis.net/gml/3.2" ns1:id="timePeriod">
+                                                        {% if docs[0].__contains__('product_type_coverage_start_time_list') %}
+                                                            <beginPosition>{{ docs[0]['product_type_coverage_start_time_list'][i] }}</beginPosition>
+                                                        {% else %}
+                                                            <beginPosition gco:nilReason="missing" />
+                                                        {% endif %}
+                                                        {% if docs[0].__contains__('product_type_coverage_stop_time_list') %}
+                                                            <endPosition>{{ docs[0]['product_type_coverage_stop_time_list'][i] }}</endPosition>
+                                                        {% else %}
+                                                            <endPosition gco:nilReason="missing" />
+                                                        {% endif %}
+                                                    </TimePeriod>
+                                                </gmd:extent>
+                                            </gmd:EX_TemporalExtent>
+                                        </gmd:temporalElement>
+                                        <gmd:verticalElement gco:nilReason="inapplicable" />
+                                    </gmd:EX_Extent>
+                                {% endfor %}
+                            </gmd:extent>      
+                        {% else %}
+                            <gmd:extent gco:nilReason="missing" />        
+                        {% endif %}
+                    </gmd:MD_DataIdentification>
+		</gmd:identificationInfo>
+                <gmd:contentInfo>
+		    <gmi:MI_CoverageDescription id="referenceInformation">
+			<gmd:attributeDescription>
+			    <gco:RecordType xlink:href="http://www.ghrsst.org/documents.htm?parent=475" />
+			</gmd:attributeDescription>
+			<gmd:contentType>
+			    <gmd:MD_CoverageContentTypeCode
+				codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CoverageContentTypeCode"
+				codeListValue="referenceInformation">referenceInformation
+                            </gmd:MD_CoverageContentTypeCode>
+			</gmd:contentType>
+			<gmd:dimension>
+			    <gmd:MD_Band>
+				<gmd:sequenceIdentifier>
+				    <gco:MemberName>
+					<gco:aName>
+					    <gco:CharacterString>lat</gco:CharacterString>
+					</gco:aName>
+					<gco:attributeType>
+					    <gco:TypeName>
+						<gco:aName>
+						    <gco:CharacterString>float</gco:CharacterString>
+						</gco:aName>
+					    </gco:TypeName>
+					</gco:attributeType>
+				    </gco:MemberName>
+				</gmd:sequenceIdentifier>
+			    </gmd:MD_Band>
+			</gmd:dimension>
+			<gmd:dimension>
+			    <gmd:MD_Band>
+				<gmd:sequenceIdentifier>
+				    <gco:MemberName>
+					<gco:aName>
+                                            <gco:CharacterString>lon</gco:CharacterString>
+					</gco:aName>
+					<gco:attributeType>
+					    <gco:TypeName>
+						<gco:aName>
+                                                    <gco:CharacterString>float</gco:CharacterString>
+						</gco:aName>
+					    </gco:TypeName>
+					</gco:attributeType>
+				    </gco:MemberName>
+				</gmd:sequenceIdentifier>
+			    </gmd:MD_Band>
+			</gmd:dimension>
+			<gmd:dimension>
+			    <gmd:MD_Band>
+				<gmd:sequenceIdentifier>
+				    <gco:MemberName>
+					<gco:aName>
+					    <gco:CharacterString>time</gco:CharacterString>
+					</gco:aName>
+					<gco:attributeType>
+					    <gco:TypeName>
+						<gco:aName>
+						    <gco:CharacterString>int</gco:CharacterString>
+						</gco:aName>
+					    </gco:TypeName>
+					</gco:attributeType>
+				    </gco:MemberName>
+				</gmd:sequenceIdentifier>
+			    </gmd:MD_Band>
+			</gmd:dimension>
+		    </gmi:MI_CoverageDescription>
+		</gmd:contentInfo>
+                <gmd:distributionInfo>
+		    <gmd:MD_Distribution>
+			<gmd:distributionFormat xlink:href="#resourceFormat" />
+			<gmd:distributor>
+			    <gmd:MD_Distributor>
+			        <gmd:distributorContact>
+                                    <gmd:CI_ResponsibleParty>
+					<gmd:individualName>
+                                            <gco:CharacterString>GIBS User Services</gco:CharacterString>
+				        </gmd:individualName>
+                                	<gmd:organisationName>
+					    <gco:CharacterString>NASA/JPL/GIBS &gt; Global Imagery Browse Services, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+					</gmd:organisationName>
+                                        <gmd:contactInfo>
+					    <gmd:CI_Contact>
+						<gmd:address>
+						    <gmd:CI_Address>
+                                                        <gmd:deliveryPoint>
+							    <gco:CharacterString>4800 Oak Grove Drive</gco:CharacterString>
+						        </gmd:deliveryPoint>
+                                                        <gmd:city>
+                                                            <gco:CharacterString>Pasadena</gco:CharacterString>
+                                                        </gmd:city>
+							<gmd:administrativeArea>
+							    <gco:CharacterString>CA</gco:CharacterString>
+							</gmd:administrativeArea>
+							<gmd:postalCode>
+							    <gco:CharacterString>91109-8099</gco:CharacterString>
+                                                        </gmd:postalCode>
+							<gmd:country>
+							    <gco:CharacterString>USA</gco:CharacterString>
+                                                        </gmd:country>
+							<gmd:electronicMailAddress>
+							    <gco:CharacterString>gibs@gibs.jpl.nasa.gov</gco:CharacterString>
+							</gmd:electronicMailAddress>
+						    </gmd:CI_Address>
+                                                </gmd:address>
+                                                <gmd:onlineResource>
+						    <gmd:CI_OnlineResource>
+							<gmd:linkage>
+							    <gmd:URL>http://gibs.jpl.nasa.gov</gmd:URL>
+						        </gmd:linkage>
+                                                    </gmd:CI_OnlineResource>
+                                                </gmd:onlineResource>
+                                            </gmd:CI_Contact>
+                                        </gmd:contactInfo>
+					<gmd:role>
+                                            <gmd:CI_RoleCode
+						codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+						codeListValue="distributor">distributor
+                                            </gmd:CI_RoleCode>
+                                        </gmd:role>
+				    </gmd:CI_ResponsibleParty>
+			        </gmd:distributorContact>
+                            </gmd:MD_Distributor>
+			</gmd:distributor>
+			{% if docs[0].__contains__('product_type_resource_type_list') %}
+			    {% for i in range(docs[0]['product_type_resource_type_list'].__len__()) %}
+                                {% if docs[0]['product_type_resource_type_list'][i] != 'Thumbnail' %}
+                                    <gmd:transferOptions>
+                                        <gmd:MD_DigitalTransferOptions>
+                                            <gmd:onLine>
+                                                <gmd:CI_OnlineResource>
+                                                    {% if docs[0].__contains__('product_type_resource_path_list') %}
+                                                        <gmd:linkage>		
+                                                            <gmd:URL>{{ docs[0]['product_type_resource_path_list'][i] }}</gmd:URL>
+                                                        </gmd:linkage>
+                                                    {% else %}
+                                                        <gmd:linkage gco:nilReason="missing" />
+                                                    {% endif %}
+                                                    {% if docs[0].__contains__('product_type_resource_name_list') %}
+                                                        <gmd:name>		
+                                                            <gmd:CharacterString>{{ docs[0]['product_type_resource_name_list'][i] }}</gmd:CharacterString>
+                                                        </gmd:name>
+                                                    {% else %}
+                                                        <gmd:name gco:nilReason="missing" />
+                                                    {% endif %}
+						    {% if docs[0].__contains__('product_type_resource_description_list') %}
+                                                        <gmd:description>		
+                                                            <gmd:CharacterString>{{ docs[0]['product_type_resource_description_list'][i] }}</gmd:CharacterString>
+                                                        </gmd:description>
+                                                    {% else %}
+                                                        <gmd:description gco:nilReason="missing" />
+                                                    {% endif %}
+                                                    <gmd:function>
+							<gmd:CI_OnLineFunctionCode
+							    codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode"
+							    codeListValue="information">information
+                                                        </gmd:CI_OnLineFunctionCode>
+						    </gmd:function>
+						</gmd:CI_OnlineResource>
+					    </gmd:onLine>
+					</gmd:MD_DigitalTransferOptions>
+                                    </gmd:transferOptions>
+                                {% endif %}
+			    {% endfor %}	
+		        {% else %} 
+			    <gmd:transferOptions gco:nilReason="missing" />
+                        {% endif %}
+		    </gmd:MD_Distribution>
+	        </gmd:distributionInfo>
+                <gmd:metadataMaintenance>
+		    <gmd:MD_MaintenanceInformation>
+			<gmd:maintenanceAndUpdateFrequency>
+			    <gmd:MD_MaintenanceFrequencyCode
+				codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_MaintenanceFrequencyCode"
+				codeListValue="asNeeded">asNeeded
+                            </gmd:MD_MaintenanceFrequencyCode>
+			</gmd:maintenanceAndUpdateFrequency>
+                        <gmd:maintenanceNote>
+			    <gco:CharacterString>Translated from GCMD DIF</gco:CharacterString>
+			</gmd:maintenanceNote>
+		    </gmd:MD_MaintenanceInformation>
+		</gmd:metadataMaintenance>
+		<gmi:acquisitionInformation>
+		    <gmi:MI_AcquisitionInformation>
+			<gmi:instrument>
+			    <gmi:MI_Instrument>
+                                {% if docs[0].__contains__('product_type_identifier') %}
+                                    <gmi:identifier>
+                                        <gmd:MD_Identifier>
+                                            <gmd:code>
+                                                <gco:CharacterString>{{ docs[0]['product_type_identifier'] }}</gco:CharacterString>
+                                            </gmd:code>
+                                        </gmd:MD_Identifier>
+                                    </gmi:identifier>
+                                {% else %} 
+                                    <gmi:identifier gco:nilReason="missing" />
+                                {% endif %}    
+                                {% if docs[0].__contains__('product_type_metadata_instrument') %}
+                                    <gmi:description>
+                                        <gco:CharacterString>{{ docs[0]['product_type_metadata_instrument'] }}</gco:CharacterString>								
+                                    </gmi:description>
+                                {% else %} 
+                                    <gmi:description gco:nilReason="missing" />
+                                {% endif %}
+			    </gmi:MI_Instrument>
+			</gmi:instrument>
+			<gmi:platform>
+			    <gmi:MI_Platform>
+				{% if docs[0].__contains__('product_type_identifier') %}
+                                    <gmi:identifier>
+                                        <gmd:MD_Identifier>
+                                            <gmd:code>
+                                                <gco:CharacterString>{{ docs[0]['product_type_identifier'] }}</gco:CharacterString>
+                                            </gmd:code>
+                                        </gmd:MD_Identifier>
+                                    </gmi:identifier>
+                                {% else %} 
+                                    <gmi:identifier gco:nilReason="missing" />
+                                {% endif %}			
+				{% if docs[0].__contains__('product_type_metadata_platform') %}
+                                    <gmi:description>
+                                        <gco:CharacterString>{{ docs[0]['product_type_metadata_platform'] }}</gco:CharacterString>								
+                                    </gmi:description>
+                                {% else %} 
+                                    <gmi:description gco:nilReason="missing" />
+                                {% endif %}
+				<gmi:sponsor>
+				    <gmd:CI_ResponsibleParty>
+                                        {% if docs[0].__contains__('product_type_metadata_project') %}
+                                            <gmd:organisationName>
+                                                <gco:CharacterString>{{ docs[0]['product_type_metadata_project'] }}</gco:CharacterString>
+                                            </gmd:organisationName>
+                                        {% else %} 
+                                            <gmd:organisationName gco:nilReason="missing" />
+                                        {% endif %}
+					<gmd:contactInfo>
+					    <gmd:CI_Contact>
+                                                {% if docs[0].__contains__('product_type_provider_resource_path_list') and docs[0]['product_type_provider_resource_path_list'][0] != 'null' %}
+                                                    <gmd:onlineResource>
+                                                        <gmd:CI_OnlineResource>
+                                                            <gmd:linkage>
+                                                                <gmd:URL>{{ docs[0]['product_type_provider_resource_path_list'][0] }}</gmd:URL>
+                                                            </gmd:linkage>
+                                                        </gmd:CI_OnlineResource>
+                                                    </gmd:onlineResource>
+                                                {% else %}
+                                                    <gmd:onlineResource gco:nilReason="missing" />
+                                                {% endif %}
+					    </gmd:CI_Contact>
+					</gmd:contactInfo>
+					<gmd:role>
+					    <gmd:CI_RoleCode
+						codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+						codeListValue="sponsor" />
+					</gmd:role>
+				    </gmd:CI_ResponsibleParty>
+				</gmi:sponsor>
+				<gmi:sponsor>
+				    <gmd:CI_ResponsibleParty>
+					<gmd:organisationName>
+					    <gco:CharacterString>NASA/JPL/GIBS &gt; Global Imagery Browse Services, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+					</gmd:organisationName>
+					<gmd:contactInfo>
+					    <gmd:CI_Contact>
+						<gmd:onlineResource>
+						    <gmd:CI_OnlineResource>
+							<gmd:linkage>
+							    <gmd:URL>http://gibs.jpl.nasa.gov</gmd:URL>
+							</gmd:linkage>
+							<gmd:function>
+							    <gmd:CI_OnLineFunctionCode
+								codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode"
+								codeListValue="information" />
+							</gmd:function>
+						    </gmd:CI_OnlineResource>
+	    					</gmd:onlineResource>
+					    </gmd:CI_Contact>
+					</gmd:contactInfo>
+					<gmd:role>
+					    <gmd:CI_RoleCode
+						codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+						codeListValue="sponsor" />
+					</gmd:role>
+				    </gmd:CI_ResponsibleParty>
+				</gmi:sponsor>
+				<gmi:instrument xlink:href="{{ docs[0]['product_type_title'] }}" />
+			    </gmi:MI_Platform>
+			</gmi:platform>
+		    </gmi:MI_AcquisitionInformation>
+		</gmi:acquisitionInformation>
+            </gmi:MI_Metadata>
+	</gmd:seriesMetadata>
+</gmd:DS_Series>
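
This ISO document is produced by rendering the Jinja2 template above against a
list of Solr documents. A minimal standalone rendering sketch; the sample
document and the local file name iso_template.xml are illustrative assumptions,
and if the full template references further variables set by the response
writer, they must be passed to render() as well:

    from jinja2 import Template

    # Hypothetical Solr document carrying only the coverage fields read by the
    # bounding-box section; real documents follow the product_type schema below.
    docs = [{
        'product_type_coverage_west_longitude_list': [-180.0],
        'product_type_coverage_east_longitude_list': [180.0],
        'product_type_coverage_south_latitude_list': [-90.0],
        'product_type_coverage_north_latitude_list': [90.0],
    }]

    template = Template(open('iso_template.xml').read())   # assumed local copy
    print template.render(docs=docs)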

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/samos/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/samos/__init__.py b/src/main/python/plugins/samos/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/samos/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/samos/json/Writer.py b/src/main/python/plugins/samos/json/Writer.py
new file mode 100644
index 0000000..3b2ac7d
--- /dev/null
+++ b/src/main/python/plugins/samos/json/Writer.py
@@ -0,0 +1,89 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse(searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    filterQueries.append('time:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    filterQueries.append('time:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('loc:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'variable':
+                    if value.lower() == 'sss':
+                        filterQueries.append('SSS:[*%20TO%20*]')
+                    elif value.lower() == 'sst':
+                        filterQueries.append('SST:[*%20TO%20*]')
+                    elif value.lower() == 'wind':
+                        filterQueries.append('wind_speed:[*%20TO%20*]')
+                elif key == "minDepth":
+                    if 'variable' in parameters:
+                        if parameters['variable'].lower() == 'sss':
+                            filterQueries.append('(SSS_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20SSS_depth:*))')
+                        elif parameters['variable'].lower() == 'sst':
+                            filterQueries.append('(SST_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20SST_depth:*))')
+                        elif parameters['variable'].lower() == 'wind':
+                            filterQueries.append('(wind_depth:['+value+'%20TO%20*]+OR+(*:*%20NOT%20wind_depth:*))')
+                elif key == "maxDepth":
+                    if 'variable' in parameters:
+                        if parameters['variable'].lower() == 'sss':
+                            filterQueries.append('(SSS_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20SSS_depth:*))')
+                        elif parameters['variable'].lower() == 'sst':
+                            filterQueries.append('(SST_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20SST_depth:*))')
+                        elif parameters['variable'].lower() == 'wind':
+                            filterQueries.append('(wind_depth:[*%20TO%20'+value+']+OR+(*:*%20NOT%20wind_depth:*))')
+                elif key == 'platform':
+                    if type(value) is list:
+                        filterQueries.append('platform:(' + '+OR+'.join(value) + ')')
+                    else:
+                        filterQueries.append('platform:'+value)
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        if 'stats' in parameters and parameters['stats'].lower() == 'true':
+            query += '&stats=true&stats.field={!min=true%20max=true}SSS_depth&stats.field={!min=true%20max=true}SST_depth&stats.field={!min=true%20max=true}wind_depth'
+
+        if 'facet' in parameters and parameters['facet'].lower() == 'true':
+            query += '&facet=true&facet.field=platform&facet.field=device&facet.limit=-1&facet.mincount=1'
+
+        logging.debug('solr query: '+query)
+
+        return query
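
A quick sketch of the Solr query _constructSolrQuery assembles for a typical
request. The bbox is assumed to arrive in the usual OpenSearch
west,south,east,north order (the writer swaps it into Solr's lat,lon corner
pairs), and the (*:*%20NOT%20<field>_depth:*) clauses deliberately keep records
that carry no depth value at all. Filter order follows dict iteration, so this
is one possible output, wrapped here for readability:

    params = {'variable': 'sst',
              'bbox': '-150,10,-120,30',        # assumed west,south,east,north
              'startTime': '2012-01-01T00:00:00Z'}

    # _constructSolrQuery(0, 10, params, {}) then builds:
    #
    #   q=*:*&wt=json&start=0&rows=10
    #     &fq=time:[2012-01-01T00:00:00Z%20TO%20*]
    #       +AND+loc:[10,-150%20TO%2030,-120]
    #       +AND+SST:[*%20TO%20*]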

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/samos/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/samos/json/__init__.py b/src/main/python/plugins/samos/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/samos/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/samos/json/plugin.conf b/src/main/python/plugins/samos/json/plugin.conf
new file mode 100644
index 0000000..2df9aa0
--- /dev/null
+++ b/src/main/python/plugins/samos/json/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/samos
+entriesPerPage=10
+maxEntriesPerPage=100000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,minDepth,maxDepth,variable,stats,platform,facet
+facets={}
+
+[service]
+url=http://doms.coaps.fsu.edu
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/samos/json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/samos/json/template.json b/src/main/python/plugins/samos/json/template.json
new file mode 100755
index 0000000..e3071b5
--- /dev/null
+++ b/src/main/python/plugins/samos/json/template.json
@@ -0,0 +1,63 @@
+{
+{% if last %}"last": "{{ last }}",{% endif %}
+{% if prev %}"prev": "{{ prev }}",{% endif %}
+{% if next %}"next": "{{ next }}",{% endif %}
+{% if first %}"first": "{{ first }}",{% endif %}
+"results":[
+{% for doc in docs %}
+{
+"time": "{{ doc['time'] }}",
+"point": "Point({{ doc['loc'].split(',')[1] }} {{ doc['loc'].split(',')[0] }})",
+"sea_water_temperature": {{ doc['SST'] | jsonify }},
+"sea_water_temperature_depth": {{ doc['SST_depth'] | jsonify }},
+"sea_water_temperature_quality": {% if doc['SST_quality'] == False %}0{% elif doc['SST_quality'] == True %}1{% else %}null{% endif %},
+"wind_speed": {{ doc['wind_speed'] | jsonify }},
+"eastward_wind": {{ doc['wind_u'] | jsonify }},
+"northward_wind": {{ doc['wind_v'] | jsonify }},
+"wind_depth": {{ doc['wind_depth'] | jsonify }},
+"wind_quality": {% if doc['wind_quality'] == False %}0{% elif doc['wind_quality'] == True %}1{% else %}null{% endif %},
+"sea_water_salinity": {{ doc['SSS'] | jsonify }},
+"sea_water_salinity_depth": {{ doc['SSS_depth'] | jsonify }},
+"sea_water_salinity_quality": {% if doc['SSS_quality'] == False %}0{% elif doc['SSS_quality'] == True %}1{% else %}null{% endif %},
+"platform": {{ doc['platform'] | jsonify }},
+"device": {{ doc['device'] | jsonify }},
+"mission": {{ doc['mission'] | jsonify }},
+"metadata": "{{ doc['meta'] }}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+,"totalResults":{{ numFound }}
+,"startIndex":{{ startIndex  }}
+,"itemsPerPage":{{ itemsPerPage }}
+{% if stats %}
+,
+"stats_fields": {{ stats['stats_fields'] | jsonify }}
+{% endif %}
+{% if facets %}
+,
+"facets":[
+{% for key, facet in facets['facet_fields'].iteritems() %}
+{
+"field": "{{ key }}",
+"values":[
+{% for i in range(0, facet|count, 2) %}
+{
+"count":{{facet[i+1] }},
+"value": "{{ facet[i] }}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}
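
The jsonify filter used throughout this template is registered by the response
machinery (its implementation is not part of this diff); json.dumps is a
reasonable stand-in for exercising the template directly. A self-contained
sketch with one made-up observation, assuming a local copy of template.json:

    import json
    from jinja2 import Environment

    env = Environment()
    env.filters['jsonify'] = json.dumps   # stand-in for the framework's filter

    tmpl = env.from_string(open('template.json').read())

    doc = {'time': '2012-01-01T00:00:00Z', 'loc': '10.5,-150.25',   # lat,lon
           'SST': 25.1, 'SST_depth': 5.0, 'SST_quality': True,
           'SSS': None, 'SSS_depth': None, 'SSS_quality': None,
           'wind_speed': None, 'wind_u': None, 'wind_v': None,
           'wind_depth': None, 'wind_quality': None,
           'platform': '30', 'device': None, 'mission': 'SAMOS', 'meta': ''}

    print tmpl.render(docs=[doc], numFound=1, startIndex=0, itemsPerPage=10,
                      stats=None, facets=None)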

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/__init__.py b/src/main/python/plugins/slcp/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/atom/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/atom/Writer.py b/src/main/python/plugins/slcp/atom/Writer.py
new file mode 100644
index 0000000..7bdae8c
--- /dev/null
+++ b/src/main/python/plugins/slcp/atom/Writer.py
@@ -0,0 +1,86 @@
+import logging
+import os
+import os.path
+import urllib
+import json
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+        response.variables['serviceUrl'] = self._configuration.get('service', 'url')
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        sortKeys = json.loads(self._configuration.get('solr', 'sortKeys'))
+
+        queries = []
+        filterQueries = []
+        sort = None
+        sortDir = 'asc'
+        start = '*'
+        end = '*'
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    start = value
+                elif key == 'endTime':
+                    end = value
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('Spatial-Geometry:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'concept_id':
+                    queries.append('concept-id:' + self._urlEncodeSolrQueryValue(value))
+                elif key == 'sortKey':
+                    if value in sortKeys.keys():
+                        sort = sortKeys[value]
+                elif key == 'sortDir':
+                    sortDir = value
+                elif key == 'inDAT':
+                    filterQueries.append('InDAT:%s' % value)
+        queries.append('(BeginningEndingDateTime:['+start+'%20TO%20' + end + ']+OR+(*:*%20NOT%20BeginningEndingDateTime:*))')
+
+        for key, value in facets.iteritems():
+            if type(value) is list:
+                if (len(value) == 1):
+                    filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value[0]))
+                else:
+                    filterQueries.append(key + ':(' + '+OR+'.join([ self._urlEncodeSolrQueryValue(x) for x in value ]) + ")")
+            else:    
+                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+        
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&facet.mincount=1&'
+            query += '&'.join(['facet.field=' + facet for facet in self.facetDefs.values()])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+            if sort is not None:
+                query += '&sort=' + urllib.quote(sort + ' ' + sortDir + ",InternalVersion desc")
+            else:
+                query += '&sort=' + urllib.quote("score desc,InternalVersion desc")
+
+        logging.debug('solr query: '+query)
+
+        return query
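
Sorting in this writer goes through the sortKeys mapping loaded from the
plugin.conf shown below, and InternalVersion desc is always appended as a
tie-breaker. A sketch of the query produced for one request (illustrative, not
a captured log; output wrapped for readability):

    # With sortKeys mapping 'Start_Date' -> 'BeginningDateTime':
    params = {'keyword': 'ssh', 'sortKey': 'Start_Date', 'sortDir': 'desc'}

    # When facet browsing is off, _constructSolrQuery(0, 10, params, {}) yields:
    #
    #   q=ssh+AND+(BeginningEndingDateTime:[*%20TO%20*]
    #       +OR+(*:*%20NOT%20BeginningEndingDateTime:*))
    #   &version=2.2&indent=on&wt=json&start=0&rows=10
    #   &sort=BeginningDateTime%20desc%2CInternalVersion%20desc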

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/atom/__init__.py b/src/main/python/plugins/slcp/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/atom/plugin.conf b/src/main/python/plugins/slcp/atom/plugin.conf
new file mode 100644
index 0000000..428406f
--- /dev/null
+++ b/src/main/python/plugins/slcp/atom/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/dataset
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,concept_id,sortKey,sortDir,inDAT
+facets={"Collection": "Collection", "Processing_Level": "ProcessingLevelBin", "Swath_Spatial_Resolution": "SwathSpatialResolution", "Grid_Spatial_Resolution": "GridSpatialResolution", "Temporal_Resolution": "TemporalResolution", "Parameter": "TermKeyword"}
+sortKeys={"Relevance": "score", "Long_Name": "LongName", "Short_Name": "ShortName", "Processing_Level": "ProcessingLevelId", "Start_Date": "BeginningDateTime", "Stop_Date": "EndingDateTime", "Last_Updated": "LastUpdate"}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/atom/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/atom/template.xml b/src/main/python/plugins/slcp/atom/template.xml
new file mode 100755
index 0000000..3b7fdce
--- /dev/null
+++ b/src/main/python/plugins/slcp/atom/template.xml
@@ -0,0 +1,148 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed esipdiscovery:version="1.2" xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/terms/" xmlns:echo="http://www.echo.nasa.gov/esip" xmlns:esipdiscovery="http://commons.esipfed.org/ns/discovery/1.2/" xmlns:georss="http://www.georss.org/georss/10" xmlns:gml="http://www.opengis.net/gml" xmlns:os="http://a9.com/-/spec/opensearch/1.1/" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
+<updated>{{ updated }}</updated>
+<id>https://api.echo.nasa.gov:443/opensearch/datasets.atom</id>
+<author>
+<name>ECHO</name>
+<email>support@echo.nasa.gov</email>
+</author>
+<title type="text">ECHO dataset metadata</title>
+<os:totalResults>{{ numFound }}</os:totalResults>
+<os:itemsPerPage>{{ itemsPerPage }}</os:itemsPerPage>
+<os:startIndex>{{ startIndex }}</os:startIndex>
+<os:Query role="request" xmlns:echo="http://www.echo.nasa.gov/esip" xmlns:geo="http://a9.com/-/opensearch/extensions/geo/1.0/" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/" />
+<subtitle type="text">Search parameters: None</subtitle>
+<link href="https://api.echo.nasa.gov:443/opensearch/granules/descriptor_document.xml" hreflang="en-US" rel="search" type="application/opensearchdescription+xml" />
+<link href="{{ myself }}" hreflang="en-US" rel="self" type="application/atom+xml" />
+{% if last %}<link href="{{ last }}" hreflang="en-US" rel="last" type="application/atom+xml" />{% endif %}
+{% if prev %}<link href="{{ prev }}" hreflang="en-US" rel="previous" type="application/atom+xml" />{% endif %}
+{% if next %}<link href="{{ next }}" hreflang="en-US" rel="next" type="application/atom+xml" />{% endif %}
+{% if first %}<link href="{{ first }}" hreflang="en-US" rel="first" type="application/atom+xml" />{% endif %}
+<link href="https://wiki.earthdata.nasa.gov/display/echo/Open+Search+API+release+information" hreflang="en-US" rel="describedBy" title="Release Notes" type="text/html" />
+{% for doc in docs %}
+<entry>
+<id>{{ link }}?concept_id={{ doc['concept-id'] }}</id>
+<dc:identifier>{{ doc['SlcpShortName'] }}</dc:identifier>
+<author>
+<name>ECHO</name>
+<email>support@echo.nasa.gov</email>
+</author>
+<title type="text">{{ doc['LongName'] }}</title>
+<summary type="text">{{ doc['Description'] }}</summary>
+<updated>{{ doc['LastUpdate'] }}</updated>
+<echo:datasetId>{{ doc['DataSetId'] }}</echo:datasetId>
+<echo:shortName>{{ doc['ShortName'] }}</echo:shortName>
+<echo:longName>{{ doc['LongName'] }}</echo:longName>
+<echo:versionId>{{ doc['VersionId'] }}</echo:versionId>
+<echo:dataCenter>{{ doc['ArchiveCenter'] }}</echo:dataCenter>
+{% if doc['ProcessingLevelId'] %}
+<echo:processingLevelId>{{ doc['ProcessingLevelId'] }}</echo:processingLevelId>
+{% endif %}
+{% if doc['DataFormat'] %}
+<echo:dataFormat>{{ doc['DataFormat'] }}</echo:dataFormat>
+{% endif %}
+{% if doc['CategoryKeyword'] %}
+<echo:scienceKeywords>
+{% for i in range(doc['CategoryKeyword']|count)  %}
+<echo:scienceKeyword>
+<echo:categoryKeyword>{{ doc['CategoryKeyword'][i] }}</echo:categoryKeyword>
+<echo:topicKeyword>{{ doc['TopicKeyword'][i] }}</echo:topicKeyword>
+<echo:termKeyword>{{ doc['TermKeyword'][i] }}</echo:termKeyword>
+<echo:variableLevel1Keyword>
+<echo:value>{{ doc['VariableLevel1Keyword'][i] }}</echo:value>
+</echo:variableLevel1Keyword>
+{% if doc['DetailedVariableKeyword'] %}
+<echo:detailedVariableKeyword>{{ doc['DetailedVariableKeyword'][i] }}</echo:detailedVariableKeyword>
+{% endif %}
+</echo:scienceKeyword>
+{% endfor %}
+</echo:scienceKeywords>
+{% endif %}
+<echo:platforms>
+{% for i in range(doc['Platform-ShortName']|count)  %}
+<echo:platform>
+<echo:shortName>{{ doc['Platform-ShortName'][i] }}</echo:shortName>
+<echo:longName>{{ doc['Platform-LongName'][i] }}</echo:longName>
+{% if doc['Instrument-ShortName_' + i|string] %}
+<echo:instruments>
+{% for j in range(doc['Instrument-ShortName_' + i|string]|count)  %}
+<echo:instrument>
+<echo:shortName>{{ doc['Instrument-ShortName_' + i|string][j] }}</echo:shortName>
+<echo:longName>{{ doc['Instrument-LongName_' + i|string][j] }}</echo:longName>
+{% if doc['Sensor-ShortName_' + i|string + '_' + j|string] %}
+<echo:sensors>
+{% for k in range(doc['Sensor-ShortName_' + i|string + '_' + j|string]|count)  %}
+<echo:sensor>
+<echo:shortName>{{ doc['Sensor-ShortName_' + i|string + '_' + j|string][k] }}</echo:shortName>
+<echo:longName>{{ doc['Sensor-LongName_' + i|string + '_' + j|string][k] }}</echo:longName>
+</echo:sensor>
+{% endfor %}
+</echo:sensors>
+{% endif %}
+</echo:instrument>
+{% endfor %}
+</echo:instruments>
+{% endif %}
+</echo:platform>
+{% endfor %}
+</echo:platforms>
+<echo:additionalAttributes>
+{% for key in ['GlobalAttrTitle', 'GlobalAttrDescription', 'GlobalAttrSource', 'GlobalAttrContact', 'GlobalAttrUnits'] %}
+{% if doc[key] %}
+<echo:additionalAttribute>
+<echo:name>{{ key }}</echo:name>
+<echo:value>{{ doc[key] }}</echo:value>
+</echo:additionalAttribute>
+{% endif %}
+{% endfor %}
+</echo:additionalAttributes>
+{% for i in range(doc['OnlineAccessURL-URL']|count)  %}
+{% if doc['OnlineAccessURL-URLDescription'][i].endswith('PO.DAAC') %}
+{% elif doc['OnlineAccessURL-URLDescription'][i].endswith('NODC') %}
+<link href="{{ doc['OnlineAccessURL-URL'][i] }}" hreflang="en-US" rel="enclosure" {% if doc['OnlineAccessURL-URLDescription'] and doc['OnlineAccessURL-URLDescription'][i] != ''  %} title="Data Access {{ doc['OnlineAccessURL-URLDescription'][i][:-5] }}" {% endif %} />
+{% else %}
+<link href="{{ doc['OnlineAccessURL-URL'][i] }}" hreflang="en-US" rel="enclosure" {% if doc['OnlineAccessURL-URLDescription'] and doc['OnlineAccessURL-URLDescription'][i] != ''  %} title="Data Access {{ doc['OnlineAccessURL-URLDescription'][i] }}" {% endif %} />
+{% endif %}
+{% endfor %}
+{% for i in range(doc['OnlineResource-URL']|count)  %}
+<link href="{{ doc['OnlineResource-URL'][i] }}" hreflang="en-US" rel="describedBy" {% if doc['OnlineResource-Description'] and doc['OnlineResource-Description'][i] != ''  %} title="{{ doc['OnlineResource-Description'][i] }}" {% endif %} />
+{% endfor %}
+{% if doc['CoordinateSystem'] %}
+<echo:coordinateSystem>{{ doc['CoordinateSystem'] }}</echo:coordinateSystem>
+{% endif %}
+{% if 0 == 1 %}
+<echo:orbitParameters />
+<relevance:score>0.5</relevance:score>
+{% endif %}
+{% for point in doc['Spatial-Point']  %}
+<georss:point>{{ point }}</georss:point>
+{% endfor %}
+{% for line in doc['Spatial-Line']  %}
+<georss:line>{{ line }}</georss:line>
+{% endfor %}
+{% for box in doc['Spatial-Box']  %}
+<georss:box>{{ box }}</georss:box>
+{% endfor %}
+{% for polygon in doc['Spatial-Polygon']  %}
+<georss:polygon>{{ polygon }}</georss:polygon>
+{% endfor %}
+{% if 0 == 1 %}
+<link href="http://gcmd.nasa.gov/getdif.htm?FIFE_STRM_15M" hreflang="en-US" rel="enclosure" title="doi:10.3334/ORNLDAAC/1" type="text/html" />
+{% endif %}
+{% if doc['has-granules'] == 'true'  %}
+<link href="https://api.echo.nasa.gov:443/opensearch/granules.atom?clientId=&amp;shortName={{ doc['ShortName'] }}&amp;versionId={{ doc['VersionId'] }}&amp;dataCenter={{ doc['ArchiveCenter'] }}" hreflang="en-US" rel="search" title="Search for granules" type="application/atom+xml" />
+{% endif %}
+{% if 0 == 1 %}
+<link href="https://api.echo.nasa.gov:443/opensearch/granules/descriptor_document.xml?clientId=&amp;shortName=doi:10.3334/ORNLDAAC/1&amp;versionId=1&amp;dataCenter=ORNL_DAAC" hreflang="en-US" rel="search" title="Custom ECHO Granule Open Search Descriptor Document" type="application/opensearchdescription+xml" />
+{% endif %}
+<link href="{{ serviceUrl }}/ws/metadata/dataset?slcpShortName={{ doc['SlcpShortName'] }}" hreflang="en-US" rel="alternate" title="Product metadata" type="application/xml" />
+{% for dt in doc['BeginningEndingDateTime']  %}
+{% if ' ' not in dt %}
+<dc:date>{{ dt }}/{{ dt }}</dc:date>
+{% else %}
+<dc:date>{{ dt[1:dt.index(' ')] }}/{% if '*' not in dt %}{{ dt[dt.rindex(' ')+1:-1] }}{% endif %}</dc:date>
+{% endif %}
+{% endfor %}
+</entry>
+{% endfor %}
+</feed>
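
The dc:date handling at the end of each entry infers its input format from the
slicing: BeginningEndingDateTime values are either a bare timestamp or a
bracketed range on the pattern "[start TO end]", with "*" marking an open end
(this format is an inference from the template, not documented in this commit).
The equivalent Python:

    def to_dc_date(dt):
        # Bare timestamp: emit a degenerate start/start range.
        if ' ' not in dt:
            return '%s/%s' % (dt, dt)
        start = dt[1:dt.index(' ')]          # strip '[' through ' TO '
        end = '' if '*' in dt else dt[dt.rindex(' ') + 1:-1]   # open end -> empty
        return '%s/%s' % (start, end)

    print to_dc_date('[2002-06-01T00:00:00Z TO 2011-10-04T23:59:59Z]')
    # 2002-06-01T00:00:00Z/2011-10-04T23:59:59Z
    print to_dc_date('[2002-06-01T00:00:00Z TO *]')
    # 2002-06-01T00:00:00Z/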

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/basin/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/basin/Writer.py b/src/main/python/plugins/slcp/basin/Writer.py
new file mode 100644
index 0000000..77bfc9a
--- /dev/null
+++ b/src/main/python/plugins/slcp/basin/Writer.py
@@ -0,0 +1,35 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query
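
Unlike the other writers, this one ignores every request parameter and always
issues a match-all query; combined with entriesPerPage=201 in the plugin.conf
below, a single request effectively returns the entire basin collection:

    # _constructSolrQuery(0, 201, {}, {}) always returns:
    #   q=*:*&version=2.2&indent=on&wt=json&start=0&rows=201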

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/basin/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/basin/__init__.py b/src/main/python/plugins/slcp/basin/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/basin/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/basin/plugin.conf b/src/main/python/plugins/slcp/basin/plugin.conf
new file mode 100644
index 0000000..ea5d9e3
--- /dev/null
+++ b/src/main/python/plugins/slcp/basin/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/basin
+entriesPerPage=201
+maxEntriesPerPage=201
+defaultSearchParam=keyword
+parameters=keyword
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/basin/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/basin/template.json b/src/main/python/plugins/slcp/basin/template.json
new file mode 100755
index 0000000..8947699
--- /dev/null
+++ b/src/main/python/plugins/slcp/basin/template.json
@@ -0,0 +1,68 @@
+[
+{% for doc in docs %}
+{
+"name": {{ doc['name'] | jsonify }},
+{% set bbox = doc['bbox'][9:-1].split(',') %}
+"bbox": {
+"maxx": {{ bbox[1] }},
+"maxy": {{ bbox[2] }},
+"minx": {{ bbox[0] }},
+"miny": {{ bbox[3] }}
+},
+"polygon": [
+{% if doc['polygon'].startswith('POLYGON') %}
+[
+{% set points = doc['polygon'][9:-2].split(',') %}
+{% for pt in points %}
+[
+{{ pt.replace(' ', ',') }}
+]
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% else %}
+{% set polygons = doc['polygon'][16:-3].split(')), ((') %}
+{% for polygon in polygons %}
+[
+{% set points = polygon.split(', ') %}
+{% for pt in points %}
+[
+{{ pt.replace(' ', ',') }}
+]
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+{% endif %}
+],
+"esri": {{ doc['esri'] | jsonify }},
+"articCode": {{ doc['articCode'] }},
+"subContinentName": {{ doc['subContinentName'] | jsonify }},
+"color": {{ doc['color'] }},
+"seaName": {{ doc['seaName'] | jsonify }},
+"seaBasinCode": {{ doc['seaBasinCode'] }},
+"basinArea": {{ doc['basinArea'] }},
+"basinOrder": {{ doc['basinOrder'] }},
+"oceanCode": {{ doc['oceanCode'] }},
+"symbolFLD": {{ doc['symbolFLD'] | jsonify }},
+"basinLength": {{ doc['basinLength'] }},
+"seaCode": {{ doc['seaCode'] }},
+"oceanName": {{ doc['oceanName'] | jsonify }},
+"up6": {{ doc['up6'] | jsonify }},
+"subContinentCode": {{ doc['subContinentCode'] }},
+"hydro": {{ doc['hydro'] | jsonify }},
+"seaBasinName": {{ doc['seaBasinName'] | jsonify }},
+"id": {{ doc['id'] }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
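
The slicing offsets in this template imply specific layouts for the Solr
fields, inferred here rather than documented: bbox as
"ENVELOPE(minX, maxX, maxY, minY)" (Solr's rectangle syntax), single polygons
as "POLYGON((x y,x y,...))" with no space before the parentheses, and
multipolygons as "MULTIPOLYGON (((...)), ((...)))". Note the POLYGON branch
splits points on ',' while the MULTIPOLYGON branch splits on ', ', so the two
shapes are assumed to use slightly different spacing. In plain Python:

    bbox = 'ENVELOPE(-180.0, 180.0, 90.0, -90.0)'    # minX, maxX, maxY, minY
    minx, maxx, maxy, miny = bbox[9:-1].split(',')   # strip 'ENVELOPE(' and ')'

    poly = 'POLYGON((30 10,40 40,20 40,30 10))'      # assumed spacing
    points = poly[9:-2].split(',')                   # strip 'POLYGON((' and '))'
    ring = [[float(n) for n in pt.split()] for pt in points]
    # ring == [[30.0, 10.0], [40.0, 40.0], [20.0, 40.0], [30.0, 10.0]]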



[03/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product_type/conf/schema.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product_type/conf/schema.xml b/src/main/solr/product_type/conf/schema.xml
new file mode 100644
index 0000000..808bce1
--- /dev/null
+++ b/src/main/solr/product_type/conf/schema.xml
@@ -0,0 +1,1262 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--  
+ This is the Solr schema file. This file should be named "schema.xml" and
+ should be in the conf directory under the solr home
+ (i.e. ./solr/conf/schema.xml by default) 
+ or located where the classloader for the Solr webapp can find it.
+
+ This example schema is the recommended starting point for users.
+ It should be kept correct and concise, usable out-of-the-box.
+
+ For more information, on how to customize this file, please see
+ http://wiki.apache.org/solr/SchemaXml
+
+ PERFORMANCE NOTE: this schema includes many optional features and should not
+ be used for benchmarking.  To improve performance one could
+  - set stored="false" for all fields possible (esp large fields) when you
+    only need to search on the field but don't need to return the original
+    value.
+  - set indexed="false" if you don't need to search on the field, but only
+    return the field as a result of searching on other indexed fields.
+  - remove all unneeded copyField statements
+  - for best index size and searching performance, set "index" to false
+    for all general text fields, use copyField to copy them to the
+    catchall "text" field, and use that for searching.
+  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
+    java client.
+  - Remember to run the JVM in server mode, and use a higher logging level
+    that avoids logging every request
+-->
+
+<schema name="product_type-schema" version="1.5">
+  <!-- attribute "name" is the name of this schema and is only used for display purposes.
+       version="x.y" is Solr's version number for the schema syntax and 
+       semantics.  It should not normally be changed by applications.
+
+       1.0: multiValued attribute did not exist, all fields are multiValued 
+            by nature
+       1.1: multiValued attribute introduced, false by default 
+       1.2: omitTermFreqAndPositions attribute introduced, true by default 
+            except for text fields.
+       1.3: removed optional field compress feature
+       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
+            behavior when a single string produces multiple tokens.  Defaults 
+            to off for version >= 1.4
+       1.5: omitNorms defaults to true for primitive field types 
+            (int, float, boolean, string...)
+     -->
+
+    <!-- Valid attributes for fields:
+     name: mandatory - the name for the field
+     type: mandatory - the name of a field type from the 
+       <types> fieldType section
+     indexed: true if this field should be indexed (searchable or sortable)
+     stored: true if this field should be retrievable
+     docValues: true if this field should have doc values. Doc values are
+       useful for faceting, grouping, sorting and function queries. Although not
+       required, doc values will make the index faster to load, more
+       NRT-friendly and more memory-efficient. They however come with some
+       limitations: they are currently only supported by StrField, UUIDField
+       and all Trie*Fields, and depending on the field type, they might
+       require the field to be single-valued, be required or have a default
+       value (check the documentation of the field type you're interested in
+       for more information)
+     multiValued: true if this field may contain multiple values per document
+     omitNorms: (expert) set to true to omit the norms associated with
+       this field (this disables length normalization and index-time
+       boosting for the field, and saves some memory).  Only full-text
+       fields or fields that need an index-time boost need norms.
+       Norms are omitted for primitive (non-analyzed) types by default.
+     termVectors: [false] set to true to store the term vector for a
+       given field.
+       When using MoreLikeThis, fields used for similarity should be
+       stored for best performance.
+     termPositions: Store position information with the term vector.  
+       This will increase storage costs.
+     termOffsets: Store offset information with the term vector. This 
+       will increase storage costs.
+     required: The field is required.  It will throw an error if the
+       value does not exist
+     default: a value that should be used if no value is specified
+       when adding a document.
+   -->
+
+    <!-- field names should consist of alphanumeric or underscore characters only and
+      not start with a digit.  This is not currently strictly enforced,
+      but other field names will not have first class support from all components
+      and back compatibility is not guaranteed.  Names with both leading and
+      trailing underscores (e.g. _version_) are reserved.
+      -->
+
+    <!-- In this data_driven_schema_configs configset, only three fields are pre-declared:
+         id, _version_, and _text_.  All other fields will be type guessed and added via the
+         "add-unknown-fields-to-the-schema" update request processor chain declared
+         in solrconfig.xml.
+
+         Note that many dynamic fields are also defined - you can use them to specify a
+         field's type via field naming conventions - see below.
+ 
+         WARNING: The _text_ catch-all field will significantly increase your index size.
+           If you don't need it, consider removing it and the corresponding copyField directive.
+      -->
+
+    <field name="_version_" type="long" indexed="true" stored="true"/>
+
+    <!-- catchall field, containing all other searchable text fields (implemented
+          via copyField further on in this schema  -->
+
+    <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
+
+    <!--*******************************************************************************************************************************************-->
+    <!-- GIBS: product_type schema                                                                                                                 -->
+    <!--*******************************************************************************************************************************************-->
+    <!--    product_type view                                                                                                                      -->
+    <!--        product_type_dataset_view                                                                                                          -->
+    <!--        product_type_resource_view                                                                                                         -->
+    <!--        product_type_coverage_view                                                                                                         -->
+    <!--        product_type_generation_view                                                                                                       -->
+    <!--        product_type_metadata_view                                                                                                         -->
+    <!--        product_type_policy_view                                                                                                           -->
+    <!--        product_type_location_policy_view                                                                                                  -->
+    <!--        product_type_provider_view                                                                                                         -->
+    <!--        product_type_element_view                                                                                                          -->
+    <!--        product_type_datetime_view                                                                                                         -->
+    <!--        product_type_character_view                                                                                                        -->
+    <!--        product_type_integer_view                                                                                                          -->
+    <!--        product_type_real_view                                                                                                             -->
+    <!--*******************************************************************************************************************************************-->
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="id"                                             type="string"  indexed="true"  stored="true"  required="true"  multiValued="false" />
+    <field name="product_type_id"                                type="string"  indexed="true"  stored="true"  required="true"  multiValued="false" />
+    <field name="product_type_provider_id"                       type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_version"                           type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_identifier"                        type="string"  indexed="true"  stored="true"  required="true"  multiValued="false" />
+    <field name="product_type_title"                             type="string"  indexed="true"  stored="true"  required="true"  multiValued="false" />
+    <field name="product_type_description"                       type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_purgable"                          type="boolean" indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_purge_rate"                        type="int"     indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_last_updated"                      type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_type_last_updated_string"               type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_dataset_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_dataset_id_list"                  type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_revision_list"            type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_description_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_long_name_list"           type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_short_name_list"          type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_metadata_endpoint_list"   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_metadata_registry_list"   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_dataset_remote_dataset_id_list"   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_resource_view  -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_resource_version_list"             type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_resource_type_list"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_resource_name_list"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_resource_path_list"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_resource_description_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_coverage_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_coverage_version_list"             type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_north_latitude_list"      type="float"   indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_east_longitude_list"      type="float"   indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_south_latitude_list"      type="float"   indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_west_longitude_list"      type="float"   indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_start_time_list"          type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_stop_time_list"           type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_start_time_string_list"   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_coverage_stop_time_string_list"    type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_generation_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_generation_version_list"               type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_mrf_block_size_list"        type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_output_sizex_list"          type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_output_sizey_list"          type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_overview_levels_list"       type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_overview_resample_list"     type="string" indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_overview_scale_list"        type="int"    indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_reprojection_resample_list" type="string" indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_resize_resample_list"       type="string" indexed="true" stored="true" required="false" multiValued="true" />
+    <field name="product_type_generation_vrt_nodata_list"            type="string" indexed="true" stored="true" required="false" multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_metadata_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_metadata_version"                  type="int"    indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_asc_desc"                 type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_science_parameter"        type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_data_version"             type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_day_night"                type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_instrument"               type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_display_resolution"       type="int"    indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_native_resolution"        type="int"    indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_platform"                 type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_processing_level"         type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_project"                  type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_source_projection_id"     type="int"    indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_target_projection_id"     type="int"    indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_metadata_region_coverage"          type="string" indexed="true"  stored="true"  required="false"  multiValued="false" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_policy_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_policy_version_list"               type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_access_type_list"           type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_access_constraint_list"     type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_use_constraint_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_base_path_append_type_list" type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_checksum_type_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_compress_type_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_class_list"            type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_duration_list"         type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_format_list"           type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_spatial_type_list"          type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_frequency_list"        type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_latency_list"          type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_data_volume_list"           type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_delivery_rate_list"         type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_multi_day_list"             type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_policy_multi_day_link_list"        type="boolean" indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_location_policy_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_location_policy_version_list"      type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_location_policy_type_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_location_policy_base_path_list"    type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_provider_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_provider_version"                  type="int"     indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_provider_long_name"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_provider_short_name"               type="string"  indexed="true"  stored="true"  required="false"  multiValued="false" />
+    <field name="product_type_provider_type"                     type="string"  indexed="true"  stored="true"  required="false"  multiValued="false" />
+
+    <field name="product_type_provider_resource_version_list"     type="int"    indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_resource_description_list" type="string" indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_resource_name_list"        type="string" indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_resource_path_list"        type="string" indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_resource_type_list"        type="string" indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <field name="product_type_provider_contact_version_list"     type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_role_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_first_name_list"  type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_last_name_list"   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_middle_name_list" type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_address_list"     type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_notify_type_list" type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_email_list"       type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_phone_list"       type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_provider_contact_fax_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_element_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_element_version_list"              type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_obligation_flag_list"      type="boolean" indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_scope_list"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <field name="product_type_element_dd_version_list"           type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_type_list"              type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_description_list"       type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_scope_list"             type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_long_name_list"         type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_short_name_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_element_dd_max_length_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_character_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_character_version_list"            type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_character_value_list"              type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_datetime_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_datetime_version_list"             type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_datetime_value_list"               type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_datetime_value_string_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_integer_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_integer_version_list"              type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_integer_value_list"                type="int"     indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_integer_units_list"                type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_type_real_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_type_real_version_list"                 type="int"    indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_real_units_list"                   type="float"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_type_real_value_list"                   type="string" indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <!--*******************************************************************************************************************************************-->
+
+    <!-- Dynamic field definitions allow using convention over configuration
+       for fields via the specification of patterns to match field names. 
+       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
+       RESTRICTION: the glob-like pattern in the name attribute must have
+       a "*" only at the start or the end.  -->
+   
+    <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
+    <dynamicField name="*_is" type="ints"    indexed="true"  stored="true"/>
+    <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
+    <dynamicField name="*_ss" type="strings"  indexed="true"  stored="true"/>
+    <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
+    <dynamicField name="*_ls" type="longs"   indexed="true"  stored="true"/>
+    <dynamicField name="*_t"   type="text_general" indexed="true" stored="true"/>
+    <dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/>
+    <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
+    <dynamicField name="*_bs" type="booleans" indexed="true" stored="true"/>
+    <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
+    <dynamicField name="*_fs" type="floats"  indexed="true"  stored="true"/>
+    <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
+    <dynamicField name="*_ds" type="doubles" indexed="true"  stored="true"/>
+
+    <!-- Type used to index the lat and lon components for the "location" FieldType -->
+    <dynamicField name="*_coordinate"  type="tdouble" indexed="true"  stored="false" />
+
+    <dynamicField name="*_dt"  type="date"    indexed="true"  stored="true"/>
+    <dynamicField name="*_dts" type="date"    indexed="true"  stored="true" multiValued="true"/>
+    <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
+    <dynamicField name="*_srpt"  type="location_rpt" indexed="true" stored="true"/>
+
+    <!-- some trie-coded dynamic fields for faster range queries -->
+    <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
+    <dynamicField name="*_tis" type="tints"    indexed="true"  stored="true"/>
+    <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
+    <dynamicField name="*_tls" type="tlongs"   indexed="true"  stored="true"/>
+    <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
+    <dynamicField name="*_tfs" type="tfloats"  indexed="true"  stored="true"/>
+    <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
+    <dynamicField name="*_tds" type="tdoubles" indexed="true"  stored="true"/>
+    <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>
+    <dynamicField name="*_tdts" type="tdates"  indexed="true"  stored="true"/>
+
+    <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
+
+    <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
+    <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
+
+    <dynamicField name="random_*" type="random" />
+
+    <!-- uncomment the following to ignore any fields that don't already match an existing 
+        field name or dynamic field, rather than reporting them as an error. 
+        alternately, change the type="ignored" to some other type e.g. "text" if you want 
+        unknown fields indexed and/or stored by default 
+        
+        NB: use of "*" dynamic fields will disable field type guessing and adding
+        unknown fields to the schema. --> 
+    <!--dynamicField name="*" type="ignored" multiValued="true" /-->
+   
+
+
+  <!-- Field to use to determine and enforce document uniqueness. 
+      Unless this field is marked with required="false", it will be a required field
+   -->
+  <uniqueKey>id</uniqueKey>
+
+  <!-- copyField commands copy one field to another at the time a document
+       is added to the index.  It's used either to index the same field differently,
+       or to add multiple fields to the same field for easier/faster searching. -->
+
+  <!-- product_type -->
+  <copyField source="id"                                             dest="text"   />
+  <copyField source="product_type_provider_id"                       dest="text"   />
+  <copyField source="product_type_description"                       dest="text"   />
+  <copyField source="product_type_identifier"                        dest="text"   />
+  <copyField source="product_type_purgable"                          dest="text"   />
+  <copyField source="product_type_purge_rate"                        dest="text"   />
+  <copyField source="product_type_title"                             dest="text"   />
+  <copyField source="product_type_last_updated"                      dest="text"   />
+
+  <!-- product_type_resource -->
+
+  <!-- product_type_coverage -->
+
+  <!-- product_type_generation -->
+
+  <!-- product_type_metadata -->
+
+  <!-- product_type_policy -->
+
+  <!-- product_type_location_policy -->
+
+  <!-- product_type_element -->
+
+  <!-- product_type_character -->
+
+  <!-- product_type_datetime -->
+
+  <!-- product_type_integer -->
+
+  <!-- product_type_real -->
+
+  <!--*******************************************************************************************************************************************-->
+
+  <!-- Above, multiple source fields are copied to the [text] field.
+       Another way to map multiple source fields to the same
+       destination field is to use the dynamic field syntax.
+       copyField also supports a maxChars to copy setting. -->
+
+  <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
+
+  <!-- copy name to alphaNameSort, a field designed for sorting by name -->
+  <!-- <copyField source="name" dest="alphaNameSort"/> -->
+ 
+
+    <!-- field type definitions. The "name" attribute is
+       just a label to be used by field definitions.  The "class"
+       attribute and any other attributes determine the real
+       behavior of the fieldType.
+         Class names starting with "solr" refer to java classes in a
+       standard package such as org.apache.solr.analysis
+    -->
+    <fieldType name="html" stored="true" indexed="true" class="solr.TextField">
+      <analyzer type="index">
+        <charFilter class="solr.HTMLStripCharFilterFactory"/>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <charFilter class="solr.HTMLStripCharFilterFactory"/>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- The StrField type is not analyzed, but indexed/stored verbatim.
+       It supports doc values but in that case the field needs to be
+       single-valued and either required or have a default value.
+      -->
+    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
+    <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true"/>
+
+    <!-- boolean type: "true" or "false" -->
+    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
+    <fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/>
+
+    <!-- sortMissingLast and sortMissingFirst are optional attributes currently
+         supported on types that are sorted internally as strings and on
+         numeric types. This includes "string", "boolean", and, as of 3.5
+         (and 4.x), int, float, long, date, and double, including the "Trie"
+         variants.
+       - If sortMissingLast="true", then a sort on this field will cause documents
+         without the field to come after documents with the field,
+         regardless of the requested sort order (asc or desc).
+       - If sortMissingFirst="true", then a sort on this field will cause documents
+         without the field to come before documents with the field,
+         regardless of the requested sort order.
+       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
+         then default lucene sorting will be used which places docs without the
+         field first in an ascending sort and last in a descending sort.
+    -->    
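+    <!-- For instance, product_type_description uses the "string" type declared
+         below with sortMissingLast="true", so a query sorted with
+         sort=product_type_description asc (or desc) returns documents missing
+         that field after all documents that have it. -->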
+
+    <!--
+      Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
+
+      These fields support doc values, but they require the field to be
+      single-valued and either be required or have a default value.
+    -->
+    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
+
+    <fieldType name="ints" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="floats" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="longs" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="doubles" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+
+    <!--
+     Numeric field types that index each value at various levels of precision
+     to accelerate range queries when the number of values between the range
+     endpoints is large. See the javadoc for NumericRangeQuery for internal
+     implementation details.
+
+     Smaller precisionStep values (specified in bits) will lead to more tokens
+     indexed per value, slightly larger index size, and faster range queries.
+     A precisionStep of 0 disables indexing at different precision levels.
+    -->
+    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
+    
+    <fieldType name="tints" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tfloats" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tlongs" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tdoubles" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+
+    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
+         is a more restricted form of the canonical representation of dateTime
+         http://www.w3.org/TR/xmlschema-2/#dateTime    
+         The trailing "Z" designates UTC time and is mandatory.
+         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
+         All other components are mandatory.
+
+         Expressions can also be used to denote calculations that should be
+         performed relative to "NOW" to determine the value, ie...
+
+               NOW/HOUR
+                  ... Round to the start of the current hour
+               NOW-1DAY
+                  ... Exactly 1 day prior to now
+               NOW/DAY+6MONTHS+3DAYS
+                  ... 6 months and 3 days in the future from the start of
+                      the current day
+                      
+         Consult the TrieDateField javadocs for more information.
+
+         Note: For faster range queries, consider the tdate type
+      -->
+    <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="dates" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+
+    <!-- A Trie based date field for faster date range queries and date faceting. -->
+    <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
+    <fieldType name="tdates" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0" multiValued="true"/>
+
+    <!-- Binary data type. The data should be sent/retrieved as Base64 encoded Strings -->
+    <fieldType name="binary" class="solr.BinaryField"/>
+
+    <!-- The "RandomSortField" is not used to store or search any
+         data.  You can declare fields of this type in your schema
+         to generate pseudo-random orderings of your docs for sorting 
+         or function purposes.  The ordering is generated based on the field
+         name and the version of the index. As long as the index version
+         remains unchanged, and the same field name is reused,
+         the ordering of the docs will be consistent.  
+         If you want different pseudo-random orderings of documents,
+         for the same version of the index, use a dynamicField and
+         change the field name in the request.
+     -->
+    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
+
+    <!-- solr.TextField allows the specification of custom text analyzers
+         specified as a tokenizer and a list of token filters. Different
+         analyzers may be specified for indexing and querying.
+
+         The optional positionIncrementGap puts space between multiple fields of
+         this type on the same document, with the purpose of preventing false phrase
+         matching across fields.
+
+         For more info on customizing your analyzer chain, please see
+         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
+     -->
+
+    <!-- One can also specify an existing Analyzer class that has a
+         default constructor via the class attribute on the analyzer element.
+         Example:
+    <fieldType name="text_greek" class="solr.TextField">
+      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
+    </fieldType>
+    -->
+
+    <!-- A text field that only splits on whitespace for exact matching of words -->
+  <dynamicField name="*_ws" type="text_ws"  indexed="true"  stored="true"/>
+    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A general text field that has reasonable, generic
+         cross-language defaults: it tokenizes with StandardTokenizer,
+         removes stop words from case-insensitive "stopwords.txt"
+         (empty by default), and lowercases.  At query time only, it
+         also applies synonyms. -->
+    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English: it
+         tokenizes with StandardTokenizer, removes English stop words
+         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
+         finally applies Porter's stemming.  The query time analyzer
+         also applies synonyms from synonyms.txt. -->
+    <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>
+    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case insensitive stop word removal.
+        -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+	-->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+	-->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English, plus
+         aggressive word-splitting and autophrase features enabled.
+         This field is just like text_en, except it adds
+         WordDelimiterFilter to enable splitting and matching of
+         words on case-change, alpha numeric boundaries, and
+         non-alphanumeric chars.  This means certain compound word
+         cases will work, for example query "wi fi" will match
+         document "WiFi" or "wi-fi". -->
+    <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>
+    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer type="index">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case insensitive stop word removal.
+        -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
+         but may be good for SKUs.  Users can insert dashes in the wrong place and still match. -->
+    <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>
+    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
+             possible with WordDelimiterFilter in conjunction with stemming. -->
+        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Just like text_general except it reverses the characters of
+         each token, to enable more efficient leading wildcard queries. -->
+    <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>
+    <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
+                maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
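+    <!-- Illustrative use: a hypothetical field named title_txt_rev also indexes
+         each token reversed, so a leading-wildcard query such as
+         title_txt_rev:*ometer can be rewritten internally into a cheaper
+         trailing-wildcard query on the reversed tokens. -->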
+
+  <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>
+  <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >
+      <analyzer>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- lowercases the entire field value, keeping it as a single token.  -->
+    <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>
+    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.KeywordTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory" />
+      </analyzer>
+    </fieldType>
+
+    <!-- 
+      Example of using PathHierarchyTokenizerFactory at index time, so
+      queries for paths match documents at that path, or in descendent paths
+    -->
+  <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>
+  <fieldType name="descendent_path" class="solr.TextField">
+      <analyzer type="index">
+        <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.KeywordTokenizerFactory" />
+      </analyzer>
+    </fieldType>
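+    <!-- Illustrative sketch: a document indexed with a hypothetical field
+         dir_descendent_path = /usr/local/bin emits the tokens /usr, /usr/local
+         and /usr/local/bin at index time, so the keyword query
+         dir_descendent_path:/usr/local matches that document and anything
+         deeper under /usr/local. -->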
+    <!-- 
+      Example of using PathHierarchyTokenizerFactory at query time, so
+      queries for paths match documents at that path, or in ancestor paths
+    -->
+    <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>
+    <fieldType name="ancestor_path" class="solr.TextField">
+      <analyzer type="index">
+        <tokenizer class="solr.KeywordTokenizerFactory" />
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
+      </analyzer>
+    </fieldType>
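+    <!-- Illustrative sketch: with a hypothetical field dir_ancestor_path, a
+         document indexed with the value /usr is stored as the single token
+         /usr, while a query for /usr/local/bin is tokenized into /usr,
+         /usr/local and /usr/local/bin, so the document at /usr (an ancestor)
+         matches. -->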
+
+    <!-- since fields of this type are by default not stored or indexed,
+         any data added to them will be ignored outright.  --> 
+    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
+
+    <!-- This point type indexes the coordinates as separate fields (subFields)
+      If subFieldType is defined, it references a type, and a dynamic field
+      definition is created matching *___<typename>.  Alternately, if 
+      subFieldSuffix is defined, that is used to create the subFields.
+      Example: if subFieldType="double", then the coordinates would be
+        indexed in fields myloc_0___double,myloc_1___double.
+      Example: if subFieldSuffix="_d" then the coordinates would be indexed
+        in fields myloc_0_d,myloc_1_d
+      The subFields are an implementation detail of the fieldType, and end
+      users normally should not need to know about them.
+     -->
+  <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>
+  <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
+
+    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
+    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
+
+    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
+      For more information about this and other Spatial fields new to Solr 4, see:
+      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
+    -->
+    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
+               geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
+
+    <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
+        Parameters:
+          defaultCurrency: Specifies the default currency if none specified. Defaults to "USD"
+          precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount
+          providerClass:   Lets you plug in other exchange provider backend:
+                           solr.FileExchangeRateProvider is the default and takes one parameter:
+                             currencyConfig: name of an xml file holding exchange rates
+                           solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
+                             ratesFileLocation: URL or path to rates JSON file (default latest.json on the web)
+                             refreshInterval: Number of minutes between each rates fetch (default: 1440, min: 60)
+   -->
+    <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />
+             
+
+
+    <!-- some examples for different languages (generally ordered by ISO code) -->
+
+    <!-- Arabic -->
+    <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>
+    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- for any non-arabic -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt" />
+        <!-- normalizes ﻯ to ﻱ, etc -->
+        <filter class="solr.ArabicNormalizationFilterFactory"/>
+        <filter class="solr.ArabicStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Bulgarian -->
+    <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>
+    <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/> 
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
+        <filter class="solr.BulgarianStemFilterFactory"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- Catalan -->
+    <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>
+    <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ca.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ca.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Catalan"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
+    <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>
+    <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- normalize width before bigram, as e.g. half-width dakuten combine  -->
+        <filter class="solr.CJKWidthFilterFactory"/>
+        <!-- for any non-CJK -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.CJKBigramFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Czech -->
+    <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>
+    <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_cz.txt" />
+        <filter class="solr.CzechStemFilterFactory"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- Danish -->
+    <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>
+    <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Danish"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- German -->
+    <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>
+    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
+        <filter class="solr.GermanNormalizationFilterFactory"/>
+        <filter class="solr.GermanLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.GermanMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="German2"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Greek -->
+    <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>
+    <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- greek specific lowercase for sigma -->
+        <filter class="solr.GreekLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt" />
+        <filter class="solr.GreekStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Spanish -->
+    <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>
+    <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
+        <filter class="solr.SpanishLightStemFilterFactory"/>
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Basque -->
+    <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>
+    <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_eu.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Basque"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Persian -->
+    <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>
+    <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <!-- for ZWNJ -->
+        <charFilter class="solr.PersianCharFilterFactory"/>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.ArabicNormalizationFilterFactory"/>
+        <filter class="solr.PersianNormalizationFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fa.txt" />
+      </analyzer>
+    </fieldType>
+    
+    <!-- Finnish -->
+    <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>
+    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/>
+        <!-- less aggressive: <filter class="solr.FinnishLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- French -->
+    <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>
+    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />
+        <filter class="solr.FrenchLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.FrenchMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="French"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Irish -->
+    <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>
+    <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes d', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/>
+        <!-- removes n-, etc.; position increments are intentionally false! -->
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/>
+        <filter class="solr.IrishLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/>
+        <filter class="solr.SnowballPorterFilterFactory" language="Irish"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Galician -->
+    <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>
+    <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" />
+        <filter class="solr.GalicianStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.GalicianMinimalStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Hindi -->
+    <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>
+    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <!-- normalizes unicode representation -->
+        <filter class="solr.IndicNormalizationFilterFactory"/>
+        <!-- normalizes variation in spelling -->
+        <filter class="solr.HindiNormalizationFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" />
+        <filter class="solr.HindiStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Hungarian -->
+    <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>
+    <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/>
+        <!-- less aggressive: <filter class="solr.HungarianLightStemFilterFactory"/> -->   
+      </analyzer>
+    </fieldType>
+    
+    <!-- Armenian -->
+    <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>
+    <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Indonesian -->
+    <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>
+    <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" />
+        <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->
+        <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Italian -->
+  <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>
+  <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />
+        <filter class="solr.ItalianLightStemFilterFactory"/>
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Italian"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
+
+         NOTE: If you want to optimize search for precision, use default operator AND in your query
+         parser config with <solrQueryParser defaultOperator="AND"/> further down in this file.  Use 
+         OR if you would like to optimize for recall (default).
+    -->
+    <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>
+    <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
+      <analyzer>
+        <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)
+
+           Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic
+           is used to segment compounds into their parts, and the compound itself is kept as a synonym.
+
+           Valid values for attribute mode are:
+              normal: regular segmentation
+              search: segmentation useful for search, with compounds kept as synonyms (default)
+            extended: same as search mode, but unigrams unknown words (experimental)
+
+           For some applications it might be good to use search mode for indexing and normal mode for
+           queries to reduce recall and prevent parts of compounds from being matched and highlighted.
+           Use <analyzer type="index"> and <analyzer type="query"> for this and mode normal in query.
+
+           Kuromoji also has a convenient user dictionary feature that allows overriding the statistical
+           model with your own entries for segmentation, part-of-speech tags and readings without a need
+           to specify weights.  Notice that user dictionaries have not been subject to extensive testing.
+
+           User dictionary attributes are:
+                     userDictionary: user dictionary filename
+             userDictionaryEncoding: user dictionary encoding (default is UTF-8)
+
+           See lang/userdict_ja.txt for a sample user dictionary file.
+
+           Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.
+
+           See http://wiki.apache.org/solr/JapaneseLanguageSupport for more on Japanese language support.
+        -->
+        <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
+        <!--<tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>-->
+        <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->
+        <filter class="solr.JapaneseBaseFormFilterFactory"/>
+        <!-- Removes tokens with certain part-of-speech tags -->
+        <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" />
+        <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->
+        <filter class="solr.CJKWidthFilterFactory"/>
+        <!-- Removes common tokens typically not useful for search, but have a negative effect on ranking -->
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" />
+        <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->
+        <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/>
+        <!-- Lower-cases romaji characters -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
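+
+    <!-- A minimal sketch of the index-time/query-time split suggested in the
+         note above; the fieldType name "text_ja_split" and the shortened
+         filter chain are hypothetical, not part of this schema:
+    <fieldType name="text_ja_split" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.JapaneseTokenizerFactory" mode="normal"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    -->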
+    
+    <!-- Latvian -->
+    <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>
+    <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" />
+        <filter class="solr.LatvianStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Dutch -->
+    <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>
+    <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
+        <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
+        <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>
+      </analyzer>
+    </fieldType>
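+    <!-- Note: lang/stemdict_nl.txt above holds tab-separated "token<TAB>stem"
+         pairs that pin the stems of the listed tokens before the Snowball
+         stemmer runs; the exact entries shipped with Solr may vary. -->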
+    
+    <!-- Norwegian -->
+    <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>
+    <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/>
+        <!-- less aggressive: <filter class="solr.NorwegianLightStemFilterFactory"/> -->
+        <!-- singular/plural: <filter class="solr.NorwegianMinimalStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Portuguese -->
+  <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>
+  <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
+        <filter class="solr.PortugueseLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.PortugueseMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Portuguese"/> -->
+        <!-- most aggressive: <filter class="solr.PortugueseStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Romanian -->
+    <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>
+    <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Russian -->
+    <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>
+    <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Russian"/>
+        <!-- less aggressive: <filter class="solr.RussianLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Swedish -->
+    <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>
+    <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/>
+        <!-- less aggressive: <filter class="solr.SwedishLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Thai -->
+    <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>
+    <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.ThaiTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" />
+      </analyzer>
+    </fieldType>
+    
+    <!-- Turkish -->
+    <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>
+    <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.TurkishLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Turkish"/>
+      </analyzer>
+    </fieldType>
+
+  <!-- Similarity is the scoring routine for each document vs. a query.
+       A custom Similarity or SimilarityFactory may be specified here, but 
+       the default is fine for most applications.  
+       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
+    -->
+  <!--
+     <similarity class="com.example.solr.CustomSimilarityFactory">
+       <str name="paramkey">param value</str>
+     </similarity>
+    -->
+</schema>


[02/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product_type/conf/solrconfig.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product_type/conf/solrconfig.xml b/src/main/solr/product_type/conf/solrconfig.xml
new file mode 100644
index 0000000..6c190b6
--- /dev/null
+++ b/src/main/solr/product_type/conf/solrconfig.xml
@@ -0,0 +1,1627 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- 
+     For more details about configurations options that may appear in
+     this file, see http://wiki.apache.org/solr/SolrConfigXml. 
+-->
+<config>
+  <!-- In all configuration below, a prefix of "solr." for class names
+       is an alias that causes solr to search appropriate packages,
+       including org.apache.solr.(search|update|request|core|analysis)
+
+       You may also specify a fully qualified Java classname if you
+       have your own custom plugins.
+    -->
+
+  <!-- Controls what version of Lucene various components of Solr
+       adhere to.  Generally, you want to use the latest version to
+       get all bug fixes and improvements. It is highly recommended
+       that you fully re-index after changing this setting as it can
+       affect both how text is indexed and queried.
+  -->
+  <luceneMatchVersion>5.0.0</luceneMatchVersion>
+
+  <!-- <lib/> directives can be used to instruct Solr to load any Jars
+       identified and use them to resolve any "plugins" specified in
+       your solrconfig.xml or schema.xml (ie: Analyzers, Request
+       Handlers, etc...).
+
+       All directories and paths are resolved relative to the
+       instanceDir.
+
+       Please note that <lib/> directives are processed in the order
+       that they appear in your solrconfig.xml file, and are "stacked" 
+       on top of each other when building a ClassLoader - so if you have 
+       plugin jars with dependencies on other jars, the "lower level" 
+       dependency jars should be loaded first.
+
+       If a "./lib" directory exists in your instanceDir, all files
+       found in it are included as if you had used the following
+       syntax...
+       
+              <lib dir="./lib" />
+    -->
+
+  <!-- A 'dir' option by itself adds any files found in the directory 
+       to the classpath, this is useful for including all jars in a
+       directory.
+
+       When a 'regex' is specified in addition to a 'dir', only the
+       files in that directory which completely match the regex
+       (anchored on both ends) will be included.
+
+       If a 'dir' option (with or without a regex) is used and nothing
+       is found that matches, a warning will be logged.
+
+       The examples below can be used to load some solr-contribs along 
+       with their external dependencies.
+    -->
+  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Adding lib path to dataimport jar file  -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/clustering/lib/" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-clustering-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
+
+  <!-- an exact 'path' can be used instead of a 'dir' to specify a 
+       specific jar file.  This will cause a serious error to be logged 
+       if it can't be loaded.
+    -->
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Adding lib path to postgresql JDBC jar file -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <lib path="./lib/postgresql-9.4-1201.jdbc4.jar" />
+
+  <!-- Data Directory
+
+       Used to specify an alternate directory to hold all index data
+       other than the default ./data under the Solr home.  If
+       replication is in use, this should match the replication
+       configuration.
+    -->
+  <dataDir>${solr.data.dir:}</dataDir>
+
+
+  <!-- The DirectoryFactory to use for indexes.
+       
+       solr.StandardDirectoryFactory is filesystem
+       based and tries to pick the best implementation for the current
+       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
+       wraps solr.StandardDirectoryFactory and caches small files in memory
+       for better NRT performance.
+
+       One can force a particular implementation via solr.MMapDirectoryFactory,
+       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.
+
+       solr.RAMDirectoryFactory is memory based, not
+       persistent, and doesn't work with replication.
+    -->
+  <directoryFactory name="DirectoryFactory"
+                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
+
+  <!-- The CodecFactory for defining the format of the inverted index.
+       The default implementation is SchemaCodecFactory, which is the official Lucene
+       index format, but hooks into the schema to provide per-field customization of
+       the postings lists and per-document values in the fieldType element
+       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
+       are experimental, so if you choose to customize the index format, it's a good
+       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
+       before upgrading to a newer version to avoid unnecessary reindexing.
+  -->
+  <codecFactory class="solr.SchemaCodecFactory"/>
+
+  <!-- To disable dynamic schema REST APIs, use the following for <schemaFactory>:
+  
+       <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+       When ManagedIndexSchemaFactory is specified instead, Solr will load the schema from
+       the resource named in 'managedSchemaResourceName', rather than from schema.xml.
+       Note that the managed schema resource CANNOT be named schema.xml.  If the managed
+       schema does not exist, Solr will create it after reading schema.xml, then rename
+       'schema.xml' to 'schema.xml.bak'. 
+       
+       Do NOT hand edit the managed schema - external modifications will be ignored and
+       overwritten as a result of schema modification REST API calls.
+
+       When ManagedIndexSchemaFactory is specified with mutable = true, schema
+       modification REST API calls will be allowed; otherwise, error responses will be
+       sent back for these requests. 
+
+  <schemaFactory class="ManagedIndexSchemaFactory">
+    <bool name="mutable">true</bool>
+    <str name="managedSchemaResourceName">managed-schema</str>
+  </schemaFactory>
+
+  -->
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Use classic index schema -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Index Config - These settings control low-level behavior of indexing
+       Most example settings here show the default value, but are commented
+       out, to more easily see where customizations have been made.
+       
+       Note: This replaces <indexDefaults> and <mainIndex> from older versions
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <indexConfig>
+    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a 
+         LimitTokenCountFilterFactory in your fieldType definition. E.g. 
+     <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
+    -->
+    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
+    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->
+
+    <!-- The maximum number of simultaneous threads that may be
+         indexing documents at once in IndexWriter; if more than this
+         many threads arrive they will wait for others to finish.
+         Default in Solr/Lucene is 8. -->
+    <!-- <maxIndexingThreads>8</maxIndexingThreads>  -->
+
+    <!-- Expert: Enabling the compound file format will use fewer files for the
+         index, using fewer file descriptors at the expense of decreased performance.
+         Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
+    <!-- <useCompoundFile>false</useCompoundFile> -->
+
+    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene
+         indexing for buffering added documents and deletions before they are
+         flushed to the Directory.
+         maxBufferedDocs sets a limit on the number of documents buffered
+         before flushing.
+         If both ramBufferSizeMB and maxBufferedDocs are set, then
+         Lucene will flush based on whichever limit is hit first.  -->
+    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
+    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
+
+    <!-- Expert: Merge Policy 
+         The Merge Policy in Lucene controls how merging of segments is done.
+         The default since Solr/Lucene 3.3 is TieredMergePolicy.
+         The default since Lucene 2.3 was the LogByteSizeMergePolicy.
+         Even older versions of Lucene used LogDocMergePolicy.
+      -->
+    <!--
+        <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
+          <int name="maxMergeAtOnce">10</int>
+          <int name="segmentsPerTier">10</int>
+        </mergePolicy>
+      -->
+
+    <!-- Merge Factor
+         The merge factor controls how many segments will get merged at a time.
+         For TieredMergePolicy, mergeFactor is a convenience parameter which
+         will set both MaxMergeAtOnce and SegmentsPerTier at once.
+         For LogByteSizeMergePolicy, mergeFactor decides how many new segments
+         will be allowed before they are merged into one.
+         Default is 10 for both merge policies.
+      -->
+    <!-- 
+    <mergeFactor>10</mergeFactor>
+      -->
+
+    <!-- Expert: Merge Scheduler
+         The Merge Scheduler in Lucene controls how merges are
+         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)
+         can perform merges in the background using separate threads.
+         The SerialMergeScheduler (Lucene 2.2 default) does not.
+     -->
+    <!-- 
+       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
+       -->
+
+    <!-- LockFactory 
+
+         This option specifies which Lucene LockFactory implementation
+         to use.
+      
+         single = SingleInstanceLockFactory - suggested for a
+                  read-only index or when there is no possibility of
+                  another process trying to modify the index.
+         native = NativeFSLockFactory - uses OS native file locking.
+                  Do not use when multiple solr webapps in the same
+                  JVM are attempting to share a single index.
+         simple = SimpleFSLockFactory  - uses a plain file for locking
+
+         Defaults: 'native' is the default for Solr 3.6 and later; otherwise
+                   'simple' is the default
+
+         More details on the nuances of each LockFactory...
+         http://wiki.apache.org/lucene-java/AvailableLockFactories
+    -->
+    <lockType>${solr.lock.type:native}</lockType>
+
+    <!-- Unlock On Startup
+
+         If true, unlock any held write or commit locks on startup.
+         This defeats the locking mechanism that allows multiple
+         processes to safely access a lucene index, and should be used
+         with care. Default is "false".
+
+         This is not needed if lock type is 'single'
+     -->
+    <!--
+    <unlockOnStartup>false</unlockOnStartup>
+      -->
+
+    <!-- Commit Deletion Policy
+         Custom deletion policies can be specified here. The class must
+         implement org.apache.lucene.index.IndexDeletionPolicy.
+
+         The default Solr IndexDeletionPolicy implementation supports
+         deleting index commit points on number of commits, age of
+         commit point and optimized status.
+         
+         The latest commit point should always be preserved regardless
+         of the criteria.
+    -->
+    <!-- 
+    <deletionPolicy class="solr.SolrDeletionPolicy">
+    -->
+    <!-- The number of commit points to be kept -->
+    <!-- <str name="maxCommitsToKeep">1</str> -->
+    <!-- The number of optimized commit points to be kept -->
+    <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
+    <!--
+        Delete all commit points once they have reached the given age.
+        Supports DateMathParser syntax e.g.
+      -->
+    <!--
+       <str name="maxCommitAge">30MINUTES</str>
+       <str name="maxCommitAge">1DAY</str>
+    -->
+    <!-- 
+    </deletionPolicy>
+    -->
+
+    <!-- Lucene Infostream
+       
+         To aid in advanced debugging, Lucene provides an "InfoStream"
+         of detailed information when indexing.
+
+         Setting the value to true will instruct the underlying Lucene
+         IndexWriter to write its debugging info to the specified file.
+      -->
+    <!-- <infoStream file="INFOSTREAM.txt">false</infoStream> -->
+  </indexConfig>
+
+
+  <!-- JMX
+       
+       This example enables JMX if and only if an existing MBeanServer
+       is found; use this if you want to configure JMX through JVM
+       parameters. Remove this to disable exposing Solr configuration
+       and statistics to JMX.
+
+       For more details see http://wiki.apache.org/solr/SolrJmx
+    -->
+  <jmx />
+  <!-- If you want to connect to a particular server, specify the
+       agentId 
+    -->
+  <!-- <jmx agentId="myAgent" /> -->
+  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
+  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
+    -->
+
+  <!-- The default high-performance update handler -->
+  <updateHandler class="solr.DirectUpdateHandler2">
+
+    <!-- Enables a transaction log, used for real-time get, durability, and
+         SolrCloud replica recovery.  The log can grow as big as
+         uncommitted changes to the index, so use of a hard autoCommit
+         is recommended (see below).
+         "dir" - the target directory for transaction logs, defaults to the
+                solr data directory.  -->
+    <updateLog>
+      <str name="dir">${solr.ulog.dir:}</str>
+    </updateLog>
+
+    <!-- AutoCommit
+
+         Perform a hard commit automatically under certain conditions.
+         Instead of enabling autoCommit, consider using "commitWithin"
+         when adding documents. 
+
+         http://wiki.apache.org/solr/UpdateXmlMessages
+
+         maxDocs - Maximum number of documents to add since the last
+                   commit before automatically triggering a new commit.
+
+         maxTime - Maximum amount of time in ms that is allowed to pass
+                   since a document was added before automatically
+                   triggering a new commit. 
+         openSearcher - if false, the commit causes recent index changes
+           to be flushed to stable storage, but does not cause a new
+           searcher to be opened to make those changes visible.
+
+         If the updateLog is enabled, then it's highly recommended to
+         have some sort of hard autoCommit to limit the log size.
+      -->
+    <autoCommit>
+      <maxTime>15000</maxTime>
+      <openSearcher>false</openSearcher>
+    </autoCommit>
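+
+    <!-- For reference, a "commitWithin" add message as suggested above; the
+         document and its id value are illustrative:
+    <add commitWithin="10000">
+      <doc>
+        <field name="id">example-doc-1</field>
+      </doc>
+    </add>
+      -->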
+
+    <!-- softAutoCommit is like autoCommit except it causes a
+         'soft' commit which only ensures that changes are visible
+         but does not ensure that data is synced to disk.  This is
+         faster and more near-realtime friendly than a hard commit.
+      -->
+    <!--
+      <autoSoftCommit> 
+        <maxTime>1000</maxTime> 
+      </autoSoftCommit>
+     -->
+
+    <!-- Update Related Event Listeners
+         
+         Various IndexWriter related events can trigger Listeners to
+         take actions.
+
+         postCommit - fired after every commit or optimize command
+         postOptimize - fired after every optimize command
+      -->
+    <!-- The RunExecutableListener executes an external command from a
+         hook such as postCommit or postOptimize.
+         
+         exe - the name of the executable to run
+         dir - dir to use as the current working directory. (default=".")
+         wait - the calling thread waits until the executable returns. 
+                (default="true")
+         args - the arguments to pass to the program.  (default is none)
+         env - environment variables to set.  (default is none)
+      -->
+    <!-- This example shows how RunExecutableListener could be used
+         with the script based replication...
+         http://wiki.apache.org/solr/CollectionDistribution
+      -->
+    <!--
+       <listener event="postCommit" class="solr.RunExecutableListener">
+         <str name="exe">solr/bin/snapshooter</str>
+         <str name="dir">.</str>
+         <bool name="wait">true</bool>
+         <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
+         <arr name="env"> <str>MYVAR=val1</str> </arr>
+       </listener>
+      -->
+
+  </updateHandler>
+
+  <!-- IndexReaderFactory
+
+       Use the following format to specify a custom IndexReaderFactory,
+       which allows for alternate IndexReader implementations.
+
+       ** Experimental Feature **
+
+       Please note - Using a custom IndexReaderFactory may prevent
+       certain other features from working. The API to
+       IndexReaderFactory may change without warning or may even be
+       removed from future releases if the problems cannot be
+       resolved.
+
+
+       ** Features that may not work with custom IndexReaderFactory **
+
+       The ReplicationHandler assumes a disk-resident index. Using a
+       custom IndexReader implementation may cause incompatibility
+       with ReplicationHandler and may cause replication to not work
+       correctly. See SOLR-1366 for details.
+
+    -->
+  <!--
+  <indexReaderFactory name="IndexReaderFactory" class="package.class">
+    <str name="someArg">Some Value</str>
+  </indexReaderFactory >
+  -->
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Query section - these settings control query time things like caches
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <query>
+    <!-- Max Boolean Clauses
+
+         Maximum number of clauses in each BooleanQuery,  an exception
+         is thrown if exceeded.
+
+         ** WARNING **
+         
+         This option actually modifies a global Lucene property that
+         will affect all SolrCores.  If multiple solrconfig.xml files
+         disagree on this property, the value at any given moment will
+         be based on the last SolrCore to be initialized.
+         
+      -->
+    <maxBooleanClauses>1024</maxBooleanClauses>
+
+
+    <!-- Solr Internal Query Caches
+
+         There are two implementations of cache available for Solr,
+         LRUCache, based on a synchronized LinkedHashMap, and
+         FastLRUCache, based on a ConcurrentHashMap.  
+
+         FastLRUCache has faster gets and slower puts in single
+         threaded operation and thus is generally faster than LRUCache
+         when the hit ratio of the cache is high (> 75%), and may be
+         faster under other scenarios on multi-cpu systems.
+    -->
+
+    <!-- Filter Cache
+
+         Cache used by SolrIndexSearcher for filters (DocSets),
+         unordered sets of *all* documents that match a query.  When a
+         new searcher is opened, its caches may be prepopulated or
+         "autowarmed" using data from caches in the old searcher.
+         autowarmCount is the number of items to prepopulate.  For
+         LRUCache, the autowarmed items will be the most recently
+         accessed items.
+
+         Parameters:
+           class - the SolrCache implementation to use
+               (LRUCache or FastLRUCache)
+           size - the maximum number of entries in the cache
+           initialSize - the initial capacity (number of entries) of
+               the cache.  (see java.util.HashMap)
+           autowarmCount - the number of entries to prepopulate from
+               an old cache.
+      -->
+    <filterCache class="solr.FastLRUCache"
+                 size="512"
+                 initialSize="512"
+                 autowarmCount="0"/>
+
+    <!-- Query Result Cache
+         
+         Caches results of searches - ordered lists of document ids
+         (DocList) based on a query, a sort, and the range of documents requested.  
+      -->
+    <queryResultCache class="solr.LRUCache"
+                      size="512"
+                      initialSize="512"
+                      autowarmCount="0"/>
+
+    <!-- Document Cache
+
+         Caches Lucene Document objects (the stored fields for each
+         document).  Since Lucene internal document ids are transient,
+         this cache will not be autowarmed.  
+      -->
+    <documentCache class="solr.LRUCache"
+                   size="512"
+                   initialSize="512"
+                   autowarmCount="0"/>
+
+    <!-- Field Value Cache
+         
+         Cache used to hold field values that are quickly accessible
+         by document id.  The fieldValueCache is created by default
+         even if not configured here.
+      -->
+    <!--
+       <fieldValueCache class="solr.FastLRUCache"
+                        size="512"
+                        autowarmCount="128"
+                        showItems="32" />
+      -->
+
+    <!-- Custom Cache
+
+         Example of a generic cache.  These caches may be accessed by
+         name through SolrIndexSearcher.getCache(),cacheLookup(), and
+         cacheInsert().  The purpose is to enable easy caching of
+         user/application level data.  The regenerator argument should
+         be specified as an implementation of solr.CacheRegenerator 
+         if autowarming is desired.  
+      -->
+    <!--
+       <cache name="myUserCache"
+              class="solr.LRUCache"
+              size="4096"
+              initialSize="1024"
+              autowarmCount="1024"
+              regenerator="com.mycompany.MyRegenerator"
+              />
+      -->
+
+
+    <!-- Lazy Field Loading
+
+         If true, stored fields that are not requested will be loaded
+         lazily.  This can result in a significant speed improvement
+         if the usual case is to not load all stored fields,
+         especially if the skipped fields are large compressed text
+         fields.
+    -->
+    <enableLazyFieldLoading>true</enableLazyFieldLoading>
+
+    <!-- Use Filter For Sorted Query
+ 
+         A possible optimization that attempts to use a filter to
+         satisfy a search.  If the requested sort does not include
+         score, then the filterCache will be checked for a filter
+         matching the query. If found, the filter will be used as the
+         source of document ids, and then the sort will be applied to
+         that.
+ 
+         For most situations, this will not be useful unless you
+         frequently get the same search repeatedly with different sort
+         options, and none of them ever use "score"
+      -->
+    <!--
+       <useFilterForSortedQuery>true</useFilterForSortedQuery>
+      -->
+
+    <!-- Result Window Size
+ 
+         An optimization for use with the queryResultCache.  When a search
+         is requested, a superset of the requested number of document ids
+         are collected.  For example, if a search for a particular query
+         requests matching documents 10 through 19, and queryResultWindowSize is 50,
+         then documents 0 through 49 will be collected and cached.  Any further
+         requests in that range can be satisfied via the cache.  
+      -->
+    <queryResultWindowSize>20</queryResultWindowSize>
+
+    <!-- Maximum number of documents to cache for any entry in the
+         queryResultCache. 
+      -->
+    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
+
+    <!-- Query Related Event Listeners
+ 
+         Various IndexSearcher related events can trigger Listeners to
+         take actions.
+ 
+         newSearcher - fired whenever a new searcher is being prepared
+         and there is a current searcher handling requests (aka
+         registered).  It can be used to prime certain caches to
+         prevent long request times for certain requests.
+ 
+         firstSearcher - fired whenever a new searcher is being
+         prepared but there is no current registered searcher to handle
+         requests or to gain autowarming data from.
+ 
+         
+      -->
+    <!-- QuerySenderListener takes an array of NamedList and executes a
+         local query request for each NamedList in sequence. 
+      -->
+    <listener event="newSearcher" class="solr.QuerySenderListener">
+      <arr name="queries">
+        <!--
+           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
+           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
+          -->
+      </arr>
+    </listener>
+    <listener event="firstSearcher" class="solr.QuerySenderListener">
+      <arr name="queries">
+        <!--
+        <lst>
+          <str name="q">static firstSearcher warming in solrconfig.xml</str>
+        </lst>
+        -->
+      </arr>
+    </listener>
+
+    <!-- Use Cold Searcher
+
+         If a search request comes in and there is no current
+         registered searcher, then immediately register the still
+         warming searcher and use it.  If "false" then all requests
+         will block until the first searcher is done warming.
+      -->
+    <useColdSearcher>false</useColdSearcher>
+
+    <!-- Max Warming Searchers
+         
+         Maximum number of searchers that may be warming in the
+         background concurrently.  An error is returned if this limit
+         is exceeded.
+
+         Recommend values of 1-2 for read-only slaves, higher for
+         masters w/o cache warming.
+      -->
+    <maxWarmingSearchers>2</maxWarmingSearchers>
+
+  </query>
+
+
+  <!-- Request Dispatcher
+
+       This section contains instructions for how the SolrDispatchFilter
+       should behave when processing requests for this SolrCore.
+
+       handleSelect is a legacy option that affects the behavior of requests
+       such as /select?qt=XXX
+
+       handleSelect="true" will cause the SolrDispatchFilter to process
+       the request and dispatch the query to a handler specified by the 
+       "qt" param, assuming "/select" isn't already registered.
+
+       handleSelect="false" will cause the SolrDispatchFilter to
+       ignore "/select" requests, resulting in a 404 unless a handler
+       is explicitly registered with the name "/select"
+
+       handleSelect="true" is not recommended for new users, but is the default
+       for backwards compatibility
+    -->
+  <requestDispatcher handleSelect="false" >
+    <!-- Request Parsing
+
+         These settings indicate how Solr Requests may be parsed, and
+         what restrictions may be placed on the ContentStreams from
+         those requests
+
+         enableRemoteStreaming - enables use of the stream.file
+         and stream.url parameters for specifying remote streams.
+
+         multipartUploadLimitInKB - specifies the max size (in KiB) of
+         Multipart File Uploads that Solr will allow in a Request.
+         
+         formdataUploadLimitInKB - specifies the max size (in KiB) of
+         form data (application/x-www-form-urlencoded) sent via
+         POST. You can use POST to pass request parameters not
+         fitting into the URL.
+         
+         addHttpRequestToContext - if set to true, it will instruct
+         the requestParsers to include the original HttpServletRequest
+         object in the context map of the SolrQueryRequest under the 
+         key "httpRequest". It will not be used by any of the existing
+         Solr components, but may be useful when developing custom 
+         plugins.
+         
+         *** WARNING ***
+         The settings below authorize Solr to fetch remote files; you
+         should make sure your system has some authentication in place before
+         using enableRemoteStreaming="true".
+
+      -->
+    <requestParsers enableRemoteStreaming="true"
+                    multipartUploadLimitInKB="2048000"
+                    formdataUploadLimitInKB="2048"
+                    addHttpRequestToContext="false"/>
+
+    <!-- HTTP Caching
+
+         Set HTTP caching related parameters (for proxy caches and clients).
+
+         The options below instruct Solr not to output any HTTP Caching
+         related headers
+      -->
+    <httpCaching never304="true" />
+    <!-- If you include a <cacheControl> directive, it will be used to
+         generate a Cache-Control header (as well as an Expires header
+         if the value contains "max-age=")
+         
+         By default, no Cache-Control header is generated.
+         
+         You can use the <cacheControl> option even if you have set
+         never304="true"
+      -->
+    <!--
+       <httpCaching never304="true" >
+         <cacheControl>max-age=30, public</cacheControl> 
+       </httpCaching>
+      -->
+    <!-- To enable Solr to respond with automatically generated HTTP
+         Caching headers, and to response to Cache Validation requests
+         correctly, set the value of never304="false"
+         
+         This will cause Solr to generate Last-Modified and ETag
+         headers based on the properties of the Index.
+
+         The following options can also be specified to affect the
+         values of these headers...
+
+         lastModFrom - the default value is "openTime" which means the
+         Last-Modified value (and validation against If-Modified-Since
+         requests) will all be relative to when the current Searcher
+         was opened.  You can change it to lastModFrom="dirLastMod" if
+         you want the value to exactly correspond to when the physical
+         index was last modified.
+
+         etagSeed="..." is an option you can change to force the ETag
+         header (and validation against If-None-Match requests) to be
+         different even if the index has not changed (ie: when making
+         significant changes to your config file)
+
+         (lastModFrom and etagSeed are both ignored if you use
+         the never304="true" option)
+      -->
+    <!--
+       <httpCaching lastModifiedFrom="openTime"
+                    etagSeed="Solr">
+         <cacheControl>max-age=30, public</cacheControl> 
+       </httpCaching>
+      -->
+  </requestDispatcher>
+
+  <!-- Request Handlers 
+
+       http://wiki.apache.org/solr/SolrRequestHandler
+
+       Incoming queries will be dispatched to a specific handler by name
+       based on the path specified in the request.
+
+       Legacy behavior: If the request path uses "/select" but no Request
+       Handler has that name, and if handleSelect="true" has been specified in
+       the requestDispatcher, then the Request Handler is dispatched based on
+       the qt parameter.  Handlers without a leading '/' are accessed
+       like so: http://host/app/[core/]select?qt=name  If no qt is
+       given, then the requestHandler that declares default="true" will be
+       used or the one named "standard".
+
+       If a Request Handler is declared with startup="lazy", then it will
+       not be initialized until the first request that uses it.
+
+    -->
+
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+   <!-- GIBS: Adding dataimport request handler -->
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+   <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
+     <lst name="defaults">
+       <str name="config">/Applications/solr-5.1.0/server/solr/product_type/conf/data-config.xml</str>
+     </lst>
+   </requestHandler>
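+
+   <!-- Typical invocations of this handler, for reference (paths are
+        illustrative; the core name follows this configuration):
+          /solr/product_type/dataimport?command=full-import
+          /solr/product_type/dataimport?command=delta-import
+          /solr/product_type/dataimport?command=status
+        The import queries themselves live in the data-config.xml referenced
+        above.
+     -->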
+
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+   <!-- GIBS: Adding replication request handler -->
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+   <!--
+   <requestHandler name="/replication" class="solr.ReplicationHandler" >
+     <lst name="master">
+       <str name="enable">${enable.master:false}</str>
+       <str name="replicateAfter">commit</str>
+       <str name="confFiles">schema.xml,stopwords.txt</str>
+    </lst>
+    <lst name="slave">
+       <str name="enable">${enable.slave:false}</str>
+      <str name="masterUrl">http://master_host:8983/solr</str>
+      <str name="pollInterval">00:00:60</str>
+    </lst>
+   </requestHandler>
+  -->
+
+
+  <!-- SearchHandler
+
+       http://wiki.apache.org/solr/SearchHandler
+
+       For processing Search Queries, the primary Request Handler
+       provided with Solr is "SearchHandler" It delegates to a sequent
+       of SearchComponents (see below) and supports distributed
+       queries across multiple shards
+    -->
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <!-- default values for query parameters can be specified, these
+         will be overridden by parameters in the request
+      -->
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <int name="rows">10</int>
+      <str name="df">text</str>
+    </lst>
+    <!-- In addition to defaults, "appends" params can be specified
+         to identify values which should be appended to the list of
+         multi-val params from the query (or the existing "defaults").
+      -->
+    <!-- In this example, the param "fq=instock:true" would be appended to
+         any query time fq params the user may specify, as a mechanism for
+         partitioning the index, independent of any user selected filtering
+         that may also be desired (perhaps as a result of faceted searching).
+
+         NOTE: there is *absolutely* nothing a client can do to prevent these
+         "appends" values from being used, so don't use this mechanism
+         unless you are sure you always want it.
+      -->
+    <!--
+       <lst name="appends">
+         <str name="fq">inStock:true</str>
+       </lst>
+      -->
+    <!-- "invariants" are a way of letting the Solr maintainer lock down
+         the options available to Solr clients.  Any params values
+         specified here are used regardless of what values may be specified
+         in either the query, the "defaults", or the "appends" params.
+
+         In this example, the facet.field and facet.query params would
+         be fixed, limiting the facets clients can use.  Faceting is
+         not turned on by default - but if the client does specify
+         facet=true in the request, these are the only facets they
+         will be able to see counts for; regardless of what other
+         facet.field or facet.query params they may specify.
+
+         NOTE: there is *absolutely* nothing a client can do to prevent these
+         "invariants" values from being used, so don't use this mechanism
+         unless you are sure you always want it.
+      -->
+    <!--
+       <lst name="invariants">
+         <str name="facet.field">cat</str>
+         <str name="facet.field">manu_exact</str>
+         <str name="facet.query">price:[* TO 500]</str>
+         <str name="facet.query">price:[500 TO *]</str>
+       </lst>
+      -->
+    <!-- If the default list of SearchComponents is not desired, that
+         list can either be overridden completely, or components can be
+         prepended or appended to the default list.  (see below)
+      -->
+    <!--
+       <arr name="components">
+         <str>nameOfCustomComponent1</str>
+         <str>nameOfCustomComponent2</str>
+       </arr>
+      -->
+  </requestHandler>
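+
+  <!-- An illustrative request against this handler; explicit parameters
+       override the "defaults" block above:
+       /solr/product_type/select?q=*:*&rows=20&wt=json
+    -->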
+
+  <!-- A request handler that returns indented JSON by default -->
+  <requestHandler name="/query" class="solr.SearchHandler">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <str name="wt">json</str>
+      <str name="indent">true</str>
+    </lst>
+  </requestHandler>
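+
+  <!-- e.g. /solr/product_type/query?q=*:* returns indented JSON per the
+       defaults above (path illustrative). -->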
+
+
+  <requestHandler name="/browse" class="solr.SearchHandler" useParams="query,facets,velocity,browse">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+    </lst>
+  </requestHandler>
+
+
+  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
+    <lst name="defaults">
+      <str name="df">_text_</str>
+    </lst>
+  </initParams>
+
+  <initParams path="/update/**">
+    <lst name="defaults">
+      <str name="update.chain">add-unknown-fields-to-the-schema</str>
+    </lst>
+  </initParams>
+
+  <!-- Solr Cell Update Request Handler
+
+       http://wiki.apache.org/solr/ExtractingRequestHandler 
+
+    -->
+  <requestHandler name="/update/extract"
+                  startup="lazy"
+                  class="solr.extraction.ExtractingRequestHandler" >
+    <lst name="defaults">
+      <str name="lowernames">true</str>
+      <str name="fmap.meta">ignored_</str>
+      <str name="fmap.content">_text_</str>
+    </lst>
+  </requestHandler>
+
+  <!--
+    The export request handler is used to export full sorted result sets.
+    Do not change these defaults.
+  -->
+
+  <requestHandler name="/export" class="solr.SearchHandler">
+    <lst name="invariants">
+      <str name="rq">{!xport}</str>
+      <str name="wt">xsort</str>
+      <str name="distrib">false</str>
+    </lst>
+
+    <arr name="components">
+      <str>query</str>
+    </arr>
+  </requestHandler>
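+
+  <!-- Illustrative usage: /export requires an explicit sort and a field list
+       of docValues fields, e.g.
+       /solr/product_type/export?q=*:*&sort=id+asc&fl=id
+       (field names here are illustrative).
+    -->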
+
+
+  <!--
+  Distributed Stream processing.
+  -->
+
+  <requestHandler name="/stream" class="solr.StreamHandler">
+    <lst name="invariants">
+      <str name="wt">json</str>
+      <str name="distrib">false</str>
+    </lst>
+  </requestHandler>
+
+
+
+  <!-- Field Analysis Request Handler
+
+       RequestHandler that provides much the same functionality as
+       analysis.jsp. Provides the ability to specify multiple field
+       types and field names in the same request and outputs
+       index-time and query-time analysis for each of them.
+
+       Request parameters are:
+       analysis.fieldname - field name whose analyzers are to be used
+
+       analysis.fieldtype - field type whose analyzers are to be used
+       analysis.fieldvalue - text for index-time analysis
+       q (or analysis.q) - text for query time analysis
+       analysis.showmatch (true|false) - When set to true and when
+           query analysis is performed, the produced tokens of the
+           field value analysis will be marked as "matched" for every
+           token that is produced by the query analysis
+   -->
+  <requestHandler name="/analysis/field"
+                  startup="lazy"
+                  class="solr.FieldAnalysisRequestHandler" />
+
+
+  <!-- Document Analysis Handler
+
+       http://wiki.apache.org/solr/AnalysisRequestHandler
+
+       An analysis handler that provides a breakdown of the analysis
+       process of provided documents. This handler expects a (single)
+       content stream with the following format:
+
+       <docs>
+         <doc>
+           <field name="id">1</field>
+           <field name="name">The Name</field>
+           <field name="text">The Text Value</field>
+         </doc>
+         <doc>...</doc>
+         <doc>...</doc>
+         ...
+       </docs>
+
+    Note: Each document must contain a field which serves as the
+    unique key. This key is used in the returned response to associate
+    an analysis breakdown to the analyzed document.
+
+    Like the FieldAnalysisRequestHandler, this handler also supports
+    query analysis by sending either an "analysis.query" or "q"
+    request parameter that holds the query text to be analyzed. It
+    also supports the "analysis.showmatch" parameter; when it is set to
+    true, all field tokens that match the query tokens will be marked
+    as a "match".
+  -->
+  <requestHandler name="/analysis/document"
+                  class="solr.DocumentAnalysisRequestHandler"
+                  startup="lazy" />
+
+  <!-- Echo the request contents back to the client -->
+  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <str name="echoHandler">true</str>
+    </lst>
+  </requestHandler>
+
+  <!-- Search Components
+
+       Search components are registered to SolrCore and used by 
+       instances of SearchHandler (which can access them by name)
+       
+       By default, the following components are available:
+       
+       <searchComponent name="query"     class="solr.QueryComponent" />
+       <searchComponent name="facet"     class="solr.FacetComponent" />
+       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
+       <searchComponent name="highlight" class="solr.HighlightComponent" />
+       <searchComponent name="stats"     class="solr.StatsComponent" />
+       <searchComponent name="debug"     class="solr.DebugComponent" />
+   
+       Default configuration in a requestHandler would look like:
+
+       <arr name="components">
+         <str>query</str>
+         <str>facet</str>
+         <str>mlt</str>
+         <str>highlight</str>
+         <str>stats</str>
+         <str>debug</str>
+       </arr>
+
+       If you register a searchComponent to one of the standard names, 
+       that will be used instead of the default.
+
+       To insert components before or after the 'standard' components, use:
+    
+       <arr name="first-components">
+         <str>myFirstComponentName</str>
+       </arr>
+    
+       <arr name="last-components">
+         <str>myLastComponentName</str>
+       </arr>
+
+       NOTE: The component registered with the name "debug" will
+       always be executed after the "last-components" 
+       
+     -->
+
+  <!-- Spell Check
+
+       The spell check component can return a list of alternative spelling
+       suggestions.  
+
+       http://wiki.apache.org/solr/SpellCheckComponent
+    -->
+  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+
+    <str name="queryAnalyzerFieldType">text_general</str>
+
+    <!-- Multiple "Spell Checkers" can be declared and used by this
+         component
+      -->
+
+    <!-- a spellchecker built from a field of the main index -->
+    <lst name="spellchecker">
+      <str name="name">default</str>
+      <str name="field">text</str>
+      <str name="classname">solr.DirectSolrSpellChecker</str>
+      <!-- the spellcheck distance measure used, the default is the internal levenshtein -->
+      <str name="distanceMeasure">internal</str>
+      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
+      <float name="accuracy">0.5</float>
+      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
+      <int name="maxEdits">2</int>
+      <!-- the minimum shared prefix when enumerating terms -->
+      <int name="minPrefix">1</int>
+      <!-- maximum number of inspections per result. -->
+      <int name="maxInspections">5</int>
+      <!-- minimum length of a query term to be considered for correction -->
+      <int name="minQueryLength">4</int>
+      <!-- maximum threshold of documents a query term can appear in to be considered for correction -->
+      <float name="maxQueryFrequency">0.01</float>
+      <!-- uncomment this to require suggestions to occur in 1% of the documents
+      	<float name="thresholdTokenFrequency">.01</float>
+      -->
+    </lst>
+
+    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
+    <lst name="spellchecker">
+      <str name="name">wordbreak</str>
+      <str name="classname">solr.WordBreakSolrSpellChecker</str>
+      <str name="field">name</str>
+      <str name="combineWords">true</str>
+      <str name="breakWords">true</str>
+      <int name="maxChanges">10</int>
+    </lst>
+
+    <!-- a spellchecker that uses a different distance measure -->
+    <!--
+       <lst name="spellchecker">
+         <str name="name">jarowinkler</str>
+         <str name="field">spell</str>
+         <str name="classname">solr.DirectSolrSpellChecker</str>
+         <str name="distanceMeasure">
+           org.apache.lucene.search.spell.JaroWinklerDistance
+         </str>
+       </lst>
+     -->
+
+    <!-- a spellchecker that uses an alternate comparator
+
+         comparatorClass can be one of:
+          1. score (default)
+          2. freq (Frequency first, then score)
+          3. A fully qualified class name
+      -->
+    <!--
+       <lst name="spellchecker">
+         <str name="name">freq</str>
+         <str name="field">lowerfilt</str>
+         <str name="classname">solr.DirectSolrSpellChecker</str>
+         <str name="comparatorClass">freq</str>
+       </lst>
+      -->
+
+    <!-- A spellchecker that reads the list of words from a file -->
+    <!--
+       <lst name="spellchecker">
+         <str name="classname">solr.FileBasedSpellChecker</str>
+         <str name="name">file</str>
+         <str name="sourceLocation">spellings.txt</str>
+         <str name="characterEncoding">UTF-8</str>
+         <str name="spellcheckIndexDir">spellcheckerFile</str>
+       </lst>
+      -->
+  </searchComponent>
+
+  <!-- A request handler for demonstrating the spellcheck component.  
+
+       NOTE: This is purely as an example.  The whole purpose of the
+       SpellCheckComponent is to hook it into the request handler that
+       handles your normal user queries so that a separate request is
+       not needed to get suggestions.
+
+       IN OTHER WORDS, THERE IS A REALLY GOOD CHANCE THE SETUP BELOW IS
+       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
+       
+       See http://wiki.apache.org/solr/SpellCheckComponent for details
+       on the request parameters.
+    -->
+  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <!-- Solr will use suggestions from both the 'default' spellchecker
+           and from the 'wordbreak' spellchecker and combine them.
+           collations (re-written queries) can include a combination of
+           corrections from both spellcheckers -->
+      <str name="spellcheck.dictionary">default</str>
+      <str name="spellcheck.dictionary">wordbreak</str>
+      <str name="spellcheck">on</str>
+      <str name="spellcheck.extendedResults">true</str>
+      <str name="spellcheck.count">10</str>
+      <str name="spellcheck.alternativeTermCount">5</str>
+      <str name="spellcheck.maxResultsForSuggest">5</str>
+      <str name="spellcheck.collate">true</str>
+      <str name="spellcheck.collateExtendedResults">true</str>
+      <str name="spellcheck.maxCollationTries">10</str>
+      <str name="spellcheck.maxCollations">5</str>
+    </lst>
+    <arr name="last-components">
+      <str>spellcheck</str>
+    </arr>
+  </requestHandler>
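+
+  <!-- Example (a sketch; assumes a core named "mycore" with documents indexed
+       in the fields the spellcheckers above are built on):
+       curl "http://localhost:8983/solr/mycore/spell?q=helo+wrold"
+       Suggestions and collations come back with the response because the
+       defaults above already turn spellcheck on.
+    -->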
+
+  <!-- Term Vector Component
+
+       http://wiki.apache.org/solr/TermVectorComponent
+    -->
+  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
+
+  <!-- A request handler for demonstrating the term vector component
+
+       This is purely as an example.
+
+       In reality you will likely want to add the component to your 
+       already specified request handlers. 
+    -->
+  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <bool name="tv">true</bool>
+    </lst>
+    <arr name="last-components">
+      <str>tvComponent</str>
+    </arr>
+  </requestHandler>
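+
+  <!-- Example (a sketch; assumes a core named "mycore" whose queried fields
+       are indexed with termVectors="true"):
+       curl "http://localhost:8983/solr/mycore/tvrh?q=*:*&fl=id&tv.tf=true&tv.df=true"
+    -->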
+
+  <!-- Clustering Component. (Omitted here. See the default Solr example for a typical configuration.) -->
+
+  <!-- Terms Component
+
+       http://wiki.apache.org/solr/TermsComponent
+
+       A component to return terms and document frequency of those
+       terms
+    -->
+  <searchComponent name="terms" class="solr.TermsComponent"/>
+
+  <!-- A request handler for demonstrating the terms component -->
+  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <bool name="terms">true</bool>
+      <bool name="distrib">false</bool>
+    </lst>
+    <arr name="components">
+      <str>terms</str>
+    </arr>
+  </requestHandler>
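+
+  <!-- Example (a sketch; assumes a core named "mycore" with a populated "name" field):
+       curl "http://localhost:8983/solr/mycore/terms?terms.fl=name&terms.limit=10"
+    -->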
+
+
+  <!-- Query Elevation Component
+
+       http://wiki.apache.org/solr/QueryElevationComponent
+
+       a search component that enables you to configure the top
+       results for a given query regardless of the normal lucene
+       scoring.
+    -->
+  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
+    <!-- pick a fieldType to analyze queries -->
+    <str name="queryFieldType">string</str>
+    <str name="config-file">elevate.xml</str>
+  </searchComponent>
+
+  <!-- A request handler for demonstrating the elevator component -->
+  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+    </lst>
+    <arr name="last-components">
+      <str>elevator</str>
+    </arr>
+  </requestHandler>
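+
+  <!-- Example (a sketch; assumes a core named "mycore" and an entry for the
+       query "ipod" in elevate.xml):
+       curl "http://localhost:8983/solr/mycore/elevate?q=ipod&fl=id,[elevated]"
+    -->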
+
+  <!-- Highlighting Component
+
+       http://wiki.apache.org/solr/HighlightingParameters
+    -->
+  <searchComponent class="solr.HighlightComponent" name="highlight">
+    <highlighting>
+      <!-- Configure the standard fragmenter -->
+      <!-- This could most likely be commented out in the "default" case -->
+      <fragmenter name="gap"
+                  default="true"
+                  class="solr.highlight.GapFragmenter">
+        <lst name="defaults">
+          <int name="hl.fragsize">100</int>
+        </lst>
+      </fragmenter>
+
+      <!-- A regular-expression-based fragmenter 
+           (for sentence extraction) 
+        -->
+      <fragmenter name="regex"
+                  class="solr.highlight.RegexFragmenter">
+        <lst name="defaults">
+          <!-- slightly smaller fragsizes work better because of slop -->
+          <int name="hl.fragsize">70</int>
+          <!-- allow 50% slop on fragment sizes -->
+          <float name="hl.regex.slop">0.5</float>
+          <!-- a basic sentence pattern -->
+          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
+        </lst>
+      </fragmenter>
+
+      <!-- Configure the standard formatter -->
+      <formatter name="html"
+                 default="true"
+                 class="solr.highlight.HtmlFormatter">
+        <lst name="defaults">
+          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
+          <str name="hl.simple.post"><![CDATA[</em>]]></str>
+        </lst>
+      </formatter>
+
+      <!-- Configure the standard encoder -->
+      <encoder name="html"
+               class="solr.highlight.HtmlEncoder" />
+
+      <!-- Configure the standard fragListBuilder -->
+      <fragListBuilder name="simple"
+                       class="solr.highlight.SimpleFragListBuilder"/>
+
+      <!-- Configure the single fragListBuilder -->
+      <fragListBuilder name="single"
+                       class="solr.highlight.SingleFragListBuilder"/>
+
+      <!-- Configure the weighted fragListBuilder -->
+      <fragListBuilder name="weighted"
+                       default="true"
+                       class="solr.highlight.WeightedFragListBuilder"/>
+
+      <!-- default tag FragmentsBuilder -->
+      <fragmentsBuilder name="default"
+                        default="true"
+                        class="solr.highlight.ScoreOrderFragmentsBuilder">
+        <!-- 
+        <lst name="defaults">
+          <str name="hl.multiValuedSeparatorChar">/</str>
+        </lst>
+        -->
+      </fragmentsBuilder>
+
+      <!-- multi-colored tag FragmentsBuilder -->
+      <fragmentsBuilder name="colored"
+                        class="solr.highlight.ScoreOrderFragmentsBuilder">
+        <lst name="defaults">
+          <str name="hl.tag.pre"><![CDATA[
+               <b style="background:yellow">,<b style="background:lawngreen">,
+               <b style="background:aquamarine">,<b style="background:magenta">,
+               <b style="background:palegreen">,<b style="background:coral">,
+               <b style="background:wheat">,<b style="background:khaki">,
+               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
+          <str name="hl.tag.post"><![CDATA[</b>]]></str>
+        </lst>
+      </fragmentsBuilder>
+
+      <boundaryScanner name="default"
+                       default="true"
+                       class="solr.highlight.SimpleBoundaryScanner">
+        <lst name="defaults">
+          <str name="hl.bs.maxScan">10</str>
+          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
+        </lst>
+      </boundaryScanner>
+
+      <boundaryScanner name="breakIterator"
+                       class="solr.highlight.BreakIteratorBoundaryScanner">
+        <lst name="defaults">
+          <!-- type should be one of CHARACTER, WORD (default), LINE and SENTENCE -->
+          <str name="hl.bs.type">WORD</str>
+          <!-- language and country are used when constructing the Locale object, -->
+          <!-- and the Locale object will be used when getting an instance of BreakIterator -->
+          <str name="hl.bs.language">en</str>
+          <str name="hl.bs.country">US</str>
+        </lst>
+      </boundaryScanner>
+    </highlighting>
+  </searchComponent>
+
+  <!-- Update Processors
+
+       Chains of Update Processor Factories for dealing with Update
+       Requests can be declared, and then used by name in Update
+       Request Processors
+
+       http://wiki.apache.org/solr/UpdateRequestProcessor
+
+    -->
+  
+  <!-- Add unknown fields to the schema 
+  
+       An example field type guessing update processor that will
+       attempt to parse string-typed field values as Booleans, Longs,
+       Doubles, or Dates, and then add schema fields with the guessed
+       field types.  
+       
+       This requires that the schema is both managed and mutable, by
+       declaring schemaFactory as ManagedIndexSchemaFactory, with
+       mutable specified as true. 
+       
+       See http://wiki.apache.org/solr/GuessingFieldTypes
+    -->
+  <updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
+    <!-- UUIDUpdateProcessorFactory will generate an id if none is present in the incoming document -->
+    <processor class="solr.UUIDUpdateProcessorFactory" />
+
+    <processor class="solr.LogUpdateProcessorFactory"/>
+    <processor class="solr.DistributedUpdateProcessorFactory"/>
+    <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
+    <processor class="solr.FieldNameMutatingUpdateProcessorFactory">
+      <str name="pattern">[^\w-\.]</str>
+      <str name="replacement">_</str>
+    </processor>
+    <processor class="solr.ParseBooleanFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseLongFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseDoubleFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseDateFieldUpdateProcessorFactory">
+      <arr name="format">
+        <str>yyyy-MM-dd'T'HH:mm:ss.SSSZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss,SSSZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss.SSS</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss,SSS</str>
+        <str>yyyy-MM-dd'T'HH:mm:ssZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss</str>
+        <str>yyyy-MM-dd'T'HH:mmZ</str>
+        <str>yyyy-MM-dd'T'HH:mm</str>
+        <str>yyyy-MM-dd HH:mm:ss.SSSZ</str>
+        <str>yyyy-MM-dd HH:mm:ss,SSSZ</str>
+        <str>yyyy-MM-dd HH:mm:ss.SSS</str>
+        <str>yyyy-MM-dd HH:mm:ss,SSS</str>
+        <str>yyyy-MM-dd HH:mm:ssZ</str>
+        <str>yyyy-MM-dd HH:mm:ss</str>
+        <str>yyyy-MM-dd HH:mmZ</str>
+        <str>yyyy-MM-dd HH:mm</str>
+        <str>yyyy-MM-dd</str>
+      </arr>
+    </processor>
+    <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
+      <str name="defaultFieldType">strings</str>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Boolean</str>
+        <str name="fieldType">booleans</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.util.Date</str>
+        <str name="fieldType">tdates</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Long</str>
+        <str name="valueClass">java.lang.Integer</str>
+        <str name="fieldType">tlongs</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Number</str>
+        <str name="fieldType">tdoubles</str>
+      </lst>
+    </processor>
+    <processor class="solr.RunUpdateProcessorFactory"/>
+  </updateRequestProcessorChain>
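+
+  <!-- Example (a sketch; assumes a core named "mycore" using this chain via
+       the /update initParams above; the field names are arbitrary):
+       curl -X POST -H "Content-Type: application/json" "http://localhost:8983/solr/mycore/update?commit=true" -d '[{"title":"Test","views":42,"published":"2014-01-01T00:00:00Z"}]'
+       The unknown fields are parsed and added to the managed schema with
+       guessed types (strings, tlongs, tdates).
+    -->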
+
+  <!-- Deduplication
+
+       An example dedup update processor that creates the "id" field
+       on the fly based on the hash code of some other fields.  This
+       example has overwriteDupes set to false since we are using the
+       id field as the signatureField and Solr will maintain
+       uniqueness based on that anyway.  
+       
+    -->
+  <!--
+     <updateRequestProcessorChain name="dedupe">
+       <processor class="solr.processor.SignatureUpdateProcessorFactory">
+         <bool name="enabled">true</bool>
+         <str name="signatureField">id</str>
+         <bool name="overwriteDupes">false</bool>
+         <str name="fields">name,features,cat</str>
+         <str name="signatureClass">solr.processor.Lookup3Signature</str>
+       </processor>
+       <processor class="solr.LogUpdateProcessorFactory" />
+       <processor class="solr.RunUpdateProcessorFactory" />
+     </updateRequestProcessorChain>
+    -->
+
+  <!-- Language identification
+
+       This example update chain identifies the language of the incoming
+       documents using the langid contrib. The detected language is
+       written to field language_s. No field name mapping is done.
+       The fields used for detection are text, title, subject and description,
+       making this example suitable for detecting languages from full-text
+       rich documents injected via ExtractingRequestHandler.
+       See more about langId at http://wiki.apache.org/solr/LanguageDetection
+    -->
+  <!--
+   <updateRequestProcessorChain name="langid">
+     <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
+       <str name="langid.fl">text,title,subject,description</str>
+       <str name="langid.langField">language_s</str>
+       <str name="langid.fallback">en</str>
+     </processor>
+     <processor class="solr.LogUpdateProcessorFactory" />
+     <processor class="solr.RunUpdateProcessorFactory" />
+   </updateRequestProcessorChain>
+  -->
+
+  <!-- Script update processor
+
+    This example hooks in an update processor implemented using JavaScript.
+
+    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
+  -->
+  <!--
+    <updateRequestProcessorChain name="script">
+      <processor class="solr.StatelessScriptUpdateProcessorFactory">
+        <str name="script">update-script.js</str>
+        <lst name="params">
+          <str name="config_param">example config parameter</str>
+        </lst>
+      </processor>
+      <processor class="solr.RunUpdateProcessorFactory" />
+    </updateRequestProcessorChain>
+  -->
+
+  <!-- Response Writers
+
+       http://wiki.apache.org/solr/QueryResponseWriter
+
+       Request responses will be written using the writer specified by
+       the 'wt' request parameter matching the name of a registered
+       writer.
+
+       The "default" writer is the default and will be used if 'wt' is
+       not specified in the request.
+    -->
+  <!-- The following response writers are implicitly configured unless
+       overridden...
+    -->
+  <!--
+     <queryResponseWriter name="xml" 
+                          default="true"
+                          class="solr.XMLResponseWriter" />
+     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
+     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
+     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
+     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
+     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
+     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
+     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
+    -->
+
+  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
+    <!-- For the purposes of the tutorial, JSON responses are written as
+     plain text so that they are easy to read in *any* browser.
+     If you expect a MIME type of "application/json", just remove this override.
+    -->
+    <str name="content-type">text/plain; charset=UTF-8</str>
+  </queryResponseWriter>
+
+  <!--
+     Custom response writers can be declared as needed...
+    -->
+  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
+    <str name="template.base.dir">${velocity.template.base.dir:}</str>
+  </queryResponseWriter>
+
+  <!-- XSLT response writer transforms the XML output by any xslt file found
+       in Solr's conf/xslt directory.  Changes to xslt files are checked for
+       every xsltCacheLifetimeSeconds.  
+    -->
+  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
+    <int name="xsltCacheLifetimeSeconds">5</int>
+  </queryResponseWriter>
+
+  <!-- Query Parsers
+
+       http://wiki.apache.org/solr/SolrQuerySyntax
+
+       Multiple QParserPlugins can be registered by name, and then
+       used in either the "defType" param for the QueryComponent (used
+       by SearchHandler) or in LocalParams
+    -->
+  <!-- example of registering a query parser -->
+  <!--
+     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
+    -->
+
+  <!-- Function Parsers
+
+       http://wiki.apache.org/solr/FunctionQuery
+
+       Multiple ValueSourceParsers can be registered by name, and then
+       used as function names when using the "func" QParser.
+    -->
+  <!-- example of registering a custom function parser  -->
+  <!--
+     <valueSourceParser name="myfunc" 
+                        class="com.mycompany.MyValueSourceParser" />
+    -->
+
+
+  <!-- Document Transformers
+       http://wiki.apache.org/solr/DocTransformers
+    -->
+  <!--
+     Could be something like:
+     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
+       <int name="connection">jdbc://....</int>
+     </transformer>
+     
+     To add a constant value to all docs, use:
+     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
+       <int name="value">5</int>
+     </transformer>
+     
+     If you want the user to still be able to change it with _value:something_ use this:
+     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
+       <double name="defaultValue">5</double>
+     </transformer>
+
+      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
+      EditorialMarkerFactory will do exactly that:
+     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
+    -->
+
+
+  <!-- Legacy config for the admin interface -->
+  <admin>
+    <defaultQuery>*:*</defaultQuery>
+  </admin>
+
+</config>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/index.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/index.apt b/src/site/apt/index.apt
new file mode 100644
index 0000000..5a22169
--- /dev/null
+++ b/src/site/apt/index.apt
@@ -0,0 +1,14 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id: $
+
+  ---
+  About OCSI
+  ---
+  Atsuya Takagi
+  ---
+  
+About Oceanographic Common Search Interface
+
+  The OCSI Program Set is the program set for building and accessing oceanographic data through RESTful services.

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/install/index.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/install/index.apt b/src/site/apt/install/index.apt
new file mode 100644
index 0000000..bc85151
--- /dev/null
+++ b/src/site/apt/install/index.apt
@@ -0,0 +1,144 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id: $
+
+  ---
+  Installation
+  ---
+  Atsuya Takagi
+  ---
+
+{Installation}
+
+   This document describes how to install and configure the OCSI program set. The following sections can be found in this document:
+
+   * {{{Preparations}Preparations}}
+
+   * {{{Configurations}Configurations}}
+
+
+{Preparations}
+
+   The OCSI program set uses Tornado, so it is assumed that Tornado is installed in the target environment. It is assumed that $TORNADO_HOME is where Tornado is installed.
+
+   Before running the OCSI program set, a number of Python files need to be copied. It is assumed that $OCSI_HOME is where the files are copied to.
+
++--
+% cp -r src/main/python/* $OCSI_HOME
++--
+         
+
+{Configurations}
+
+* OCSI
+
+   $OCSI_HOME/config.conf contains server level configuration.
+
++--
+[server]
+port=8890
+host=localhost
++--
+
+   "port" is the port number to bind to, and "host" is the hostname to bind to.
+
+   $OCSI_HOME/logging.conf contains logging related configuration.
+
++--
+[handler_timedRotatingFileHandler]
+class=handlers.TimedRotatingFileHandler
+level=DEBUG
+formatter=simpleFormatter
+args=(r'./tornado.log', 'midnight', 1, 30)
++--
+
+   './tornado.log' can be changed to a different log file name if needed. Change 'level=DEBUG' to the logging level you need; setting it to 'ERROR' will significantly reduce the amount of log messages.
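+
+   For example, to log only errors (a sketch; only the 'level' line changes):
+
++--
+[handler_timedRotatingFileHandler]
+class=handlers.TimedRotatingFileHandler
+level=ERROR
+formatter=simpleFormatter
+args=(r'./tornado.log', 'midnight', 1, 30)
++--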
+
+
+* OpenSearch Module for Dataset
+
+   $OCSI_HOME/plugins/dataset/[rss|atom|iso|gcmd]/plugin.conf contains a configuration for opensearch module for dataset.
+
++--
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://seastorm:8000/drupal2/dataset
+
+[service]
+url=http://localhost:8890
++--
+
+   "datasetUrl" under the "solr" section points to where Solr is running for datasets. "entriesPerPage" under the "solr" section specifies how many entries are included in each opensearch result. "datasetUrl" under the "portal" section points to the URL for each dataset; this is part of the URL from Drupal. "url" under the "service" section points to the URL of this service itself and is used to link each entry in the opensearch result.
+
+
+* OpenSearch Module for Granule
+
+   $OCSI_HOME/plugins/granule/[rss|atom|iso|fgdc]/plugin.conf contains a configuration for opensearch module for granule.
+
++--
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+database=username/password
++--
+
+   "datasetUrl" under the "solr" section points to where Solr is running for datasets. "granuleUrl" under the "solr" section points to where Solr is running for granules. "entriesPerPage" under the "solr" section specifies how many entries are included in each opensearch result. "url" under the "service" section points to the URL of this service itself and is used to link each entry in the opensearch result. "linkToGranule" is a comma-separated list of link types indicating where a granule is located. Links will be checked in the same order as specified in "linkToGranule", and the search result will show the link that is ONLINE. "database" is the credential used to connect to the Oracle database; it is used to perform spatial search.
+   
+   $OCSI_HOME/plugins/granule/[atom|rss]/plugin.conf requires "l2" under "service" section, which points to where PO.DAAC L2 Search Service is running. "bbox" under "service" can be set to "database" or "l2" depending on whether L2 granule spatial search is to be performed by Oracle or L2 Search Service respectively.
+   
+   $OCSI_HOME/plugins/granule/datacasting/plugin.conf requires "archivedWithin" under "solr" section. "archivedWithin" is specified in number of hours. As an example, if "archivedWithin" is set to 24, then the Datacasting feed will publish granules that have been archived within the last 24 hours.
+
+
+* Passthrough Module
+
+   $OCSI_HOME/plugins/passthrough/pt/plugin.conf contains a configuration for passthrough module.
+
++--
+[service]
+allow=localhost,seastorm:8000
++--
+
+   "allow" specifies a comma-separated list of domain names and port numbers that this module is allowed to access.
+
+
+{Libraries}
+
+   OCSI depends on the following third-party libraries.
+
+
+* Tornado
+
+   Go to http://www.tornadoweb.org/ and install it by following its installation documentation.
+   
+   The easiest way to install Tornado is to download the file, unarchive it, then copy it to $OCSI_HOME/libraries. Do not forget to add the path to PYTHONPATH if you installed it this way.
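+
+   For example, under csh (a sketch; adjust the path if you unpacked the library elsewhere):
+
++--
+% setenv PYTHONPATH ${PYTHONPATH}:$OCSI_HOME/libraries
++--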
+
+
+* Jinja2
+
+   Go to http://jinja.pocoo.org/docs and install it by following its installation documentation.
+
+   The easiest way to install Jinja2 is to download the file, unarchive it, then copy it to $OCSI_HOME/libraries. Do not forget to add the path to PYTHONPATH if you installed it this way.
+
+
+* cx_Oracle
+
+   Go to http://cx-oracle.sourceforge.net/ and install it by following its installation documentation.
+
+   cx_Oracle consists of C code that actually accesses the Oracle database and Python code that provides an API for developers to access the Oracle database. Thus, to install cx_Oracle, you need to compile it. However, the site provides an RPM and SRPM for CentOS, which is a binary-compatible Linux distribution, so we might be able to compile the SRPM and install the resulting RPM on our RHEL. Even better, cx_Oracle could be installed with yum if it is already in the list of packages that RedHat provides.
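+
+   As a rough sketch of a typical source build (the Oracle client path and unpacked directory name are assumptions; adjust for your environment):
+
++--
+% setenv ORACLE_HOME /path/to/oracle/client
+% cd cx_Oracle-5.1
+% python setup.py build
+% python setup.py install
++--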
+
+* python-dateutil
+
+   Go to http://pypi.python.org/pypi/python-dateutil and install it by following its installation documentation.
+
+* PycURL
+
+   Go to http://pycurl.sourceforge.net/ and install version 7.19.0 by following its installation documentation.

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/operate/index.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/operate/index.apt b/src/site/apt/operate/index.apt
new file mode 100644
index 0000000..e94d03e
--- /dev/null
+++ b/src/site/apt/operate/index.apt
@@ -0,0 +1,344 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id: $
+
+  ---
+  Operation
+  ---
+  Atsuya Takagi
+  ---
+  
+Overview
+
+   Operation for the OCSI program set.
+
+
+* OCSI
+
+   To run the OCSI program set, execute the following command. It is assumed that $TORNADO_HOME points to where Tornado is installed, $JINJA_HOME to where Jinja2 is installed, and $OCSI_HOME to where the OCSI program set is installed.
+
++--
+% cd $OCSI_HOME
+% setenv PYTHONPATH $PYTHONPATH:libraries:$TORNADO_HOME:$JINJA_HOME
+% python server.py
++--
+
+   The command above starts up Tornado with the opensearch module. Use Ctrl+C or the kill command to terminate the server.
+
+
+* OpenSearch Module for Dataset
+
+   Once the server is started, try accessing, for example, "http://localhost:8890/ws/search/dataset?format=[rss|atom]&keyword=ocean" to see if it returns anything. Note that to get an RSS feed, access it with "format=rss", and with "format=atom" for an Atom feed.
+
+   * Parameters
+
+      Parameters supported for opensearch are:
+
+      * keyword
+      
+         keyword specifies the search text used to search for datasets.
+
+      * startTime
+
+         start time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * endTime
+
+         stop time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * startIndex
+
+         start index of entries found for search.
+
+      * itemsPerPage
+
+         number of results per page for opensearch result.
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * instrument
+
+         dataset instrument.
+
+      * satellite
+
+         dataset satellite.
+
+      * fileFormat
+
+         dataset data format e.g. HDF, NetCDF.
+
+      * status
+
+         dataset status; only OPEN, PREVIEW, SIMULATED, REMOTE are supported.
+
+      * processLevel
+
+         dataset processing level e.g. 1, 2P.
+
+      * pretty
+
+         "true" to enable pretty output for xml.
+
+      * format
+
+         response format. Possible values are: atom, rss. If format is not specified, default response format is atom.
+
+      * sortBy
+
+         determines ordering of response. Possible values are: timeAsc, timeDesc, popularityAsc, popularityDesc. If sortBy is not specified, default sort order is by Apache Solr's score (most relevant dataset first).
+
+      * bbox
+
+        bounding box for spatial search. The format should look like "bbox=0.0,-45.0,180.0,40.0", which is in the order west, south, east, north. Longitude values need to be in the range [-180.0,180.0].
+
+      * full
+
+        "true" to return response with complete PO.DAAC metadata per entry.
+
+   * Examples
+   
+      * To search for datasets that match the term ocean (RSS)
+      
+         * curl -X GET "http://localhost:8890/ws/search/dataset?format=rss&keyword=ocean"
+
+      * To search for datasets that match the term ocean (Atom)
+      
+         * curl -X GET "http://localhost:8890/ws/search/dataset?format=atom&keyword=ocean"
+
+
+* OpenSearch Module for Granule
+
+   Once the server is started, try accessing, for example, "http://localhost:8890/ws/search/granule?format=[rss|atom]&datasetId=PODAAC-GH16G-2PE01" to see if it returns anything. Note that to get an RSS feed, access it with "format=rss", and with "format=atom" for an Atom feed.
+
+   * Parameters
+
+      Parameters supported for opensearch are:
+
+      * keyword
+      
+         keyword specifies the search text used to search for granules.
+
+      * startTime
+
+         lower bound for the granule start time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * endTime
+
+         upper bound for the granule start time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * startIndex
+
+         start index of entries found for search.
+
+      * itemsPerPage
+
+         number of results per page for opensearch result.
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * granuleName
+
+         granule name.
+
+      * pretty
+
+        "true" to enable pretty output for xml.
+
+      * bbox
+
+        bounding box for spatial search. The format should look like "bbox=0.0,-45.0,180.0,40.0", which is in the order west, south, east, north. Longitude values need to be in the range [-180.0,180.0].
+
+      * format
+
+         response format. Possible values are: atom, rss. If format is not specified, default response format is atom.
+
+      * sortBy
+
+         determines ordering of response. Possible values are: timeAsc, timeDesc. If sortBy is not specified, default sort order is timeDesc.
+
+      * full
+
+        "true" to return response with complete PO.DAAC metadata per entry.
+
+      Note that either datasetId or shortName must be specified.
+
+   * Examples
+   
+      * To search for granules in the dataset with persistent ID "PODAAC-GHMG2-2PO01" with start times from Dec 1 00:00:00 UTC 2010 to Dec 2 23:59:59 UTC 2010 (RSS)
+      
+         * curl -X GET "http://localhost:8890/ws/search/granule?format=rss&datasetId=PODAAC-GHMG2-2PO01&startTime=2010-12-01T00:00:00Z&endTime=2010-12-02T23:59:59Z"
+
+      * To search for granules in the dataset with persistent ID "PODAAC-GHMG2-2PO01" with start times from Dec 1 00:00:00 UTC 2010 to Dec 2 23:59:59 UTC 2010 (Atom)
+      
+         * curl -X GET "http://localhost:8890/ws/search/granule?format=atom&datasetId=PODAAC-GHMG2-2PO01&startTime=2010-12-01T00:00:00Z&endTime=2010-12-02T23:59:59Z"
+
+      * To search for granules in the dataset with persistent ID "PODAAC-GHAMS-2PE01" that intersect the bounding box 0.0, -45.0, 180.0, 40.0 (RSS)
+      
+         * curl -X GET "http://localhost:8890/ws/search/granule?format=rss&datasetId=PODAAC-GHAMS-2PE01&bbox=0.0,-45.0,180.0,40.0"
+
+      * To search for granules in the dataset with persistent ID "PODAAC-GHAMS-2PE01" that intersect the bounding box 0.0, -45.0, 180.0, 40.0 (Atom)
+      
+         * curl -X GET "http://localhost:8890/ws/search/granule?format=atom&datasetId=PODAAC-GHAMS-2PE01&bbox=0.0,-45.0,180.0,40.0"
+
+
+* ISO 19115 and GCMD DIF for Dataset
+
+   To retrieve an ISO 19115 or GCMD DIF record for a given dataset, access "http://localhost:8890/ws/metadata/dataset?format=[iso|gcmd]&datasetId=[dataset persistent ID]". Note that to get the ISO format, access it with "format=iso", and with "format=gcmd" for the GCMD DIF format.
+
+   * Parameters
+
+      Parameters supported for ISO 19115/GCMD DIF are:
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * allowNone
+
+         Parameter supported for GCMD DIF only. "true" to allow DIF elements with n/a, none or null values. Default is "false".
+
+   * Examples
+
+      * To retrieve the GCMD DIF record for dataset with persistent ID "PODAAC-GHMG2-2PO01"
+
+         * curl -X GET "http://localhost:8890/ws/metadata/dataset?format=gcmd&datasetId=PODAAC-GHMG2-2PO01"
+
+      * To retrieve the GCMD DIF record for dataset with shortname "OSDPD-L2P-MSG02"
+      
+         * curl -X GET "http://localhost:8890/ws/metadata/dataset?format=gcmd&shortName=OSDPD-L2P-MSG02"
+
+      * To retrieve the ISO 19115 record for dataset with persistent ID "PODAAC-GHMG2-2PO01"
+
+         * curl -X GET "http://localhost:8890/ws/metadata/dataset?format=iso&datasetId=PODAAC-GHMG2-2PO01"
+
+      * To retrieve the ISO 19115 record for dataset with shortname "OSDPD-L2P-MSG02"
+      
+         * curl -X GET "http://localhost:8890/ws/metadata/dataset?format=iso&shortName=OSDPD-L2P-MSG02"
+
+
+* ISO 19115 for Granule
+
+   To retrieve an ISO 19115 record for a given granule, access "http://localhost:8890/ws/metadata/granule?format=iso&name=[granule name]"
+
+   * Parameters
+
+      Parameters supported for ISO 19115 are:
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * granuleName
+
+         granule name.
+
+   * Examples
+   
+      * To retrieve the ISO 19115 record for granule "20100309-MODIS_A-JPL-L2P-A2010068015500.L2_LAC_GHRSST_N-v01.nc" belonging to dataset with persistent ID "PODAAC-GHMDA-2PJ01"
+      
+         * curl -X GET "http://localhost:8890/ws/metadata/granule?format=iso&datasetId=PODAAC-GHMDA-2PJ01&granuleName=20100309-MODIS_A-JPL-L2P-A2010068015500.L2_LAC_GHRSST_N-v01.nc"
+
+
+* FGDC for Granule
+
+   To retrieve an FGDC record, access "http://localhost:8890/ws/metadata/granule?format=fgdc" and in addition provide the following parameters:
+
+   * Parameters
+
+      Parameters supported for FGDC are:
+
+      * itemsPerPage
+
+         number of granules to return for FGDC record.
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * granuleName
+
+         granule name.
+
+      * startTime
+
+         lower bound for the granule start time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * endTime
+
+         upper bound for the granule start time in the format YYYY-MM-DDTHH:mm:ssZ. 'Z' is the time offset; 'Z' itself signifies UTC, or an actual offset can be used.
+
+      * sortBy
+
+         determines ordering of response. Possible values are: timeAsc, timeDesc. If sortBy is not specified, default sort order is timeDesc.
+
+   * Examples
+
+      * To retrieve the FGDC record for granules with start times between Feb 1 00:00:00 UTC 2011 and Feb 1 23:59:59 UTC 2011 for the dataset with shortname "JPL-L2P-MODIS_A"
+
+         * curl -X GET "http://localhost:8890/ws/metadata/granule?format=fgdc&shortName=JPL-L2P-MODIS_A&startTime=2011-02-01T00:00:00Z&endTime=2011-02-01T23:59:59Z&itemsPerPage=500"
+
+           Note that the itemsPerPage parameter is set to 500, which means at most 500 granules will be returned in the single FGDC record.
+
+* Datacasting for Granule
+
+   To retrieve a Datacasting feed, access "http://localhost:8890/ws/metadata/granule?format=datacasting" and in addition provide the following parameters:
+
+   * Parameters
+
+      Parameters supported for Datacasting are:
+
+      * datasetId
+
+         dataset persistent ID.
+
+      * shortName
+
+         dataset shortname.
+
+      * pretty
+
+        "true" to enable pretty output for xml. Default is "true".
+
+   * Examples
+
+      * To retrieve the Datacasting feed of granules belonging to dataset with shortname "JPL-L2P-MODIS_A"
+
+         * curl -X GET "http://localhost:8890/ws/metadata/granule?format=datacasting&shortName=JPL-L2P-MODIS_A"
+
+* Passthrough Module
+
+   Once the server is started, try accessing, for example, "http://localhost:8890/passthrough/p.pt?url=http://google.com/" to see if it returns anything.
+
+   * Parameters
+
+      Parameters supported for passthrough are:
+
+      * url
+
+         url to fetch the response from.

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-2.2.1.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-2.2.1.apt b/src/site/apt/release/index-2.2.1.apt
new file mode 100644
index 0000000..4e8a2f3
--- /dev/null
+++ b/src/site/apt/release/index-2.2.1.apt
@@ -0,0 +1,73 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 2.2.1
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 2.2.1
+
+  This document describes release 2.2.1 of the OCSI Program Set. This release is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified and corrected capabilities that comprise this release of the OCSI Program Set. In lieu of listing the capabilities by requirement they have been summarized. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * A framework that makes it easy to implement a web service has been developed. The framework is written in Python and uses Tornado.
+
+   * An OpenSearch implementation using the framework has been developed. The OpenSearch implementation allows users to search for datasets.
+
+* Modified Capabilities
+
+   * None.
+
+* Corrected Capabilities
+
+   * None.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. In lieu of listing the liens by requirement they have been summarized. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has only been tested with these versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.X
+    
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.1
+
+    []
+
+    See the system-level {{{../../cm/release/index-2.2.1.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/site/apt/release/index-3.0.0.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/release/index-3.0.0.apt b/src/site/apt/release/index-3.0.0.apt
new file mode 100644
index 0000000..891b7d4
--- /dev/null
+++ b/src/site/apt/release/index-3.0.0.apt
@@ -0,0 +1,71 @@
+~~ Copyright 2010, by the California Institute of Technology.
+~~ ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+~~
+~~ $Id:$
+
+  ---
+  Release Description 3.0.0 
+  ---
+  Thomas Huang
+  ---
+  
+Release Description 3.0.0
+
+  This document describes release 3.0.0 of the OCSI Program Set. This release is NOT intended as an operational release. The following sections can be found in this document:
+
+  * {{{Capabilities}Capabilities}}
+
+  * {{{Liens}Liens}}
+
+  * {{{System_Requirements}System Requirements}}
+
+
+{Capabilities}
+
+  This section details the new, modified and corrected capabilities that comprise this release of the OCSI Program Set. In lieu of listing the capabilities by requirement they have been summarized. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+* New Capabilities
+
+   * OpenSearch for granules.
+
+* Modified Capabilities
+
+   * None.
+
+* Corrected Capabilities
+
+   * None.
+
+
+{Liens}
+
+  This section details the liens against the capabilities that have yet to be implemented or are partially implemented. In lieu of listing the liens by requirement they have been summarized. For a complete list of the requirements and their support status, see {{{Appendix_A_-_Requirements_Support}Appendix A}}.
+
+  There are no liens for this release.
+
+
+{System Requirements}
+
+  The software contained in this program set was written in Python and will run on any platform with a Python-enabled environment. The software was specifically developed under Python 2.6.x or 2.7.x and has only been tested with these versions.
+
+  * PO.DAAC Standard Computing Environment
+
+  This will eventually be documented on the Wiki and linked to in future releases. For now, the following software and tools should be available in the current environment:
+
+    * Apache Maven 2.2.x
+
+    * Subversion Client 1.4.X
+    
+    * Python 2.6.x or 2.7.x
+
+    * Tornado 1.1
+
+    []
+
+    See the system-level {{{../../cm/release/index-3.0.0.html}release}} document for specific version numbers.
+
+
+{Appendix A - Requirements Support}
+
+  TBD
+



[09/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product/iso/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product/iso/template.xml b/src/main/python/plugins/product/iso/template.xml
new file mode 100644
index 0000000..eb83846
--- /dev/null
+++ b/src/main/python/plugins/product/iso/template.xml
@@ -0,0 +1,726 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<gmd:DS_Series xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+	xsi:schemaLocation="http://www.isotc211.org/2005/gmd http://www.ngdc.noaa.gov/metadata/published/xsd/schema.xsd"
+	xmlns:gmd="http://www.isotc211.org/2005/gmd" xmlns:gco="http://www.isotc211.org/2005/gco"
+	xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:gml="http://www.opengis.net/gml"
+	xmlns:gsr="http://www.isotc211.org/2005/gsr" xmlns:gss="http://www.isotc211.org/2005/gss"
+	xmlns:gts="http://www.isotc211.org/2005/gts" xmlns:gmx="http://www.isotc211.org/2005/gmx"
+	xmlns:gmi="http://www.isotc211.org/2005/gmi">
+	<gmd:composedOf gco:nilReason="inapplicable" />
+	<gmd:seriesMetadata>
+		<gmi:MI_Metadata id="{{ docs[0]['id'] }}">
+			<gmd:fileIdentifier>
+				<gco:CharacterString>{{ docs[0]['product_name'] }}</gco:CharacterString>
+			</gmd:fileIdentifier>                
+			<gmd:language>
+				<gco:CharacterString>eng</gco:CharacterString>
+			</gmd:language>
+			<gmd:characterSet>
+				<gmd:MD_CharacterSetCode
+					codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+					codeListValue="UTF8">UTF8
+				</gmd:MD_CharacterSetCode>
+			</gmd:characterSet>
+			<gmd:hierarchyLevel>
+				<gmd:MD_ScopeCode
+					codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode"
+					codeListValue="series">series
+				</gmd:MD_ScopeCode>
+			</gmd:hierarchyLevel>
+			{% if docs[0].__contains__('product_contact_role_list') and
+				docs[0]['product_contact_role_list'].__contains__('POINT_OF_CONTACT') %}
+				<gmd:contact>
+					{% for i in range(docs[0]['product_contact_role_list'].__len__()) %}
+						{% if docs[0]['product_contact_role_list'][i] == 'POINT_OF_CONTACT' %}
+							<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+								<gmd:individualName>                                            
+									<gco:CharacterString>
+										{% if docs[0]['product_contact_first_name_list'][i] != 'null' %} 
+											{{ docs[0]['product_contact_first_name_list'][i] }}
+										{% endif %}
+										{% if docs[0]['product_contact_middle_name_list'][i] != 'null' %} 
+											{{ docs[0]['product_contact_middle_name_list'][i] }}
+											{% endif %} 
+										{{ docs[0]['product_contact_last_name_list'][i] }}
+									</gco:CharacterString>
+								</gmd:individualName>
+								<gmd:organisationName>
+									<gco:CharacterString>{{ docs[0]['product_contact_provider_short_name_list'][i] }} &gt; {{ docs[0]['product_contact_provider_long_name_list'][i] }}</gco:CharacterString>
+								</gmd:organisationName>
+								<gmd:positionName>
+									<gco:CharacterString>
+									{% if docs[0].__contains__('product_contact_role_list') %} 
+										{{ docs[0]['product_contact_role_list'][i] }}
+									{% endif %} 
+									</gco:CharacterString>
+								</gmd:positionName>
+								<gmd:contactInfo>
+									<gmd:CI_Contact>
+										<gmd:phone>
+											<gmd:CI_Telephone>
+												{% if docs[0]['product_contact_phone_list'][i] != 'null' %}
+													<gmd:voice>
+														<gco:CharacterString>{{ docs[0]['product_contact_phone_list'][i] }}</gco:CharacterString>
+													</gmd:voice>
+												{% else %}
+													<gmd:voice gco:nilReason="missing" />
+												{% endif %}
+												{% if docs[0]['product_contact_fax_list'][i] != 'null' %}
+													<gmd:facsimile>
+														<gco:CharacterString>{{ docs[0]['product_contact_fax_list'][i] }}</gco:CharacterString>
+													</gmd:facsimile>
+												{% else %}
+													<gmd:facsimile gco:nilReason="missing" />
+												{% endif %}
+											</gmd:CI_Telephone>
+										</gmd:phone>
+										<gmd:address>
+											<gmd:CI_Address>
+												{% if docs[0]['product_contact_address_list'][i] != 'null' %}
+													<gmd:electronicMailAddress>
+														<gco:CharacterString>{{ docs[0]['product_contact_address_list'][i] }}</gco:CharacterString>
+													</gmd:electronicMailAddress>
+												{% else %}
+													<gmd:electronicMailAddress gco:nilReason="missing" />
+												{% endif %}
+											</gmd:CI_Address>
+										</gmd:address>
+										<gmd:contactInstructions>
+											<gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+										</gmd:contactInstructions>
+									</gmd:CI_Contact>
+								</gmd:contactInfo>
+								<gmd:role>
+									<gmd:CI_RoleCode
+										codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+										codeListValue="pointOfContact">pointOfContact
+									</gmd:CI_RoleCode>
+								</gmd:role>
+							</gmd:CI_ResponsibleParty>
+						{% endif %}
+					{% endfor %}
+				</gmd:contact>
+			{% else %}
+				<gmd:contact gco:nilReason="missing" />
+			{% endif %}
+			<gmd:dateStamp>
+				<gco:Date>{{ docs[0]['product_meta_history_last_revision_date_string'] }}</gco:Date>
+			</gmd:dateStamp>
+			<gmd:metadataStandardName>
+				<gco:CharacterString>ISO 19115-2 Geographic information — Metadata —
+					Part 2: Extensions for imagery and gridded data
+				</gco:CharacterString>
+			</gmd:metadataStandardName>
+			<gmd:metadataStandardVersion>
+				<gco:CharacterString>ISO 19115-2:2009-02-15</gco:CharacterString>
+			</gmd:metadataStandardVersion>
+			<gmd:locale>
+				<gmd:PT_Locale>
+					<gmd:languageCode>
+						<gmd:LanguageCode
+							codeList="http://www.loc.gov/standards/iso639-2/php/English_list.php"
+							codeListValue="eng">eng</gmd:LanguageCode>
+					</gmd:languageCode>
+					<gmd:country>
+						<gmd:Country codeList="http://www.iso.org/iso/iso_3166-1_list_en.zip"
+							codeListValue="US">US</gmd:Country>
+					</gmd:country>
+					<gmd:characterEncoding>
+						<gmd:MD_CharacterSetCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+							codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+					</gmd:characterEncoding>
+				</gmd:PT_Locale>
+			</gmd:locale>
+			<gmd:metadataExtensionInfo>
+				<gmd:MD_MetadataExtensionInformation>
+					<gmd:extensionOnLineResource>
+						<gmd:CI_OnlineResource>
+							<gmd:linkage>
+								<gmd:URL>http://www.ngdc.noaa.gov/metadata/published/19115/GHRSST/ISO/CoverageExtensions.xml</gmd:URL>
+							</gmd:linkage>
+							<gmd:applicationProfile>
+								<gco:CharacterString>Web Browser</gco:CharacterString>
+							</gmd:applicationProfile>
+							<gmd:description>
+								<gco:CharacterString>A description of extensions developed at NGDC to classify coverages.</gco:CharacterString>
+							</gmd:description>
+							<gmd:function>
+								<gmd:CI_OnLineFunctionCode
+									codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode"
+									codeListValue="information">information
+								</gmd:CI_OnLineFunctionCode>
+							</gmd:function>
+						</gmd:CI_OnlineResource>
+					</gmd:extensionOnLineResource>
+				</gmd:MD_MetadataExtensionInformation>
+			</gmd:metadataExtensionInfo>
+			<gmd:identificationInfo>
+				<gmd:MD_DataIdentification id="seriesIdentification">
+					<gmd:citation>
+						<gmd:CI_Citation>
+							{% if docs[0].__contains__('product_name') %}
+								<gmd:title>
+									<gco:CharacterString>{{ docs[0]['product_name'] }}</gco:CharacterString>
+								</gmd:title> 
+							{% else %} 
+								<gmd:title gco:nilReason="missing" />
+							{% endif %}
+							{% if docs[0].__contains__('product_name') %}
+								<gmd:alternateTitle>
+									<gco:CharacterString>{{ docs[0]['product_name'] }}</gco:CharacterString>
+								</gmd:alternateTitle> 
+							{% else %} 
+								<gmd:alternateTitle gco:nilReason="missing" />
+							{% endif %}
+							<gmd:date>
+								<gmd:CI_Date>
+									<gmd:date>					
+										<gco:Date>{{ docs[0]['product_create_time_string'] }}</gco:Date>
+									</gmd:date>
+									<gmd:dateType>
+										<gmd:CI_DateTypeCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode"
+											codeListValue="creation">creation
+										</gmd:CI_DateTypeCode>
+									</gmd:dateType>
+								</gmd:CI_Date>
+							</gmd:date>
+							{% if docs[0].__contains__('product_revision') %}
+								<gmd:edition>
+									<gco:CharacterString>{{ docs[0]['product_revision'] }}</gco:CharacterString>
+								</gmd:edition> 
+							{% else %} 
+							    <gmd:edition gco:nilReason="missing" />
+							{% endif %}           
+							<gmd:citedResponsibleParty>
+								<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+									<gmd:individualName gco:nilReason="missing" />
+									<gmd:contactInfo gco:nilReason="missing" />
+									<gmd:role>
+										<gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="originator">originator
+										</gmd:CI_RoleCode>
+									</gmd:role>
+								</gmd:CI_ResponsibleParty>
+							</gmd:citedResponsibleParty>        
+							<gmd:citedResponsibleParty>
+								<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+									<gmd:individualName gco:nilReason="missing" />
+									<gmd:contactInfo gco:nilReason="missing" />
+									<gmd:role>
+										<gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="publisher">publisher
+										</gmd:CI_RoleCode>
+									</gmd:role>
+								</gmd:CI_ResponsibleParty>
+							</gmd:citedResponsibleParty>         
+						</gmd:CI_Citation>
+					</gmd:citation>
+					{% if docs[0].__contains__('product_name') %}
+						<gmd:abstract>
+							<gco:CharacterString>{{ docs[0]['product_name'] }}</gco:CharacterString>
+						</gmd:abstract>
+					{% else %}
+						<gmd:abstract gco:nilReason="missing" />
+					{% endif %}
+					<gmd:credit gco:nilReason="missing" />
+					<gmd:status>
+						<gmd:MD_ProgressCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode"
+							codeListValue="onGoing">onGoing
+						</gmd:MD_ProgressCode>
+					</gmd:status>
+					{% if docs[0].__contains__('product_contact_role_list') and docs[0]['product_contact_role_list'].__contains__('POINT_OF_CONTACT') %}
+						<gmd:pointOfContact>
+							{% for i in range(docs[0]['product_contact_role_list'].__len__()) %}
+								{% if docs[0]['product_contact_role_list'][i] == 'POINT_OF_CONTACT' %}
+									<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+										<gmd:individualName>                                            
+											<gco:CharacterString>
+												{% if docs[0]['product_contact_first_name_list'][i] != 'null' %} 
+													{{ docs[0]['product_contact_first_name_list'][i] }}
+												{% endif %}
+												{% if docs[0]['product_contact_middle_name_list'][i] != 'null' %} 
+													{{ docs[0]['product_contact_middle_name_list'][i] }}
+												{% endif %} 
+												{{ docs[0]['product_contact_last_name_list'][i] }}
+											</gco:CharacterString>
+										</gmd:individualName>
+										<gmd:organisationName>
+											<gco:CharacterString>{{ docs[0]['product_contact_provider_short_name_list'][i] }} &gt; {{ docs[0]['product_contact_provider_long_name_list'][i] }}</gco:CharacterString>
+										</gmd:organisationName>     
+										<gmd:contactInfo>
+											<gmd:CI_Contact>
+												<gmd:phone>
+													<gmd:CI_Telephone>
+														{% if docs[0]['product_contact_phone_list'][i] != 'null' %}
+															<gmd:voice>
+																<gco:CharacterString>{{ docs[0]['product_contact_phone_list'][i] }}</gco:CharacterString>
+															</gmd:voice>
+														{% else %}
+															<gmd:voice gco:nilReason="missing" />
+														{% endif %}
+														{% if docs[0]['product_contact_fax_list'][i] != 'null' %}
+															<gmd:facsimile>
+																<gco:CharacterString>{{ docs[0]['product_contact_fax_list'][i] }}</gco:CharacterString>
+															</gmd:facsimile>
+														{% else %}
+															<gmd:facsimile gco:nilReason="missing" />
+														{% endif %}
+													</gmd:CI_Telephone>
+												</gmd:phone>
+												<gmd:address>
+													<gmd:CI_Address>
+														{% if docs[0]['product_contact_address_list'][i] != 'null' %}
+															<gmd:electronicMailAddress>
+																<gco:CharacterString>{{ docs[0]['product_contact_address_list'][i] }}</gco:CharacterString>
+															</gmd:electronicMailAddress>
+														{% else %}
+															<gmd:electronicMailAddress gco:nilReason="missing" />
+														{% endif %}
+													</gmd:CI_Address>
+												</gmd:address>
+												<gmd:contactInstructions>
+													<gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+												</gmd:contactInstructions>
+												{% if docs[0]['product_contact_provider_resource_paths_list'][i] != 'null' %}
+													<gmd:onlineResource>
+														<gmd:CI_OnlineResource>
+															<gmd:linkage>
+																<gmd:URL>{{ docs[0]['product_contact_provider_resource_paths_list'][i] }}</gmd:URL>
+															</gmd:linkage>
+														</gmd:CI_OnlineResource>
+													</gmd:onlineResource>
+												{% else %}
+													<gmd:onlineResource gco:nilReason="missing" />
+												{% endif %}
+											</gmd:CI_Contact>
+										</gmd:contactInfo>
+										<gmd:role>
+										    <gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="pointOfContact">pointOfContact
+										    </gmd:CI_RoleCode>
+										</gmd:role>
+									</gmd:CI_ResponsibleParty>
+								{% endif %}
+							{% endfor %}
+						</gmd:pointOfContact>
+					{% else %}
+						<gmd:pointOfContact gco:nilReason="missing" />
+					{% endif %}
+					{% if docs[0].__contains__('product_reference_name_list') %}    
+						<gmd:resourceFormat>
+							{% for i in range(docs[0]['product_reference_name_list'].__len__()) %}
+								<gmd:MD_Format id="resourceFormat">
+									{% if docs[0].__contains__('product_reference_name_list') %}
+										<gmd:name>
+											<gco:CharacterString>{{ docs[0]['product_reference_name_list'][i] }}</gco:CharacterString>
+										</gmd:name>
+									{% else %}
+										<gmd:name gco:nilReason="missing" />
+									{% endif %}
+									{% if docs[0].__contains__('product_reference_version_list') %}
+										<gmd:version>
+											<gco:CharacterString>{{ docs[0]['product_reference_version_list'][i] }}</gco:CharacterString>
+										</gmd:version>
+									{% else %}
+										<gmd:version gco:nilReason="missing" />
+									{% endif %}
+									<gmd:fileDecompressionTechnique gco:nilReason="missing" />
+								</gmd:MD_Format>
+							{% endfor %}
+						</gmd:resourceFormat>
+					{% else %}
+						<gmd:resourceFormat gco:nilReason="missing" /> 
+					{% endif %}
+					<gmd:descriptiveKeywords>
+						<gmd:MD_Keywords>
+							<gmd:keyword gco:nilReason="missing" />
+							<gmd:type>
+								<gmd:MD_KeywordTypeCode
+									codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+									codeListValue="theme">theme
+								</gmd:MD_KeywordTypeCode>
+							</gmd:type>
+							<gmd:thesaurusName>
+								<gmd:CI_Citation>
+									<gmd:title>
+										<gco:CharacterString>NASA/GCMD Earth Science Keywords
+										</gco:CharacterString>
+									</gmd:title>
+									<gmd:date gco:nilReason="unknown" />
+								</gmd:CI_Citation>
+							</gmd:thesaurusName>
+						</gmd:MD_Keywords>
+					</gmd:descriptiveKeywords>
+					{% if docs[0].__contains__('product_operation_list') %}
+						{% for i in range(docs[0]['product_operation_list'].__len__()) %}
+							<gmd:descriptiveKeywords>
+								<gmd:MD_Keywords>
+									<gmd:keyword>
+										<gco:CharacterString>
+											Operation: {{ docs[0]['product_operation_list'][i] }} &gt; Agent: {{ docs[0]['product_operation_agent_list'][i] }}
+										</gco:CharacterString>
+									</gmd:keyword>
+									<gmd:type>
+										<gmd:MD_KeywordTypeCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+											codeListValue="place">place
+										</gmd:MD_KeywordTypeCode>
+									</gmd:type>
+									<gmd:thesaurusName>
+										<gmd:CI_Citation>
+											<gmd:title>
+												<gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+											</gmd:title>
+											<gmd:date gco:nilReason="unknown" />
+										</gmd:CI_Citation>
+									</gmd:thesaurusName>
+								</gmd:MD_Keywords>
+							</gmd:descriptiveKeywords>
+						{% endfor %}
+					{% else %}
+						<gmd:descriptiveKeywords>
+							<gmd:MD_Keywords>
+								<gmd:keyword gco:nilReason="missing" />
+								<gmd:type>
+									<gmd:MD_KeywordTypeCode
+										codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode"
+										codeListValue="place">place
+									</gmd:MD_KeywordTypeCode>
+								</gmd:type>
+								<gmd:thesaurusName>
+									<gmd:CI_Citation>
+										<gmd:title>
+											<gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+										</gmd:title>
+										<gmd:date gco:nilReason="unknown" />
+									</gmd:CI_Citation>
+								</gmd:thesaurusName>
+							</gmd:MD_Keywords>							
+						</gmd:descriptiveKeywords>		
+					{% endif %}
+					<gmd:resourceConstraints>
+						<gmd:MD_LegalConstraints>
+							<gmd:useLimitation gco:nilReason="missing" />
+							<gmd:otherConstraints gco:nilReason="missing" />
+						</gmd:MD_LegalConstraints>
+					</gmd:resourceConstraints>
+					<gmd:spatialRepresentationType>
+						<gmd:MD_SpatialRepresentationTypeCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_SpatialRepresentationTypeCode"
+							codeListValue="grid">grid
+						</gmd:MD_SpatialRepresentationTypeCode>
+					</gmd:spatialRepresentationType>
+					<gmd:language>
+						<gco:CharacterString>eng</gco:CharacterString>
+					</gmd:language>
+					<gmd:characterSet>
+						<gmd:MD_CharacterSetCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode"
+							codeListValue="UTF8">UTF8
+						</gmd:MD_CharacterSetCode>
+					</gmd:characterSet>
+					<gmd:extent>
+						<gmd:EX_Extent id="boundingExtent">
+							<gmd:geographicElement>
+								<gmd:EX_GeographicBoundingBox id="boundingBox">
+									<gmd:extentTypeCode>
+										<gco:Boolean>true</gco:Boolean>
+									</gmd:extentTypeCode>
+									<gmd:westBoundLongitude gco:nilReason="missing" />
+									<gmd:eastBoundLongitude gco:nilReason="missing" />
+									<gmd:southBoundLatitude gco:nilReason="missing" />
+									<gmd:northBoundLatitude gco:nilReason="missing" />
+								</gmd:EX_GeographicBoundingBox>
+							</gmd:geographicElement>
+							<gmd:geographicElement>
+								<gmd:EX_GeographicDescription>
+									<gmd:extentTypeCode>
+										<gco:Boolean>true</gco:Boolean>
+									</gmd:extentTypeCode>
+									<gmd:geographicIdentifier>
+										<gmd:MD_Identifier>
+											<gmd:code />
+										</gmd:MD_Identifier>
+									</gmd:geographicIdentifier>
+								</gmd:EX_GeographicDescription>
+							</gmd:geographicElement>
+							<gmd:temporalElement>
+								<gmd:EX_TemporalExtent id="temporalExtent">
+									<gmd:extent>
+										<TimePeriod xmlns="http://www.opengis.net/gml/3.2"
+											xmlns:ns1="http://www.opengis.net/gml/3.2" ns1:id="timePeriod">
+											<beginPosition>{{ docs[0]['product_start_time_string'] }}</beginPosition>
+											<endPosition>{{ docs[0]['product_stop_time_string'] }}</endPosition>
+										</TimePeriod>
+									</gmd:extent>
+								</gmd:EX_TemporalExtent>
+							</gmd:temporalElement>
+							<gmd:verticalElement gco:nilReason="inapplicable" />
+						</gmd:EX_Extent>
+					</gmd:extent>
+				</gmd:MD_DataIdentification>
+			</gmd:identificationInfo>
+			<gmd:contentInfo>
+				<gmi:MI_CoverageDescription id="referenceInformation">
+					<gmd:attributeDescription>
+						<gco:RecordType xlink:href="http://www.ghrsst.org/documents.htm?parent=475" />
+					</gmd:attributeDescription>
+					<gmd:contentType>
+						<gmd:MD_CoverageContentTypeCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CoverageContentTypeCode"
+							codeListValue="referenceInformation">referenceInformation
+						</gmd:MD_CoverageContentTypeCode>
+					</gmd:contentType>
+					<gmd:dimension>
+						<gmd:MD_Band>
+							<gmd:sequenceIdentifier>
+								<gco:MemberName>
+									<gco:aName>
+										<gco:CharacterString>lat</gco:CharacterString>
+									</gco:aName>
+									<gco:attributeType>
+										<gco:TypeName>
+											<gco:aName>
+												<gco:CharacterString>float</gco:CharacterString>
+											</gco:aName>
+										</gco:TypeName>
+									</gco:attributeType>
+								</gco:MemberName>
+							</gmd:sequenceIdentifier>
+						</gmd:MD_Band>
+					</gmd:dimension>
+					<gmd:dimension>
+						<gmd:MD_Band>
+							<gmd:sequenceIdentifier>
+								<gco:MemberName>
+									<gco:aName>
+										<gco:CharacterString>lon</gco:CharacterString>
+									</gco:aName>
+									<gco:attributeType>
+										<gco:TypeName>
+											<gco:aName>
+												<gco:CharacterString>float</gco:CharacterString>
+											</gco:aName>
+										</gco:TypeName>
+									</gco:attributeType>
+								</gco:MemberName>
+							</gmd:sequenceIdentifier>
+						</gmd:MD_Band>
+					</gmd:dimension>
+					<gmd:dimension>
+						<gmd:MD_Band>
+							<gmd:sequenceIdentifier>
+								<gco:MemberName>
+									<gco:aName>
+										<gco:CharacterString>time</gco:CharacterString>
+									</gco:aName>
+									<gco:attributeType>
+										<gco:TypeName>
+											<gco:aName>
+												<gco:CharacterString>int</gco:CharacterString>
+											</gco:aName>
+										</gco:TypeName>
+									</gco:attributeType>
+								</gco:MemberName>
+							</gmd:sequenceIdentifier>
+						</gmd:MD_Band>
+					</gmd:dimension>
+				</gmi:MI_CoverageDescription>
+			</gmd:contentInfo>
+			<gmd:distributionInfo>
+				<gmd:MD_Distribution>
+					<gmd:distributionFormat xlink:href="#resourceFormat" />
+					<gmd:distributor>
+						<gmd:MD_Distributor>
+							<gmd:distributorContact>
+								<gmd:CI_ResponsibleParty>
+									<gmd:individualName>
+										<gco:CharacterString>GIBS User Services</gco:CharacterString>
+									</gmd:individualName>
+									<gmd:organisationName>
+										<gco:CharacterString>NASA/JPL/GIBS &gt; Global Imagery Browse Services, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+									</gmd:organisationName>
+									<gmd:contactInfo>
+										<gmd:CI_Contact>
+											<gmd:address>
+												<gmd:CI_Address>
+													<gmd:deliveryPoint>
+														<gco:CharacterString>4800 Oak Grove Drive</gco:CharacterString>
+													</gmd:deliveryPoint>
+													<gmd:city>
+														<gco:CharacterString>Pasadena</gco:CharacterString>
+													</gmd:city>
+													<gmd:administrativeArea>
+														<gco:CharacterString>CA</gco:CharacterString>
+													</gmd:administrativeArea>
+													<gmd:postalCode>
+														<gco:CharacterString>91109-8099</gco:CharacterString>
+													</gmd:postalCode>
+													<gmd:country>
+														<gco:CharacterString>USA</gco:CharacterString>
+													</gmd:country>
+													<gmd:electronicMailAddress>
+														<gco:CharacterString>gibs@gibs.jpl.nasa.gov</gco:CharacterString>
+													</gmd:electronicMailAddress>
+												</gmd:CI_Address>
+											</gmd:address>
+											<gmd:onlineResource>
+												<gmd:CI_OnlineResource>
+													<gmd:linkage>
+														<gmd:URL>http://gibs.jpl.nasa.gov</gmd:URL>
+													</gmd:linkage>
+												</gmd:CI_OnlineResource>
+											</gmd:onlineResource>
+										</gmd:CI_Contact>
+									</gmd:contactInfo>
+									<gmd:role>
+										<gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="distributor">distributor
+										</gmd:CI_RoleCode>
+									</gmd:role>
+								</gmd:CI_ResponsibleParty>
+							</gmd:distributorContact>
+						</gmd:MD_Distributor>
+					</gmd:distributor>
+					{% if docs[0].__contains__('product_reference_path_list') and docs[0].__contains__('product_reference_type_list') %} 
+						{% for i in range(docs[0]['product_reference_path_list'].__len__()) %}
+							{% if docs[0]['product_reference_type_list'][i] != 'Thumbnail' %}
+							<gmd:transferOptions>
+								<gmd:MD_DigitalTransferOptions>
+									<gmd:onLine>
+										<gmd:CI_OnlineResource>
+											{% if docs[0].__contains__('product_reference_path_list') %}
+												<gmd:linkage>		
+													<gmd:URL>{{ docs[0]['product_reference_path_list'][i] }}</gmd:URL>
+												</gmd:linkage>
+											{% else %}
+												<gmd:linkage gco:nilReason="missing" />
+											{% endif %}
+											{% if docs[0].__contains__('product_reference_name_list') %}
+												<gmd:name>		
+													<gco:CharacterString>{{ docs[0]['product_reference_name_list'][i] }}</gco:CharacterString>
+												</gmd:name>
+											{% else %}
+												<gmd:name gco:nilReason="missing" />
+											{% endif %}
+											{% if docs[0].__contains__('product_type_resource_description_list') %}
+												<gmd:description>		
+													<gco:CharacterString>{{ docs[0]['product_type_resource_description_list'][i] }}</gco:CharacterString>
+												</gmd:description>
+											{% else %}
+												<gmd:description gco:nilReason="missing" />
+											{% endif %}
+											<gmd:function>
+												<gmd:CI_OnLineFunctionCode
+													codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode"
+													codeListValue="information">information
+												</gmd:CI_OnLineFunctionCode>
+											</gmd:function>
+										</gmd:CI_OnlineResource>
+									</gmd:onLine>
+								</gmd:MD_DigitalTransferOptions>
+							</gmd:transferOptions>
+							{% endif %}
+						{% endfor %}
+					{% else %} 
+						<gmd:transferOptions gco:nilReason="missing" />
+					{% endif %}
+				</gmd:MD_Distribution>
+			</gmd:distributionInfo>
+			<gmd:metadataMaintenance>
+				<gmd:MD_MaintenanceInformation>
+					<gmd:maintenanceAndUpdateFrequency>
+						<gmd:MD_MaintenanceFrequencyCode
+							codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_MaintenanceFrequencyCode"
+							codeListValue="asNeeded">asNeeded</gmd:MD_MaintenanceFrequencyCode>
+					</gmd:maintenanceAndUpdateFrequency>
+					<gmd:maintenanceNote>
+						<gco:CharacterString>Translated from GCMD DIF
+						</gco:CharacterString>
+					</gmd:maintenanceNote>
+				</gmd:MD_MaintenanceInformation>
+			</gmd:metadataMaintenance>
+			<gmi:acquisitionInformation>
+				<gmi:MI_AcquisitionInformation>
+					<gmi:instrument>
+						<gmi:MI_Instrument>
+							<gmi:identifier gco:nilReason="missing" />
+							<gmi:description gco:nilReason="missing" />
+						</gmi:MI_Instrument>
+					</gmi:instrument>
+					<gmi:platform>
+						<gmi:MI_Platform>
+							<gmi:identifier gco:nilReason="missing" />
+							<gmi:description gco:nilReason="missing" />
+							<gmi:sponsor>
+								<gmd:CI_ResponsibleParty>
+									<gmd:organisationName>
+										<gco:CharacterString>NASA/JPL/GIBS &gt; Global Imagery Browse Services, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+									</gmd:organisationName>             
+									<gmd:contactInfo>
+										<gmd:CI_Contact>
+											<gmd:onlineResource>
+												<gmd:CI_OnlineResource>
+													<gmd:linkage>
+														<gmd:URL>http://gibs.jpl.nasa.gov</gmd:URL>
+													</gmd:linkage>
+													<gmd:function>
+														<gmd:CI_OnLineFunctionCode
+															codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode"
+															codeListValue="information" />
+													</gmd:function>
+												</gmd:CI_OnlineResource>
+											</gmd:onlineResource>
+										</gmd:CI_Contact>
+									</gmd:contactInfo>
+									<gmd:role>
+										<gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="sponsor" />
+									</gmd:role>
+								</gmd:CI_ResponsibleParty>
+							</gmi:sponsor>
+							<gmi:sponsor>
+								<gmd:CI_ResponsibleParty>
+									<gmd:organisationName>
+										<gco:CharacterString>NASA/JPL/PODAAC &gt; Physical
+											Oceanography Distributed Active Archive Center, Jet
+											Propulsion Laboratory, NASA</gco:CharacterString>
+									</gmd:organisationName>
+									<gmd:contactInfo>
+										<gmd:CI_Contact>
+											<gmd:onlineResource>
+												<gmd:CI_OnlineResource>
+													<gmd:linkage>
+														<gmd:URL>http://podaac.jpl.nasa.gov</gmd:URL>
+													</gmd:linkage>
+													<gmd:function>
+														<gmd:CI_OnLineFunctionCode
+															codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode"
+															codeListValue="information" />
+													</gmd:function>
+												</gmd:CI_OnlineResource>
+											</gmd:onlineResource>
+										</gmd:CI_Contact>
+									</gmd:contactInfo>
+									<gmd:role>
+										<gmd:CI_RoleCode
+											codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode"
+											codeListValue="sponsor" />
+									</gmd:role>
+								</gmd:CI_ResponsibleParty>
+							</gmi:sponsor>
+							<gmi:instrument xlink:href="{{ docs[0]['product_name'] }}" />
+						</gmi:MI_Platform>
+					</gmi:platform>
+				</gmi:MI_AcquisitionInformation>
+			</gmi:acquisitionInformation>
+		</gmi:MI_Metadata>
+	</gmd:seriesMetadata>
+</gmd:DS_Series>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/productType/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/productType/__init__.py b/src/main/python/plugins/productType/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/productType/atom/AtomWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/productType/atom/AtomWriter.py b/src/main/python/plugins/productType/atom/AtomWriter.py
new file mode 100644
index 0000000..916b89c
--- /dev/null
+++ b/src/main/python/plugins/productType/atom/AtomWriter.py
@@ -0,0 +1,26 @@
+import logging
+import datetime
+
+from edge.elasticsearch.opensearch.datasetatomresponse import DatasetAtomResponse
+from edge.elasticsearch.datasetwriter import DatasetWriter
+
+class AtomWriter(DatasetWriter):
+    def __init__(self, configFilePath):
+        super(AtomWriter, self).__init__(configFilePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
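+        # Assemble a dataset Atom feed around the backend search response,
+        # using the portal/service endpoints configured in this plugin's plugin.conf.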
+        response = DatasetAtomResponse(
+            self._configuration.get('portal', 'datasetUrl'),
+            self._configuration.get('service', 'host'),
+            self._configuration.get('service', 'url'),
+            self.datasets
+        )
+
+        response.title = 'GIBS Imagery Search Results'
+        response.link = searchUrl
+        response.authors.append('GIBS Imagery Search Service')
+        response.updated = datetime.datetime.utcnow().isoformat()+'Z'
+        response.id = 'tag:'+self._configuration.get('service', 'host')+','+datetime.datetime.utcnow().date().isoformat()
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/productType/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/productType/atom/__init__.py b/src/main/python/plugins/productType/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/productType/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/productType/atom/plugin.conf b/src/main/python/plugins/productType/atom/plugin.conf
new file mode 100644
index 0000000..66e51b6
--- /dev/null
+++ b/src/main/python/plugins/productType/atom/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:9200/gibs/productType
+granuleUrl=http://localhost:9200/gibs/product
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890
+host=localhost:8890

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/__init__.py b/src/main/python/plugins/product_type/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/atom/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/atom/Writer.py b/src/main/python/plugins/product_type/atom/Writer.py
new file mode 100644
index 0000000..b381f57
--- /dev/null
+++ b/src/main/python/plugins/product_type/atom/Writer.py
@@ -0,0 +1,74 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrcmrtemplateresponse import SolrCmrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        logging.debug('product_type:searchParams = [%s]' % searchParams)
+        response = SolrCmrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+        response.variables['serviceUrl'] = self._configuration.get('service', 'url')
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+
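+        # Translate each supported OpenSearch parameter into a Solr q= clause
+        # or an fq= filter clause; unrecognized parameters are ignored.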
+        for key, value in parameters.iteritems():
+            if key == 'keyword':
+                queries.append(urllib.quote(value))
+            elif key == 'layers' and value == 'true':
+                filterQueries.append('-product_type_identifier:*_SRC')
+            elif key == 'layers' and value == 'false':
+                filterQueries.append('product_type_identifier:*_SRC')
+            elif key == 'product_type_identifier':
+                filterQueries.append('product_type_identifier:'+value)
+            elif key == 'startTime':
+                queries.append('product_type_last_updated:['+value+'%20TO%20*]')
+            elif key == 'endTime':
+                queries.append('product_type_last_updated:[*%20TO%20'+value+']')
+            elif key == 'bbox':
+                coordinates = value.split(",")
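+                # bbox arrives as "west,south,east,north"; the Solr spatial range
+                # takes lat,lon corner pairs, i.e. [south,west TO north,east].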
+                filterQueries.append('Spatial-Geometry:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+            elif key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+
+        for key, value in facets.iteritems():
+            if type(value) is list:
+                if (len(value) == 1):
+                    filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value[0]))
+                else:
+                    filterQueries.append(key + ':(' + '+OR+'.join([ self._urlEncodeSolrQueryValue(x) for x in value ]) + ")")
+            else:    
+                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+        
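+        # Facet requests only need counts, so rows=0; regular searches page
+        # through results with the sort order configured in plugin.conf.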
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&facet.mincount=1&'
+            query += '&'.join(['facet.field=' + facet for facet in self._configuration.get('solr', 'facets').split(',')])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+            query += '&sort=' + self._configuration.get('solr', 'sort')
+
+        logging.debug('solr query: '+query)
+
+        return query

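For illustration only (not part of the commit): tracing _constructSolrQuery above
with parameters = {'keyword': 'sst', 'bbox': '-180,-90,180,90'}, no facets,
startIndex=0, and the entriesPerPage/sort values from the plugin.conf that
follows, the generated query string comes out as:

    q=sst&version=2.2&indent=on&wt=json&fq=Spatial-Geometry:[-90,-180%20TO%2090,180]&start=0&rows=10&sort=product_type_title+asc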
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/atom/__init__.py b/src/main/python/plugins/product_type/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/atom/plugin.conf b/src/main/python/plugins/product_type/atom/plugin.conf
new file mode 100644
index 0000000..617e1d4
--- /dev/null
+++ b/src/main/python/plugins/product_type/atom/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/product_type
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,layers,startTime,endTime,bbox,id
+facets={}
+sort=product_type_title+asc
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/atom/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/atom/template.xml b/src/main/python/plugins/product_type/atom/template.xml
new file mode 100644
index 0000000..b02803e
--- /dev/null
+++ b/src/main/python/plugins/product_type/atom/template.xml
@@ -0,0 +1,116 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed esipdiscovery:version="1.2"
+      xmlns="http://www.w3.org/2005/Atom"
+      xmlns:dc="http://purl.org/dc/terms/"
+      xmlns:echo="http://www.echo.nasa.gov/esip"
+      xmlns:gibs="http://gibs.jpl.nasa.gov/esip" 
+      xmlns:esipdiscovery="http://commons.esipfed.org/ns/discovery/1.2/"
+      xmlns:georss="http://www.georss.org/georss/10"
+      xmlns:gml="http://www.opengis.net/gml"
+      xmlns:os="http://a9.com/-/spec/opensearch/1.1/"
+      xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
+   <updated>{{ updated }}</updated>
+   <id>https://api.echo.nasa.gov:443/opensearch/datasets.atom</id>
+   <author>
+      <name>GIBS</name>
+      <email>support@echo.nasa.gov</email>
+   </author>
+   <title type="text">GIBS Product Type Metadata</title>
+   <os:totalResults>{{ numFound }}</os:totalResults>
+   <os:itemsPerPage>{{ itemsPerPage }}</os:itemsPerPage>
+   <os:startIndex>{{ startIndex }}</os:startIndex>
+   <os:Query role="request"
+             xmlns:echo="http://www.echo.nasa.gov/esip"
+             xmlns:geo="http://a9.com/-/opensearch/extensions/geo/1.0/"
+             xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/" />
+   <subtitle type="text">Search parameters: None</subtitle>
+   <link href="{{ myself }}" hreflang="en-US" rel="self" type="application/atom+xml" />
+   {% if last %}<link href="{{ last }}" hreflang="en-US" rel="last" type="application/atom+xml" />{% endif %}
+   {% if prev %}<link href="{{ prev }}" hreflang="en-US" rel="previous" type="application/atom+xml" />{% endif %}
+   {% if next %}<link href="{{ next }}" hreflang="en-US" rel="next" type="application/atom+xml" />{% endif %}
+   {% if first %}<link href="{{ first }}" hreflang="en-US" rel="first" type="application/atom+xml" />{% endif %}
+   <link href="https://wiki.earthdata.nasa.gov/display/echo/Open+Search+API+release+information"
+         hreflang="en-US" rel="describedBy" title="Release Notes" type="text/html" />
+   {% for doc in docs %}
+      <entry>
+         <id>{{ link }}?id={{ doc['product_type_id'] }}</id>
+         <dc:identifier>{{ doc['product_type_identifier'] }}</dc:identifier>
+         <author>
+            <name>GIBS</name>
+            <email>support@echo.nasa.gov</email>
+         </author>
+         <title type="text">{{ doc['product_type_title'] }}</title>
+         {% if doc['product_type_description'] and doc['product_type_description'] != 'null' %}
+            <summary type="text">{{ 'summary: ' + doc['product_type_description'] }}</summary>
+         {% elif doc['product_type_description'] == 'null' %}
+            <summary type="text">{{ 'summary: product_type_description is set to \'null\' in the database for product_type_id:[' + doc['product_type_id'] + ']' }} </summary>
+         {% else %}
+            <summary type="text">{{ 'summary: product_type_description is not in the database for product_type_id:[' + doc['product_type_id'] + ']' }} </summary>
+         {% endif %}
+         <updated>{{ doc['product_type_last_updated_string'] }}</updated>
+         {% for i in range(doc['product_type_dataset_id_list']|count)  %}
+             <gibs:cmr>
+               {% if doc['cmr_dataset_id'] %}<echo:datasetId>{{ doc['cmr_dataset_id'][i] }}</echo:datasetId>
+               {% else %}<echo:datasetId />{% endif %}
+               {% if doc['cmr_summary'] %}<echo:description>{{ doc['cmr_summary'][i] }}</echo:description>
+               {% else %}<echo:description />{% endif %}
+               {% if doc['cmr_short_name'] %}<echo:shortName>{{ doc['cmr_short_name'][i] }}</echo:shortName>
+               {% else %}<echo:shortName />{% endif %}
+               {% if doc['cmr_title'] %}<echo:longName>{{ doc['cmr_title'][i] }}</echo:longName>
+               {% else %}<echo:longName />{% endif %}
+               {% if doc['cmr_version_id'] %}<echo:versionId>{{ doc['cmr_version_id'][i] }}</echo:versionId>
+               {% else %}<echo:versionId />{% endif %}
+               {% if doc['cmr_updated'] %}<echo:lastUpdate>{{ doc['cmr_updated'][i] }}</echo:lastUpdate>
+               {% else %}<echo:lastUpdate />{% endif %}
+               {% if doc['cmr_data_center'] %}<echo:dataCenter>{{ doc['cmr_data_center'][i] }}</echo:dataCenter>
+               {% else %}<echo:dataCenter />{% endif %}
+               {% if doc['cmr_original_format'] %}<echo:originalFormat>{{ doc['cmr_original_format'][i] }}</echo:originalFormat>
+               {% else %}<echo:originalFormat />{% endif %}
+               {% if doc['cmr_orbit_parameters'] %}
+                  {% if doc['cmr_orbit_parameters'][i] != {} %}<echo:orbitParameters>{{ doc['cmr_orbit_parameters'][i] }}</echo:orbitParameters>
+                  {% else %}<echo:orbitParameters />{% endif %}
+               {% else %}<echo:orbitParameters />{% endif %}
+               {% if doc['cmr_score'] %}<echo:score>{{ doc['cmr_score'][i] }}</echo:score>{% else %}<echo:score />{% endif %}
+               {% if doc['cmr_processing_level_id'] %}<echo:processingLevelId>{{ doc['cmr_processing_level_id'][i] }}</echo:processingLevelId>
+               {% else %}<echo:processingLevelId />{% endif %}
+               {% if doc['cmr_coordinate_system'] %}<echo:coordinateSystem>{{ doc['cmr_coordinate_system'][i] }}</echo:coordinateSystem>
+               {% else %}<echo:coordinateSystem />{% endif %}
+               {% if doc['cmr_online_access_flag'] %}<echo:onlineAccessFlag>{{ doc['cmr_online_access_flag'][i] }}</echo:onlineAccessFlag>
+               {% else %}<echo:onlineAccessFlag />{% endif %}
+               {% if doc['cmr_browse_flag'] %}<echo:browseFlag>{{ doc['cmr_browse_flag'][i] }}</echo:browseFlag>
+               {% else %}<echo:browseFlag />{% endif %}
+               {% if doc['cmr_boxes'] %}
+                  {% for box in doc['cmr_boxes'][i] %}
+                     <georss:box>{{ box }}</georss:box>
+                  {% endfor %}
+               {% else %}<georss:box />{% endif %} 
+               {% if doc['cmr_links'] %}
+                  {% for cmr_link in doc['cmr_links'][i] %}
+                     <link href="{{ cmr_link['href'] }}" hreflang="en-US" rel="enclosure" title="{{ cmr_link['title'] }}" />
+                  {% endfor %}
+               {% endif %} 
+             </gibs:cmr>
+         {% endfor %}
+         {% if doc['product_type_metadata_platform'] %}
+            <echo:platform>{{ doc['product_type_metadata_platform'] }}</echo:platform>
+         {% else %}
+            <echo:platform>{{ 'platform: product_type_metadata_platform is not in the database' }}</echo:platform>
+         {% endif %}
+         {% if doc['product_type_metadata_instrument'] %}
+            <echo:instrument>{{ doc['product_type_metadata_instrument'] }}</echo:instrument>
+         {% else %}
+            <echo:instrument>{{ 'instrument: product_type_metadata_instrument is not in the database' }}</echo:instrument>
+         {% endif %}
+         {% if doc['product_type_policy_data_format'] %}
+            <echo:dataFormat>{{ doc['product_type_policy_data_format'] }}</echo:dataFormat>
+         {% else %}
+            <echo:dataFormat>{{ 'dataFormat: product_type_policy_data_format is not in the database' }}</echo:dataFormat>
+         {% endif %}
+         <link href="{{ serviceUrl }}/ws/metadata/product_type?id={{ doc['id'] }}&amp;pretty=true"
+               hreflang="en-US" rel="alternate" title="Product Type metadata" type="application/xml" />
+         <link href="{{ serviceUrl }}/ws/search/product?product_pt_id={{ doc['id'] }}&amp;pretty=true"
+               hreflang="en-US" rel="enclosure" title="Product Search" type="application/xml" pretty="true" />
+         <dc:date>{{ doc['product_type_last_updated_string'] }}</dc:date>
+      </entry>
+   {% endfor %}
+</feed>

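The feed above is a Jinja2 template. A minimal standalone rendering sketch (not
the actual response pipeline, which goes through SolrCmrTemplateResponse; the
docs entry below is a hypothetical Solr document):

    # render the Atom feed template with one made-up document
    from jinja2 import Template

    feed = Template(open('template.xml').read()).render(
        updated='2017-10-27T00:00:00Z', numFound=1, itemsPerPage=10, startIndex=0,
        myself='http://localhost:8890/ws/search/product_type',
        link='http://localhost:8890/ws/search/product_type',
        serviceUrl='http://localhost:8890',
        docs=[{'id': '1', 'product_type_id': '1',
               'product_type_identifier': 'EXAMPLE_SRC',
               'product_type_title': 'Example product type',
               'product_type_description': 'An example description',
               'product_type_last_updated_string': '2017-10-27T00:00:00Z',
               'product_type_dataset_id_list': []}])
    print feed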
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/iso/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/iso/Writer.py b/src/main/python/plugins/product_type/iso/Writer.py
new file mode 100644
index 0000000..84295cf
--- /dev/null
+++ b/src/main/python/plugins/product_type/iso/Writer.py
@@ -0,0 +1,35 @@
+import logging
+import os
+import os.path
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
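+        # ISO metadata records are looked up by exact id or by product_type_title.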
+        for key, value in parameters.iteritems():
+            if key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'title':
+                queries.append('product_type_title:' + self._urlEncodeSolrQueryValue(value))
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/iso/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/iso/__init__.py b/src/main/python/plugins/product_type/iso/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/product_type/iso/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/product_type/iso/plugin.conf b/src/main/python/plugins/product_type/iso/plugin.conf
new file mode 100644
index 0000000..fbe4c31
--- /dev/null
+++ b/src/main/python/plugins/product_type/iso/plugin.conf
@@ -0,0 +1,8 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/product_type
+entriesPerPage=1
+parameters=id,title
+
+[service]
+url=http://localhost:8890
+template=template.xml
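
The Atom template earlier in this commit links each entry to this ISO writer at
{{ serviceUrl }}/ws/metadata/product_type?id=...&pretty=true, so a single
rendered ISO record can be fetched with, for example (the id value is
hypothetical):

    import urllib2
    print urllib2.urlopen('http://localhost:8890/ws/metadata/product_type?id=1&pretty=true').read()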


[15/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
SDAP-1 Import all code under the SDAP SGA


Project: http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/commit/53351bf3
Tree: http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/tree/53351bf3
Diff: http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/diff/53351bf3

Branch: refs/heads/master
Commit: 53351bf3a48e9f844c21e175ed13ae1b22dd69f6
Parents: 
Author: Lewis John McGibbney <le...@gmail.com>
Authored: Fri Oct 27 15:41:26 2017 -0700
Committer: Lewis John McGibbney <le...@gmail.com>
Committed: Fri Oct 27 15:41:26 2017 -0700

----------------------------------------------------------------------
 LICENSE                                         |  201 +++
 README.md                                       |   59 +
 pom.xml                                         |  103 ++
 .../database/src/create_imagery_provider.sql    |  195 +++
 .../database/src/create_product_type_view.sql   |  739 ++++++++
 src/main/database/src/create_product_view.sql   |  635 +++++++
 src/main/python/config.conf                     |    3 +
 src/main/python/edge-env.bash                   |    7 +
 src/main/python/edge-env.csh                    |    7 +
 src/main/python/libraries/edge/__init__.py      |    0
 src/main/python/libraries/edge/dateutility.py   |   57 +
 .../libraries/edge/elasticsearch/__init__.py    |    0
 .../edge/elasticsearch/datasetwriter.py         |  192 +++
 .../edge/elasticsearch/granulewriter.py         |  142 ++
 .../edge/elasticsearch/opensearch/__init__.py   |    0
 .../opensearch/atomresponsebyelasticsearch.py   |   87 +
 .../opensearch/datasetatomresponse.py           |   79 +
 .../opensearch/granuleatomresponse.py           |   78 +
 src/main/python/libraries/edge/httputility.py   |   13 +
 .../libraries/edge/opensearch/__init__.py       |    0
 .../libraries/edge/opensearch/atomresponse.py   |  145 ++
 .../edge/opensearch/atomresponsebysolr.py       |  134 ++
 .../opensearch/datacastingresponsebysolr.py     |   71 +
 .../edge/opensearch/datasetatomresponse.py      |   85 +
 .../edge/opensearch/datasetgcmdresponse.py      |   11 +
 .../edge/opensearch/datasetgranulewriter.py     |  233 +++
 .../edge/opensearch/datasetisoresponse.py       |   11 +
 .../edge/opensearch/datasetresponsebysolr.py    |   14 +
 .../edge/opensearch/datasetrssresponse.py       |   85 +
 .../libraries/edge/opensearch/datasetwriter.py  |  192 +++
 .../libraries/edge/opensearch/fgdcresponse.py   |   56 +
 .../edge/opensearch/fgdcresponsebysolr.py       |  141 ++
 .../edge/opensearch/gcmdresponsebysolr.py       |  123 ++
 .../edge/opensearch/granuleatomresponse.py      |  110 ++
 .../opensearch/granuledatacastingresponse.py    |   41 +
 .../edge/opensearch/granulefgdcresponse.py      |   13 +
 .../edge/opensearch/granuleisoresponse.py       |   33 +
 .../edge/opensearch/granuleresponsebysolr.py    |   37 +
 .../edge/opensearch/granulerssresponse.py       |   96 ++
 .../libraries/edge/opensearch/granulewriter.py  |  251 +++
 .../libraries/edge/opensearch/isoresponse.py    |   38 +
 .../edge/opensearch/isoresponsebysolr.py        |  121 ++
 .../libraries/edge/opensearch/response.py       |   12 +
 .../libraries/edge/opensearch/responsebysolr.py |   67 +
 .../libraries/edge/opensearch/responsewriter.py |  142 ++
 .../libraries/edge/opensearch/rssresponse.py    |  126 ++
 .../edge/opensearch/rssresponsebysolr.py        |  134 ++
 .../edge/opensearch/solrcmrtemplateresponse.py  |  243 +++
 .../edge/opensearch/solrtemplateresponse.py     |   65 +
 .../edge/opensearch/templateresponse.py         |   33 +
 .../python/libraries/edge/response/__init__.py  |    0
 .../edge/response/estemplateresponse.py         |   54 +
 .../edge/response/jsontemplateresponse.py       |   33 +
 .../edge/response/solrfacettemplateresponse.py  |   23 +
 .../edge/response/solrjsontemplateresponse.py   |   60 +
 src/main/python/libraries/edge/spatialsearch.py |   66 +
 .../python/libraries/edge/writer/__init__.py    |    0
 .../edge/writer/estemplateresponsewriter.py     |  116 ++
 .../libraries/edge/writer/genericproxywriter.py |   14 +
 .../python/libraries/edge/writer/proxywriter.py |   32 +
 .../edge/writer/solrtemplateresponsewriter.py   |  115 ++
 .../edge/writer/templateresponsewriter.py       |   40 +
 src/main/python/logging.conf                    |   28 +
 src/main/python/pluginhandler.py                |   58 +
 src/main/python/plugins/TestPlugin.py           |    5 +
 src/main/python/plugins/__init__.py             |    0
 src/main/python/plugins/dataset/__init__.py     |    0
 .../python/plugins/dataset/atom/AtomWriter.py   |   26 +
 .../python/plugins/dataset/atom/__init__.py     |    0
 .../python/plugins/dataset/atom/plugin.conf     |   11 +
 .../python/plugins/dataset/gcmd/DifWriter.py    |   32 +
 .../python/plugins/dataset/gcmd/__init__.py     |    0
 .../plugins/dataset/gcmd/dif_template.xml       |  216 +++
 .../python/plugins/dataset/gcmd/plugin.conf     |   32 +
 .../python/plugins/dataset/iso/IsoWriter.py     |   28 +
 src/main/python/plugins/dataset/iso/__init__.py |    0
 .../python/plugins/dataset/iso/iso_template.xml |  587 +++++++
 src/main/python/plugins/dataset/iso/plugin.conf |   11 +
 .../python/plugins/dataset/rss/RssWriter.py     |   18 +
 src/main/python/plugins/dataset/rss/__init__.py |    0
 src/main/python/plugins/dataset/rss/plugin.conf |   10 +
 src/main/python/plugins/example/__init__.py     |    0
 .../python/plugins/example/elastic/Writer.py    |   45 +
 .../python/plugins/example/elastic/__init__.py  |    0
 .../python/plugins/example/elastic/plugin.conf  |   12 +
 .../python/plugins/example/elastic/template.xml |   35 +
 .../python/plugins/example/json/JsonWriter.py   |    6 +
 .../python/plugins/example/json/__init__.py     |    0
 src/main/python/plugins/granule/__init__.py     |    0
 .../python/plugins/granule/atom/AtomWriter.py   |   27 +
 .../python/plugins/granule/atom/__init__.py     |    0
 .../python/plugins/granule/atom/plugin.conf     |   12 +
 .../granule/datacasting/DatacastingWriter.py    |   39 +
 .../plugins/granule/datacasting/__init__.py     |    0
 .../datacasting/datacasting_template.xml        |   58 +
 .../plugins/granule/datacasting/plugin.conf     |   13 +
 .../python/plugins/granule/fgdc/FgdcWriter.py   |   21 +
 .../python/plugins/granule/fgdc/__init__.py     |    0
 .../plugins/granule/fgdc/fgdc_template.xml      |  510 ++++++
 .../python/plugins/granule/fgdc/plugin.conf     |   10 +
 .../python/plugins/granule/iso/IsoWriter.py     |   23 +
 src/main/python/plugins/granule/iso/__init__.py |    0
 .../python/plugins/granule/iso/iso_template.xml |  674 ++++++++
 src/main/python/plugins/granule/iso/plugin.conf |   10 +
 .../python/plugins/granule/rss/RssWriter.py     |   21 +
 src/main/python/plugins/granule/rss/__init__.py |    0
 src/main/python/plugins/granule/rss/plugin.conf |   12 +
 src/main/python/plugins/heartbeat/__init__.py   |    0
 .../python/plugins/heartbeat/json/Writer.py     |   30 +
 .../python/plugins/heartbeat/json/__init__.py   |    0
 .../python/plugins/heartbeat/json/plugin.conf   |    2 +
 src/main/python/plugins/icoads/__init__.py      |    0
 src/main/python/plugins/icoads/json/Writer.py   |   89 +
 src/main/python/plugins/icoads/json/__init__.py |    0
 src/main/python/plugins/icoads/json/plugin.conf |   11 +
 .../python/plugins/icoads/json/template.json    |   64 +
 src/main/python/plugins/nexus/__init__.py       |    0
 .../python/plugins/nexus/climatology/Writer.py  |    8 +
 .../plugins/nexus/climatology/__init__.py       |    0
 .../plugins/nexus/climatology/plugin.conf       |    2 +
 src/main/python/plugins/nexus/solr/Writer.py    |    8 +
 src/main/python/plugins/nexus/solr/__init__.py  |    0
 src/main/python/plugins/nexus/solr/plugin.conf  |    2 +
 .../python/plugins/nexus/subsetter/Writer.py    |    8 +
 .../python/plugins/nexus/subsetter/__init__.py  |    0
 .../python/plugins/nexus/subsetter/plugin.conf  |    2 +
 .../python/plugins/oceanxtremes/__init__.py     |    0
 .../plugins/oceanxtremes/datacasting/Writer.py  |   62 +
 .../oceanxtremes/datacasting/__init__.py        |    0
 .../oceanxtremes/datacasting/plugin.conf        |   12 +
 .../oceanxtremes/datacasting/template.xml       |   43 +
 .../python/plugins/oceanxtremes/post/Writer.py  |   44 +
 .../plugins/oceanxtremes/post/__init__.py       |    0
 .../plugins/oceanxtremes/post/plugin.conf       |    3 +
 src/main/python/plugins/oiip/__init__.py        |    0
 src/main/python/plugins/oiip/json/Writer.py     |   46 +
 src/main/python/plugins/oiip/json/__init__.py   |    0
 src/main/python/plugins/oiip/json/plugin.conf   |   11 +
 src/main/python/plugins/oiip/json/template.json |   22 +
 src/main/python/plugins/oiip/xml/Writer.py      |   44 +
 src/main/python/plugins/oiip/xml/__init__.py    |    0
 src/main/python/plugins/oiip/xml/plugin.conf    |   11 +
 src/main/python/plugins/oiip/xml/template.xml   |   14 +
 src/main/python/plugins/passthrough/__init__.py |    0
 .../plugins/passthrough/pt/PassThroughWriter.py |  105 ++
 .../python/plugins/passthrough/pt/__init__.py   |    0
 .../python/plugins/passthrough/pt/plugin.conf   |    2 +
 src/main/python/plugins/product/__init__.py     |    0
 .../python/plugins/product/atom/AtomWriter.py   |   27 +
 src/main/python/plugins/product/atom/Writer.py  |   71 +
 .../python/plugins/product/atom/__init__.py     |    0
 .../python/plugins/product/atom/plugin.conf     |   12 +
 .../python/plugins/product/atom/template.xml    |   85 +
 src/main/python/plugins/product/iso/Writer.py   |   38 +
 src/main/python/plugins/product/iso/__init__.py |    0
 src/main/python/plugins/product/iso/plugin.conf |    8 +
 .../python/plugins/product/iso/template.xml     |  726 ++++++++
 src/main/python/plugins/productType/__init__.py |    0
 .../plugins/productType/atom/AtomWriter.py      |   26 +
 .../python/plugins/productType/atom/__init__.py |    0
 .../python/plugins/productType/atom/plugin.conf |   11 +
 .../python/plugins/product_type/__init__.py     |    0
 .../python/plugins/product_type/atom/Writer.py  |   74 +
 .../plugins/product_type/atom/__init__.py       |    0
 .../plugins/product_type/atom/plugin.conf       |   12 +
 .../plugins/product_type/atom/template.xml      |  116 ++
 .../python/plugins/product_type/iso/Writer.py   |   35 +
 .../python/plugins/product_type/iso/__init__.py |    0
 .../python/plugins/product_type/iso/plugin.conf |    8 +
 .../plugins/product_type/iso/template.xml       |  914 ++++++++++
 src/main/python/plugins/samos/__init__.py       |    0
 src/main/python/plugins/samos/json/Writer.py    |   89 +
 src/main/python/plugins/samos/json/__init__.py  |    0
 src/main/python/plugins/samos/json/plugin.conf  |   11 +
 .../python/plugins/samos/json/template.json     |   63 +
 src/main/python/plugins/slcp/__init__.py        |    0
 src/main/python/plugins/slcp/atom/Writer.py     |   86 +
 src/main/python/plugins/slcp/atom/__init__.py   |    0
 src/main/python/plugins/slcp/atom/plugin.conf   |   12 +
 src/main/python/plugins/slcp/atom/template.xml  |  148 ++
 src/main/python/plugins/slcp/basin/Writer.py    |   35 +
 src/main/python/plugins/slcp/basin/__init__.py  |    0
 src/main/python/plugins/slcp/basin/plugin.conf  |   11 +
 .../python/plugins/slcp/basin/template.json     |   68 +
 src/main/python/plugins/slcp/content/Writer.py  |   76 +
 .../python/plugins/slcp/content/__init__.py     |    0
 .../python/plugins/slcp/content/plugin.conf     |   11 +
 .../python/plugins/slcp/content/template.xml    |  158 ++
 src/main/python/plugins/slcp/dat/Writer.py      |   53 +
 src/main/python/plugins/slcp/dat/__init__.py    |    0
 src/main/python/plugins/slcp/dat/plugin.conf    |    9 +
 src/main/python/plugins/slcp/dat/template.json  |   33 +
 src/main/python/plugins/slcp/echo10/Writer.py   |   39 +
 src/main/python/plugins/slcp/echo10/__init__.py |    0
 src/main/python/plugins/slcp/echo10/plugin.conf |    8 +
 .../python/plugins/slcp/echo10/template.xml     |  190 ++
 src/main/python/plugins/slcp/facet/Writer.py    |   70 +
 src/main/python/plugins/slcp/facet/__init__.py  |    0
 src/main/python/plugins/slcp/facet/plugin.conf  |   11 +
 src/main/python/plugins/slcp/facet/template.xml |   41 +
 src/main/python/plugins/slcp/granule/Writer.py  |   79 +
 .../python/plugins/slcp/granule/__init__.py     |    0
 .../python/plugins/slcp/granule/plugin.conf     |   12 +
 .../python/plugins/slcp/granule/template.xml    |   49 +
 .../python/plugins/slcp/indicator/Writer.py     |   64 +
 .../python/plugins/slcp/indicator/__init__.py   |    0
 .../python/plugins/slcp/indicator/plugin.conf   |    4 +
 src/main/python/plugins/slcp/stats/Writer.py    |   57 +
 src/main/python/plugins/slcp/stats/__init__.py  |    0
 src/main/python/plugins/slcp/stats/plugin.conf  |   11 +
 .../python/plugins/slcp/stats/template.json     |   40 +
 src/main/python/plugins/slcp/suggest/Writer.py  |   22 +
 .../python/plugins/slcp/suggest/__init__.py     |    0
 .../python/plugins/slcp/suggest/plugin.conf     |    2 +
 src/main/python/plugins/slcp/umm-json/Writer.py |   39 +
 .../python/plugins/slcp/umm-json/__init__.py    |    0
 .../python/plugins/slcp/umm-json/plugin.conf    |    8 +
 .../python/plugins/slcp/umm-json/template.json  |  274 +++
 src/main/python/plugins/spurs/__init__.py       |    0
 src/main/python/plugins/spurs/json/Writer.py    |   76 +
 src/main/python/plugins/spurs/json/__init__.py  |    0
 src/main/python/plugins/spurs/json/plugin.conf  |   12 +
 .../python/plugins/spurs/json/template.json     |   63 +
 src/main/python/plugins/spurs2/__init__.py      |    0
 src/main/python/plugins/spurs2/json/Writer.py   |   76 +
 src/main/python/plugins/spurs2/json/__init__.py |    0
 src/main/python/plugins/spurs2/json/plugin.conf |   12 +
 .../python/plugins/spurs2/json/template.json    |   64 +
 src/main/python/plugins/tie/__init__.py         |    0
 .../python/plugins/tie/collection/Writer.py     |   54 +
 .../python/plugins/tie/collection/__init__.py   |    0
 .../python/plugins/tie/collection/plugin.conf   |    2 +
 src/main/python/requestresponder.py             |   24 +
 src/main/python/requirements.txt                |    3 +
 src/main/python/server.py                       |  102 ++
 .../python/templates/podaac-dataset-osd.xml     |   16 +
 .../python/templates/podaac-granule-osd.xml     |   16 +
 src/main/solr/product/conf/data-config.xml      |  106 ++
 .../solr/product/conf/dataimport.properties     |    3 +
 src/main/solr/product/conf/schema.xml           | 1201 +++++++++++++
 src/main/solr/product/conf/solrconfig.xml       | 1626 +++++++++++++++++
 src/main/solr/product_type/conf/data-config.xml |  124 ++
 .../product_type/conf/dataimport.properties     |    3 +
 src/main/solr/product_type/conf/schema.xml      | 1262 ++++++++++++++
 src/main/solr/product_type/conf/solrconfig.xml  | 1627 ++++++++++++++++++
 src/site/apt/index.apt                          |   14 +
 src/site/apt/install/index.apt                  |  144 ++
 src/site/apt/operate/index.apt                  |  344 ++++
 src/site/apt/release/index-2.2.1.apt            |   73 +
 src/site/apt/release/index-3.0.0.apt            |   71 +
 src/site/apt/release/index-3.1.0.apt            |   81 +
 src/site/apt/release/index-3.1.1.apt            |   89 +
 src/site/apt/release/index-3.2.0.apt            |   79 +
 src/site/apt/release/index-3.2.1.apt            |   79 +
 src/site/apt/release/index-3.2.2.apt            |   81 +
 src/site/apt/release/index-3.3.0.apt            |   81 +
 src/site/apt/release/index-4.0.0.apt            |   79 +
 src/site/apt/release/index-4.1.0.apt            |   79 +
 src/site/apt/release/index-4.2.0.apt            |   79 +
 src/site/apt/release/index-4.3.0.apt            |   77 +
 src/site/apt/release/index-4.4.0.apt            |   78 +
 src/site/apt/release/index-4.4.1.apt            |   79 +
 src/site/apt/release/index.apt                  |  101 ++
 src/site/resources/images/podaac_logo.jpg       |  Bin 0 -> 5151 bytes
 src/site/site.xml                               |   36 +
 265 files changed, 21510 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/LICENSE
----------------------------------------------------------------------
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..8dada3e
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..8be6037
--- /dev/null
+++ b/README.md
@@ -0,0 +1,59 @@
+# The Extensible Data Gateway Environment (EDGE)
+The Extensible Data Gateway Environment (EDGE) is a data integration platform designed to facilitate high-performance geospatial data discovery and access, with the ability to support multiple metadata standard specifications. EDGE is designed around two main building blocks: a data aggregation service and an enterprise geospatial indexed search cluster. The data aggregation service provides web service interfaces for search, metadata packaging, and data access. Aggregation often involves retrieving data from two or more sources and packaging the resulting sets into a single response to the requestor. The service can also act as a proxy to other local or remote services, reducing the number of interfaces a requestor has to access. The enterprise geospatial indexed search cluster, which currently supports Apache Solr (http://lucene.apache.org/solr/) and ElasticSearch (http://elasticsearch.org), is a horizontally scalable cluster for faceted search with geospatial support.
+
+# Setup
+
+1. Set up and activate a conda environment
+
+    ````
+    conda create --name edge python
+    source activate edge
+    ````
+
+2. Install dependencies
+
+    ````
+    cd edge/src/main/python
+    pip install -r requirements.txt
+    ````
+
+3. Update pythonpath
+
+    ````
+    source edge-env.bash
+    ````
+
+4. Launch EDGE service
+
+    ````
+    python server.py
+    ````
+
+# Adding a Custom Plugin
+
+You will need to customize EDGE to work with your existing Apache Solr or ElasticSearch metadata endpoints.
+
+## ElasticSearch
+
+For an ElasticSearch example plugin, see [plugins/example/elastic](src/main/python/plugins/example/elastic)
+
+1. Copy the plugins/example/elastic plugin into a new directory, for example, plugins/myproject/elastic.
+
+2. Update the datasetUrl setting in [plugin.conf](src/main/python/plugins/example/elastic/plugin.conf) to point to an ElasticSearch index endpoint.
+
+3. Update [template.xml](src/main/python/plugins/example/elastic/template.xml) to modify the response XML. Metadata values for each returned document are available in the doc dictionary, for example, doc['ShortName'] (a minimal template sketch appears after this list).
+
+    To handle additional search parameters, extend the parameters setting in [plugin.conf](src/main/python/plugins/example/elastic/plugin.conf), for example,
+
+    ````
+    parameters=keyword,bbox,startTime,endTime
+    ````
+
+    Update [Writer.py](src/main/python/plugins/example/elastic/Writer.py) to handle these additional parameters by modifying the resulting query sent to the ElasticSearch endpoint (see the bbox sketch after this list).
+
+4. Update [server.py](src/main/python/server.py) to add a new endpoint that will invoke the newly created plugin, for example,
+
+    ````
+    (r"/myplugin/es", GenericHandler, dict(pluginName='myplugin', format=['elastic'])),
+    ````
+
+5. Restart EDGE and access the new endpoint at [http://localhost:8890/myplugin/es](http://localhost:8890/myplugin/es).
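+
+A minimal sketch of the template lookup from step 3; the element and field
+names here are hypothetical, and the real markup to copy from is
+[template.xml](src/main/python/plugins/example/elastic/template.xml):
+
+````
+<title>{{ doc['ShortName'] }}</title>
+````
+
+Likewise, a hypothetical sketch of the bbox handling from step 3. The actual
+class and method names are defined in
+[Writer.py](src/main/python/plugins/example/elastic/Writer.py) and may differ;
+the geo field name 'Coverage' below is a placeholder:
+
+````
+def append_bbox_filter(query, bbox):
+    # bbox is assumed to arrive as a "west,south,east,north" string
+    west, south, east, north = [float(v) for v in bbox.split(',')]
+    # attach a standard ElasticSearch geo_bounding_box filter to the query dict
+    query.setdefault('filter', {})['geo_bounding_box'] = {
+        'Coverage': {
+            'top_left':     {'lat': north, 'lon': west},
+            'bottom_right': {'lat': south, 'lon': east},
+        }
+    }
+    return query
+````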

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
new file mode 100644
index 0000000..5fcf32b
--- /dev/null
+++ b/pom.xml
@@ -0,0 +1,103 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!-- 
+   Copyright 2009, by the California Institute of Technology.
+   ALL RIGHTS RESERVED. United States Government Sponsorship acknowledged.
+   
+   View Parent POM
+   
+   @author Thomas Huang {Thomas.Huang@jpl.nasa.gov}
+   @version $Id: $
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+   http://maven.apache.org/xsd/maven-4.0.0.xsd">
+   <modelVersion>4.0.0</modelVersion>
+   <parent>
+      <groupId>gov.nasa.podaac</groupId>
+      <artifactId>podaac</artifactId>
+      <version>0.0.0</version>
+   </parent>
+
+   <groupId>gov.nasa.podaac</groupId>
+   <artifactId>ocsi</artifactId>
+   <version>4.4.1</version>
+   <packaging>pom</packaging>
+
+   <name>OCSI Program Set</name>
+   <description> The OCSI Program Set captures presentation layer
+      implementations.</description>
+   <url>http://podaac-cm.jpl.nasa.gov/docs/ocsi/</url>
+
+   <modules>
+      <!--
+        <module>ghrsst-web</module>
+        <module>portal</module>
+        <module>vodc</module>
+        -->
+   </modules>
+
+   <repositories>
+      <repository>
+         <id>podaac-repo</id>
+         <name>PO.DAAC Repository</name>
+         <url>http://podaac-cm.jpl.nasa.gov/maven2</url>
+         <releases>
+            <updatePolicy>always</updatePolicy>
+            <checksumPolicy>warn</checksumPolicy>
+         </releases>
+         <snapshots>
+            <enabled>false</enabled>
+         </snapshots>
+      </repository>
+   </repositories>
+
+   <build>
+      <plugins>
+         <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-site-plugin</artifactId>
+            <version>2.1.1</version>
+            <dependencies>
+               <dependency>
+                  <groupId>commons-httpclient</groupId>
+                  <artifactId>commons-httpclient</artifactId>
+                  <version>3.1</version>
+                  <exclusions>
+                     <exclusion>
+                        <groupId>commons-logging</groupId>
+                        <artifactId>commons-logging</artifactId>
+                     </exclusion>
+                  </exclusions>
+               </dependency>
+            </dependencies>
+         </plugin>
+      </plugins>
+   </build>
+
+   <reporting>
+      <plugins>
+         <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-project-info-reports-plugin</artifactId>
+            <reportSets>
+               <reportSet>
+                  <reports>
+                     <report>dependencies</report>
+                     <report>summary</report>
+                  </reports>
+               </reportSet>
+            </reportSets>
+         </plugin>
+         <plugin>
+            <artifactId>maven-javadoc-plugin</artifactId>
+         </plugin>
+         <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-report-plugin</artifactId>
+         </plugin>
+      </plugins>
+   </reporting>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/database/src/create_imagery_provider.sql
----------------------------------------------------------------------
diff --git a/src/main/database/src/create_imagery_provider.sql b/src/main/database/src/create_imagery_provider.sql
new file mode 100644
index 0000000..97a2b06
--- /dev/null
+++ b/src/main/database/src/create_imagery_provider.sql
@@ -0,0 +1,195 @@
+--*********************************************************************************************
+--**  Imagery Provider Model 
+--**
+--**  The imagery provider model comprises the following views:
+--**
+--**     Imagery Provider Model
+--**        - provider_view
+--**             - provider_resource_view (intermediate view)
+--**             - provider_contact_view (intermediate view)
+--**        - contact_provider_view (used for product)
+--**        - dataset_provider_view
+--**
+--*********************************************************************************************
+
+--*********************************************************************************************
+-- Imagery Provider Model 
+--*********************************************************************************************
+
+--------------------------------------------------
+-- provider_resource_view
+--------------------------------------------------
+DROP VIEW IF EXISTS provider_resource_view CASCADE;
+CREATE VIEW provider_resource_view AS
+SELECT
+   
+   -- provider
+   provider.id         as provider_id,  
+
+   -- provider_resource
+   string_agg(provider_resource.version::int8::text, 
+              ',' order by provider_resource.id) as provider_resource_version_list,
+   string_agg(provider_resource.description, 
+              ',' order by provider_resource.id) as provider_resource_description_list,
+   string_agg(provider_resource.name,        
+              ',' order by provider_resource.id) as provider_resource_name_list,
+   string_agg(provider_resource.path,        
+              ',' order by provider_resource.id) as provider_resource_path_list,
+   string_agg(provider_resource.type,        
+              ',' order by provider_resource.id) as provider_resource_type_list
+
+FROM provider 
+LEFT JOIN provider_resource ON provider_resource.provider_id = provider.id
+GROUP BY provider.id;
+SELECT COUNT(*) AS provider_resource_view_count FROM provider_resource_view;
+SELECT * FROM provider_resource_view LIMIT 5;
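+
+-- With hypothetical data, the string_agg calls above collapse each provider's
+-- resources into a single row of comma-joined lists, e.g. for a provider with
+-- two resources:
+--
+--    provider_id | provider_resource_name_list | provider_resource_type_list
+--    ------------+-----------------------------+----------------------------
+--              1 | FTP Site,Web Portal         | FTP,HTTP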
+
+--------------------------------------------------
+-- provider_contact_view
+--------------------------------------------------
+DROP VIEW IF EXISTS provider_contact_view CASCADE;
+CREATE VIEW provider_contact_view AS
+SELECT
+
+   -- provider
+   provider.id as provider_id,
+
+   -- contact
+   string_agg(contact.version::int8::text, ',' order by contact.id) as provider_contact_version_list,
+   string_agg(contact.role,                ',' order by contact.id) as provider_contact_role_list,
+   string_agg(contact.first_name,          ',' order by contact.id) as provider_contact_first_name_list,
+   string_agg(contact.last_name,           ',' order by contact.id) as provider_contact_last_name_list,
+   string_agg(contact.middle_name,         ',' order by contact.id) as provider_contact_middle_name_list,
+   string_agg(contact.address,             ',' order by contact.id) as provider_contact_address_list,
+   string_agg(contact.notify_type,         ',' order by contact.id) as provider_contact_notify_type_list,
+   string_agg(contact.email,               ',' order by contact.id) as provider_contact_email_list,
+   string_agg(contact.phone,               ',' order by contact.id) as provider_contact_phone_list,
+   string_agg(contact.fax,                 ',' order by contact.id) as provider_contact_fax_list
+
+FROM provider
+LEFT JOIN contact ON contact.provider_id = provider.id
+GROUP BY provider.id;
+SELECT COUNT(*) AS provider_contact_view_count FROM provider_contact_view;
+SELECT * FROM provider_contact_view LIMIT 5;
+
+--------------------------------------------------
+-- provider_view
+--------------------------------------------------
+DROP VIEW IF EXISTS provider_view CASCADE;
+CREATE VIEW provider_view AS
+SELECT
+
+   -- provider
+   provider.id          as provider_id,
+   provider.version     as provider_version,
+   provider.long_name   as provider_long_name,
+   provider.short_name  as provider_short_name,
+   provider.type        as provider_type,
+
+   -- provider_resource_view
+   provider_resource_version_list,
+   provider_resource_description_list,
+   provider_resource_name_list,
+   provider_resource_path_list,
+   provider_resource_type_list,
+
+   -- provider_contact_view
+   provider_contact_version_list,
+   provider_contact_role_list,
+   provider_contact_first_name_list,
+   provider_contact_last_name_list,
+   provider_contact_middle_name_list,
+   provider_contact_address_list,
+   provider_contact_notify_type_list,
+   provider_contact_email_list,
+   provider_contact_phone_list,
+   provider_contact_fax_list
+
+FROM provider, 
+     provider_resource_view,
+     provider_contact_view
+WHERE 
+     provider.id = provider_resource_view.provider_id AND
+     provider.id = provider_contact_view.provider_id;
+SELECT COUNT(*) AS provider_view_count FROM provider_view;
+SELECT * FROM provider_view LIMIT 5;
+
+--------------------------------------------------
+-- contact_provider_view (used for product)
+--------------------------------------------------
+DROP VIEW IF EXISTS contact_provider_view CASCADE;
+CREATE VIEW contact_provider_view AS
+SELECT
+
+   -- contact
+   contact.provider_id,
+   contact.id           as contact_id,
+   contact.version      as contact_version,
+   contact.role         as contact_role,
+   contact.first_name   as contact_first_name,
+   contact.last_name    as contact_last_name,
+   contact.middle_name  as contact_middle_name,
+   contact.address      as contact_address,
+   contact.notify_type  as contact_notify_type,
+   contact.email        as contact_email,
+   contact.phone        as contact_phone,
+   contact.fax          as contact_fax,
+
+   -- provider
+   provider.type        as provider_type,
+   provider.version     as provider_version,
+   provider.long_name   as provider_long_name,
+   provider.short_name  as provider_short_name,
+
+   -- provider_resource_view
+   provider_resource_view.provider_resource_version_list,
+   provider_resource_view.provider_resource_description_list,
+   provider_resource_view.provider_resource_name_list,
+   provider_resource_view.provider_resource_path_list,
+   provider_resource_view.provider_resource_type_list
+FROM contact,
+     provider,
+     provider_resource_view
+WHERE contact.provider_id = provider.id
+AND   contact.provider_id = provider_resource_view.provider_id;
+SELECT COUNT(*) AS contact_provider_view_count FROM contact_provider_view;
+SELECT * FROM contact_provider_view LIMIT 5;
+
+--------------------------------------------------
+-- dataset_provider_view (no provider id in the dataset)
+--------------------------------------------------
+DROP VIEW IF EXISTS dataset_provider_view CASCADE;
+CREATE VIEW dataset_provider_view AS
+SELECT
+
+   -- dataset
+   dataset.provider_id,
+   dataset.id                as dataset_id,
+   dataset.version           as dataset_version,
+   dataset.long_name         as dataset_long_name,
+   dataset.short_name        as dataset_short_name,
+   dataset.metadata_endpoint as dataset_metadata_endpoint,
+   dataset.metadata_registry as dataset_metadata_registry,
+   dataset.remote_dataset_id as dataset_remote_dataset_id,
+
+   -- provider
+   provider.version          as provider_version,
+   provider.type             as provider_type,
+   provider.long_name        as provider_long_name,
+   provider.short_name       as provider_short_name,
+
+   -- provider_resource_view
+   provider_resource_view.provider_resource_version_list,
+   provider_resource_view.provider_resource_description_list,
+   provider_resource_view.provider_resource_name_list,
+   provider_resource_view.provider_resource_path_list,
+   provider_resource_view.provider_resource_type_list
+
+FROM dataset,
+     provider,
+     provider_resource_view
+WHERE dataset.provider_id = provider.id
+AND   dataset.provider_id = provider_resource_view.provider_id;
+SELECT COUNT(*) AS dataset_provider_view_count FROM dataset_provider_view;
+SELECT * FROM dataset_provider_view LIMIT 5;
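+
+-- Example usage (hypothetical id): fetch the aggregated provider record for a
+-- single dataset:
+--
+--    SELECT dataset_short_name, provider_short_name, provider_resource_path_list
+--    FROM dataset_provider_view
+--    WHERE dataset_id = 1;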
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/database/src/create_product_type_view.sql
----------------------------------------------------------------------
diff --git a/src/main/database/src/create_product_type_view.sql b/src/main/database/src/create_product_type_view.sql
new file mode 100644
index 0000000..01cadfc
--- /dev/null
+++ b/src/main/database/src/create_product_type_view.sql
@@ -0,0 +1,739 @@
+--*********************************************************************************************
+--**  Product Type Model
+--**
+--**  The product type model comprises the following views:
+--**
+--**     Product Type Model
+--**
+--**        - product_type_dataset_view
+--**            - dataset_imagery_view
+--**                - dataset_imagery
+--**                - dataset
+--**
+--**        - product_type_resource_view
+--**            - product_type
+--**            - product_type_resource
+--**
+--**        - product_type_coverage_view
+--**            - product_type
+--**            - product_type_coverage
+--**
+--**        - product_type_generation_view
+--**            - product_type
+--**            - product_type_generation
+--**
+--**        - product_type_metadata_view
+--**            - product_type
+--**            - product_type_metadata
+--**
+--**        - product_type_policy_view
+--**            - product_type
+--**            - product_type_policy
+--**
+--**        - product_type_location_policy_view
+--**            - product_type
+--**            - product_type_location_policy
+--**
+--**        - product_type_provider_view
+--**            - product_type
+--**            - provider_view (see create_imagery_provider.sql)
+--**                 - provider
+--**                 - provider_resource_view
+--**                     - provider
+--**                     - provider_resource
+--**                 - provider_contact_view 
+--**                     - provider
+--**                     - contact
+--**
+--**        - product_type_element_view
+--**            - product_type
+--**            - product_type_element_dd_view
+--**                - product_type_element
+--**                - element_dd
+--**
+--**        - product_type_datetime_view
+--**            - product_type
+--**            - product_type_datetime
+--**
+--**        - product_type_character_view
+--**            - product_type
+--**            - product_type_character
+--**
+--**        - product_type_integer_view
+--**            - product_type
+--**            - product_type_integer
+--**
+--**        - product_type_real_view
+--**            - product_type
+--**            - product_type_real
+--**
+--*********************************************************************************************
+
+-----------------------------------------------------------------------------------------------
+-- product_type_dataset_view
+-----------------------------------------------------------------------------------------------
+DROP VIEW IF EXISTS dataset_imagery_view CASCADE;
+CREATE VIEW dataset_imagery_view AS
+SELECT
+
+   -- dataset_imagery
+   dataset_imagery.pt_id       as product_type_id,
+
+   -- dataset
+   string_agg(dataset.id::int8::text,
+              ',' order by dataset.id) as dataset_id_list,
+   string_agg(dataset.revision::int8::text,
+              ',' order by dataset.id) as dataset_revision_list,
+   string_agg(dataset.description,
+              ',' order by dataset.id) as dataset_description_list,
+   string_agg(dataset.long_name,
+              ',' order by dataset.id) as dataset_long_name_list,
+   string_agg(dataset.short_name,
+              ',' order by dataset.id) as dataset_short_name_list,
+   string_agg(dataset.metadata_endpoint,
+              ',' order by dataset.id) as dataset_metadata_endpoint_list,
+   string_agg(dataset.metadata_registry,
+              ',' order by dataset.id) as dataset_metadata_registry_list,
+   string_agg(dataset.remote_dataset_id,
+              ',' order by dataset.id) as dataset_remote_dataset_id_list
+
+FROM dataset_imagery
+LEFT JOIN dataset ON dataset.id = dataset_imagery.dataset_id
+GROUP BY dataset_imagery.pt_id;
+SELECT COUNT(*) AS dataset_imagery_view_count FROM dataset_imagery_view;
+--SELECT * FROM dataset_imagery_view ORDER BY product_type_id LIMIT 5;
+
+DROP VIEW IF EXISTS product_type_dataset_view CASCADE;
+CREATE VIEW product_type_dataset_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- dataset_imagery_view
+   dataset_imagery_view.dataset_id_list                as product_type_dataset_id_list, 
+   dataset_imagery_view.dataset_revision_list          as product_type_dataset_revision_list, 
+   dataset_imagery_view.dataset_description_list       as product_type_dataset_description_list,          
+   dataset_imagery_view.dataset_long_name_list         as product_type_dataset_long_name_list,            
+   dataset_imagery_view.dataset_short_name_list        as product_type_dataset_short_name_list,           
+   dataset_imagery_view.dataset_metadata_endpoint_list as product_type_dataset_metadata_endpoint_list,    
+   dataset_imagery_view.dataset_metadata_registry_list as product_type_dataset_metadata_registry_list,    
+   dataset_imagery_view.dataset_remote_dataset_id_list as product_type_dataset_remote_dataset_id_list
+
+FROM product_type
+LEFT JOIN dataset_imagery_view ON dataset_imagery_view.product_type_id = product_type.id
+GROUP BY product_type.id,
+         dataset_imagery_view.dataset_id_list,  
+         dataset_imagery_view.dataset_revision_list,  
+         dataset_imagery_view.dataset_description_list,  
+         dataset_imagery_view.dataset_long_name_list,  
+         dataset_imagery_view.dataset_short_name_list,  
+         dataset_imagery_view.dataset_metadata_endpoint_list,
+         dataset_imagery_view.dataset_metadata_registry_list,
+         dataset_imagery_view.dataset_remote_dataset_id_list;
+SELECT COUNT(*) AS product_type_dataset_view_count FROM product_type_dataset_view;
+--SELECT * FROM product_type_dataset_view ORDER BY product_type_id LIMIT 5;
+
+---------------------------------------------------------------------------
+-- product_type_resource_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_resource_view CASCADE;
+CREATE VIEW product_type_resource_view AS
+SELECT
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_resource
+   string_agg(product_type_resource.version::int8::text,
+              ',' order by product_type_resource.id) as product_type_resource_version_list,
+   string_agg(product_type_resource.type,        
+              ',' order by product_type_resource.id) as product_type_resource_type_list,
+   string_agg(product_type_resource.name,        
+              ',' order by product_type_resource.id) as product_type_resource_name_list,
+   string_agg(product_type_resource.path,        
+              ',' order by product_type_resource.id) as product_type_resource_path_list,
+   string_agg(product_type_resource.description, 
+              ',' order by product_type_resource.id) as product_type_resource_description_list
+FROM product_type
+LEFT JOIN product_type_resource ON product_type_resource.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_resource_view_count FROM product_type_resource_view;
+--SELECT * FROM product_type_resource_view LIMIT 5;
+
+---------------------------------------------------------------------------
+-- product_type_coverage_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_coverage_view CASCADE;
+CREATE VIEW product_type_coverage_view AS 
+SELECT 
+   -- product_type
+   product_type.id as product_type_id,
+ 
+   -- product_type_coverage 
+   string_agg(product_type_coverage.version::int8::text,
+              ',' order by product_type_coverage.id) as product_type_coverage_version_list,
+   string_agg(product_type_coverage.north_latitude::real::text,      
+              ',' order by product_type_coverage.id) as product_type_coverage_north_latitude_list,
+   string_agg(product_type_coverage.east_longitude::real::text,      
+              ',' order by product_type_coverage.id) as product_type_coverage_east_longitude_list,
+   string_agg(product_type_coverage.south_latitude::real::text,      
+              ',' order by product_type_coverage.id) as product_type_coverage_south_latitude_list,
+   string_agg(product_type_coverage.west_longitude::real::text,      
+              ',' order by product_type_coverage.id) as product_type_coverage_west_longitude_list,
+   string_agg(product_type_coverage.start_time::int8::text,          
+              ',' order by product_type_coverage.id) as product_type_coverage_start_time_list,
+   string_agg(product_type_coverage.stop_time::int8::text,           
+              ',' order by product_type_coverage.id) as product_type_coverage_stop_time_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp + ((product_type_coverage.start_time/1000)::text)::interval)::timestamp::text,
+              ',' order by product_type_coverage.id) as product_type_coverage_start_time_string_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp + ((product_type_coverage.stop_time/1000)::text)::interval)::timestamp::text, 
+              ',' order by product_type_coverage.id) as product_type_coverage_stop_time_string_list
+FROM product_type
+LEFT JOIN product_type_coverage ON product_type_coverage.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_coverage_view_count FROM product_type_coverage_view;
+--SELECT * FROM product_type_coverage_view LIMIT 5;
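+
+-- Note on the time columns: start_time/stop_time hold epoch milliseconds, and
+-- the cast chain above ('1970-01-01 00:00:00 GMT'::timestamp + seconds::interval)
+-- renders them as UTC strings, e.g. 1262304000000 -> '2010-01-01 00:00:00'.
+-- An equivalent, arguably clearer form (when the session time zone is UTC) is:
+--
+--    to_timestamp(product_type_coverage.start_time / 1000)::timestamp::text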
+
+---------------------------------------------------------------------------
+-- product_type_generation_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_generation_view CASCADE;
+CREATE VIEW product_type_generation_view AS 
+SELECT
+   -- product_type
+   product_type.id as product_type_id,
+ 
+   -- product_type_generation 
+   string_agg(product_type_generation.version::int8::text,      
+              ',' order by product_type_generation.id) as product_type_generation_version_list,
+   string_agg(product_type_generation.mrf_block_size::int8::text,      
+              ',' order by product_type_generation.id) as product_type_generation_mrf_block_size_list,
+   string_agg(product_type_generation.output_sizex::int8::text,        
+              ',' order by product_type_generation.id) as product_type_generation_output_sizex_list,
+   string_agg(product_type_generation.output_sizey::int8::text,        
+              ',' order by product_type_generation.id) as product_type_generation_output_sizey_list,
+   string_agg(product_type_generation.overview_levels::int8::text,     
+              ',' order by product_type_generation.id) as product_type_generation_overview_levels_list,
+   string_agg(product_type_generation.overview_resample,               
+              ',' order by product_type_generation.id) as product_type_generation_overview_resample_list,
+   string_agg(product_type_generation.overview_scale::int8::text,      
+              ',' order by product_type_generation.id) as product_type_generation_overview_scale_list,
+   string_agg(product_type_generation.reprojection_resample,           
+              ',' order by product_type_generation.id) as product_type_generation_reprojection_resample_list,
+   string_agg(product_type_generation.resize_resample,                 
+              ',' order by product_type_generation.id) as product_type_generation_resize_resample_list,
+   string_agg(product_type_generation.vrt_nodata,                      
+              ',' order by product_type_generation.id) as product_type_generation_vrt_nodata_list
+FROM product_type
+LEFT JOIN product_type_generation ON product_type_generation.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_generation_view_count FROM product_type_generation_view;
+--SELECT * FROM product_type_generation_view LIMIT 5;
+
+---------------------------------------------------------------------------
+-- product_type_metadata_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_metadata_view;
+CREATE VIEW product_type_metadata_view AS 
+SELECT
+   -- product_type
+   product_type.id as product_type_id,
+ 
+   -- product_type_metadata 
+   product_type_metadata.version              as product_type_metadata_version,
+   product_type_metadata.asc_desc             as product_type_metadata_asc_desc,
+   product_type_metadata.science_parameter    as product_type_metadata_science_parameter,
+   product_type_metadata.data_version         as product_type_metadata_data_version,
+   product_type_metadata.day_night            as product_type_metadata_day_night,
+   product_type_metadata.display_resolution   as product_type_metadata_display_resolution,
+   product_type_metadata.instrument           as product_type_metadata_instrument,
+   product_type_metadata.native_resolution    as product_type_metadata_native_resolution,
+   product_type_metadata.platform             as product_type_metadata_platform,
+   product_type_metadata.processing_level     as product_type_metadata_processing_level,
+   product_type_metadata.project              as product_type_metadata_project,
+   product_type_metadata.source_projection_id as product_type_metadata_source_projection_id,
+   product_type_metadata.target_projection_id as product_type_metadata_target_projection_id,
+   product_type_metadata.region_coverage      as product_type_metadata_region_coverage
+FROM product_type
+LEFT JOIN product_type_metadata ON product_type_metadata.pt_id = product_type.id
+GROUP BY product_type.id,
+         product_type_metadata.version,
+         product_type_metadata.asc_desc,
+         product_type_metadata.science_parameter,
+         product_type_metadata.data_version,
+         product_type_metadata.day_night,
+         product_type_metadata.display_resolution,
+         product_type_metadata.instrument,
+         product_type_metadata.native_resolution,
+         product_type_metadata.platform,
+         product_type_metadata.processing_level,
+         product_type_metadata.project,
+         product_type_metadata.source_projection_id,
+         product_type_metadata.target_projection_id,
+         product_type_metadata.region_coverage;
+
+SELECT COUNT(*) AS product_type_metadata_view_count FROM product_type_metadata_view;
+--SELECT * FROM product_type_metadata_view LIMIT 5;
+
+---------------------------------------------------------------------------
+-- product_type_policy_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_policy_view;
+CREATE VIEW product_type_policy_view AS
+SELECT
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_policy
+   string_agg(product_type_policy.version::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_version_list,
+   string_agg(product_type_policy.access_type,
+              ',' order by product_type_policy.id) as product_type_policy_access_type_list,
+   string_agg(product_type_policy.access_constraint,
+              ',' order by product_type_policy.id) as product_type_policy_access_constraint_list,
+   string_agg(product_type_policy.use_constraint,
+              ',' order by product_type_policy.id) as product_type_policy_use_constraint_list,
+   string_agg(product_type_policy.base_path_append_type,
+              ',' order by product_type_policy.id) as product_type_policy_base_path_append_type_list,
+   string_agg(product_type_policy.checksum_type,
+              ',' order by product_type_policy.id) as product_type_policy_checksum_type_list,
+   string_agg(product_type_policy.compress_type,
+              ',' order by product_type_policy.id) as product_type_policy_compress_type_list,
+   string_agg(product_type_policy.data_class,
+              ',' order by product_type_policy.id) as product_type_policy_data_class_list,
+   string_agg(product_type_policy.data_format,
+              ',' order by product_type_policy.id) as product_type_policy_data_format_list,
+   string_agg(product_type_policy.spatial_type,
+              ',' order by product_type_policy.id) as product_type_policy_spatial_type_list,
+   string_agg(product_type_policy.data_duration::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_data_duration_list,
+   string_agg(product_type_policy.data_frequency::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_data_frequency_list,
+   string_agg(product_type_policy.data_latency::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_data_latency_list,
+   string_agg(product_type_policy.data_volume::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_data_volume_list,
+   string_agg(product_type_policy.delivery_rate::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_delivery_rate_list,
+   string_agg(product_type_policy.multi_day::int8::text,
+              ',' order by product_type_policy.id) as product_type_policy_multi_day_list,
+   string_agg(product_type_policy.multi_day_link::boolean::text,
+              ',' order by product_type_policy.id) as product_type_policy_multi_day_link_list
+
+FROM product_type
+LEFT JOIN product_type_policy ON product_type_policy.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_policy_view_count FROM product_type_policy_view;
+--SELECT * FROM product_type_policy_view LIMIT 5;
+
+---------------------------------------------------------------------------
+-- product_type_location_policy_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_location_policy_view;
+CREATE VIEW product_type_location_policy_view AS
+SELECT
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_location_policy
+   string_agg(product_type_location_policy.version::int8::text,
+              ',' order by product_type_location_policy.id) as product_type_location_policy_version_list,
+   string_agg(product_type_location_policy.type,
+              ',' order by product_type_location_policy.id) as product_type_location_policy_type_list,
+   string_agg(product_type_location_policy.base_path,
+              ',' order by product_type_location_policy.id) as product_type_location_policy_access_base_path_list
+FROM product_type
+LEFT JOIN product_type_location_policy ON product_type_location_policy.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_location_policy_view_count FROM product_type_location_policy_view;
+--SELECT * FROM product_type_location_policy_view LIMIT 5;
+
+--*********************************************************************************************
+-- Product Type Provider Model
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_type_provider_view
+--------------------------------------------------
+
+DROP VIEW IF EXISTS product_type_provider_view CASCADE;
+CREATE VIEW product_type_provider_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- provider_view
+   provider_view.provider_version                     as product_type_provider_version,
+   provider_view.provider_long_name                   as product_type_provider_long_name,
+   provider_view.provider_short_name                  as product_type_provider_short_name,
+   provider_view.provider_type                        as product_type_provider_type,            
+   provider_view.provider_resource_version_list       as product_type_provider_resource_version_list,
+   provider_view.provider_resource_description_list   as product_type_provider_resource_description_list,
+   provider_view.provider_resource_name_list          as product_type_provider_resource_name_list,
+   provider_view.provider_resource_path_list          as product_type_provider_resource_path_list,
+   provider_view.provider_resource_type_list          as product_type_provider_resource_type_list,
+   provider_view.provider_contact_version_list        as product_type_provider_contact_version_list,
+   provider_view.provider_contact_role_list           as product_type_provider_contact_role_list,
+   provider_view.provider_contact_first_name_list     as product_type_provider_contact_first_name_list,
+   provider_view.provider_contact_last_name_list      as product_type_provider_contact_last_name_list,
+   provider_view.provider_contact_middle_name_list    as product_type_provider_contact_middle_name_list,
+   provider_view.provider_contact_address_list        as product_type_provider_contact_address_list,
+   provider_view.provider_contact_notify_type_list    as product_type_provider_contact_notify_type_list,
+   provider_view.provider_contact_email_list          as product_type_provider_contact_email_list,
+   provider_view.provider_contact_phone_list          as product_type_provider_contact_phone_list,
+   provider_view.provider_contact_fax_list            as product_type_provider_contact_fax_list
+
+FROM product_type,
+     provider_view
+WHERE product_type.provider_id = provider_view.provider_id;
+SELECT COUNT(*) AS product_type_provider_view_count FROM product_type_provider_view;
+--SELECT * FROM product_type_provider_view LIMIT 5;
+
+--*********************************************************************************************
+-- Product Type Elements Model
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_type_element_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_type_element_dd_view CASCADE;
+CREATE VIEW product_type_element_dd_view AS
+SELECT
+
+   -- product_type_element
+   product_type_element.id,
+   product_type_element.pt_id                  as product_type_id,
+   product_type_element.version                as product_type_element_version,
+   product_type_element.obligation_flag        as product_type_element_obligation_flag,
+   product_type_element.scope                  as product_type_element_scope,
+
+   -- element_dd
+   string_agg(element_dd.version::int8::text,    ';' order by element_dd.id) as product_type_element_dd_versions,
+   string_agg(element_dd.type,                   ';' order by element_dd.id) as product_type_element_dd_types,
+   string_agg(element_dd.description,            ';' order by element_dd.id) as product_type_element_dd_descriptions,
+   string_agg(element_dd.scope,                  ';' order by element_dd.id) as product_type_element_dd_scopes,
+   string_agg(element_dd.long_name,              ';' order by element_dd.id) as product_type_element_dd_long_names,
+   string_agg(element_dd.short_name,             ';' order by element_dd.id) as product_type_element_dd_short_names,
+   string_agg(element_dd.max_length::int8::text, ';' order by element_dd.id) as product_type_element_dd_max_lengths
+
+FROM product_type_element
+LEFT JOIN element_dd ON product_type_element.element_id = element_dd.id
+GROUP BY product_type_element.id;
+SELECT COUNT(*) AS product_type_element_dd_view_count FROM product_type_element_dd_view;
+--SELECT * FROM product_type_element_dd_view LIMIT 5;
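+
+-- Delimiter convention: element_dd values are joined with ';' per element here,
+-- and product_type_element_view below joins the per-element strings with ','.
+-- With hypothetical data, two elements (the second having two dictionary
+-- entries) would aggregate to:
+--
+--    product_type_element_dd_short_names     -> 'sst'  and  'wind_speed;wind_dir'
+--    product_type_element_dd_short_name_list -> 'sst,wind_speed;wind_dir'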
+
+DROP VIEW IF EXISTS product_type_element_view CASCADE;
+CREATE VIEW product_type_element_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_element_dd_view
+   string_agg(product_type_element_dd_view.product_type_element_version::int8::text,
+              ',' order by product_type_element_dd_view.id) as product_type_element_version_list,
+   string_agg(product_type_element_dd_view.product_type_element_obligation_flag::boolean::text,
+              ',' order by product_type_element_dd_view.id) as product_type_element_obligation_flag_list,
+   string_agg(product_type_element_dd_view.product_type_element_scope,
+              ',' order by product_type_element_dd_view.id) as product_type_element_scope_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_versions,                   
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_version_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_types,                   
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_type_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_descriptions,            
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_description_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_scopes,                  
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_scope_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_long_names,              
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_long_name_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_short_names,             
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_short_name_list,
+   string_agg(product_type_element_dd_view.product_type_element_dd_max_lengths, 
+              ',' order by product_type_element_dd_view.id) as product_type_element_dd_max_length_list
+
+FROM product_type
+LEFT JOIN product_type_element_dd_view ON product_type_element_dd_view.product_type_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_element_view_count FROM product_type_element_view;
+--SELECT * FROM product_type_element_view LIMIT 5;
+
+--------------------------------------------------
+-- product_type_datetime_view
+--------------------------------------------------
+
+DROP VIEW IF EXISTS product_type_datetime_view CASCADE;
+CREATE VIEW product_type_datetime_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_datetime
+   string_agg(product_type_datetime.version::int8::text,
+              ',' order by product_type_datetime.id) as product_type_datetime_version_list,
+   string_agg(product_type_datetime.value_long::int8::text,
+              ',' order by product_type_datetime.id) as product_type_datetime_value_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp +
+              ((product_type_datetime.value_long/1000)::text)::interval)::timestamp::text,
+              ',' order by product_type_datetime.id) as product_type_datetime_value_string_list
+
+FROM product_type
+LEFT JOIN product_type_datetime ON product_type_datetime.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_datetime_view_count FROM product_type_datetime_view;
+--SELECT * FROM product_type_datetime_view LIMIT 5;
+
+--------------------------------------------------
+-- product_type_character_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_type_character_view CASCADE;
+CREATE VIEW product_type_character_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_character
+   string_agg(product_type_character.version::int8::text,
+              ',' order by product_type_character.id) as product_type_character_version_list,
+   string_agg(product_type_character.value,
+              ',' order by product_type_character.id) as product_type_character_value_list
+
+FROM product_type
+LEFT JOIN product_type_character ON product_type_character.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_character_view_count FROM product_type_character_view;
+--SELECT * FROM product_type_character_view LIMIT 5;
+
+--------------------------------------------------
+-- product_type_integer_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_type_integer_view CASCADE;
+CREATE VIEW product_type_integer_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_integer
+   string_agg(product_type_integer.version::int8::text,
+              ',' order by product_type_integer.id) as product_type_integer_version_list,
+   string_agg(product_type_integer.units,
+              ',' order by product_type_integer.id) as product_type_integer_units_list,
+   string_agg(product_type_integer.value::int::text,
+              ',' order by product_type_integer.id) as product_type_integer_value_list
+
+FROM product_type
+LEFT JOIN product_type_integer ON product_type_integer.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_integer_view_count FROM product_type_integer_view;
+--SELECT * FROM product_type_integer_view LIMIT 5;
+
+--------------------------------------------------
+-- product_type_real_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_type_real_view CASCADE;
+CREATE VIEW product_type_real_view AS
+SELECT
+
+   -- product_type
+   product_type.id as product_type_id,
+
+   -- product_type_real
+   string_agg(product_type_real.version::int8::text,
+              ',' order by product_type_real.id) as product_type_real_version_list,
+   string_agg(product_type_real.units,
+              ',' order by product_type_real.id) as product_type_real_units_list,
+   string_agg(product_type_real.value::numeric::text,
+              ',' order by product_type_real.id) as product_type_real_value_list
+
+FROM product_type
+LEFT JOIN product_type_real ON product_type_real.pt_id = product_type.id
+GROUP BY product_type.id;
+SELECT COUNT(*) AS product_type_real_view_count FROM product_type_real_view;
+--SELECT * FROM product_type_real_view LIMIT 5;
+
+-----------------------------------------------------------------------------------------------
+-- product_type_view
+-----------------------------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_type_view CASCADE;
+CREATE VIEW product_type_view AS 
+SELECT 
+
+   -- product_type
+   product_type.id,
+   product_type.id           as product_type_id, 
+   product_type.version      as product_type_version, 
+   product_type.provider_id  as product_type_provider_id, 
+   product_type.identifier   as product_type_identifier,
+   product_type.title        as product_type_title,
+   product_type.description  as product_type_description,
+   product_type.purgable     as product_type_purgable,
+   product_type.purge_rate   as product_type_purge_rate,
+   product_type.last_updated as product_type_last_updated,
+   '1970-01-01 00:00:00 GMT'::timestamp + ((product_type.last_updated/1000)::text)::interval 
+                             as product_type_last_updated_string,
+
+   -- product_type_dataset_view
+   product_type_dataset_id_list,
+   product_type_dataset_revision_list,
+   product_type_dataset_description_list,
+   product_type_dataset_long_name_list,
+   product_type_dataset_short_name_list,
+   product_type_dataset_metadata_endpoint_list,
+   product_type_dataset_metadata_registry_list,
+   product_type_dataset_remote_dataset_id_list,
+
+   -- product_type_resource
+   product_type_resource_version_list,
+   product_type_resource_type_list,
+   product_type_resource_name_list,
+   product_type_resource_path_list,
+   product_type_resource_description_list,
+
+   -- product_type_coverage 
+   product_type_coverage_version_list,
+   product_type_coverage_east_longitude_list,
+   product_type_coverage_west_longitude_list,
+   product_type_coverage_north_latitude_list,
+   product_type_coverage_south_latitude_list,
+   product_type_coverage_stop_time_list,
+   product_type_coverage_stop_time_string_list,
+   product_type_coverage_start_time_list,
+   product_type_coverage_start_time_string_list,
+
+   -- product_type_generation
+   product_type_generation_version_list,
+   product_type_generation_mrf_block_size_list,
+   product_type_generation_output_sizex_list,
+   product_type_generation_output_sizey_list,
+   product_type_generation_overview_levels_list,
+   product_type_generation_overview_resample_list,
+   product_type_generation_overview_scale_list,
+   product_type_generation_reprojection_resample_list,
+   product_type_generation_resize_resample_list,
+   product_type_generation_vrt_nodata_list,
+
+   -- product_type_metadata
+   product_type_metadata_version,
+   product_type_metadata_asc_desc,
+   product_type_metadata_science_parameter,
+   product_type_metadata_data_version,
+   product_type_metadata_day_night,
+   product_type_metadata_display_resolution,
+   product_type_metadata_instrument,
+   product_type_metadata_native_resolution,
+   product_type_metadata_platform,
+   product_type_metadata_processing_level,
+   product_type_metadata_project,
+   product_type_metadata_source_projection_id,
+   product_type_metadata_target_projection_id,
+   product_type_metadata_region_coverage,
+
+   -- product_type_policy
+   product_type_policy_version_list,
+   product_type_policy_access_type_list,
+   product_type_policy_access_constraint_list,
+   product_type_policy_use_constraint_list,
+   product_type_policy_base_path_append_type_list,
+   product_type_policy_checksum_type_list,
+   product_type_policy_compress_type_list,
+   product_type_policy_data_class_list,
+   product_type_policy_data_format_list,
+   product_type_policy_spatial_type_list,
+   product_type_policy_data_duration_list,
+   product_type_policy_data_frequency_list,
+   product_type_policy_data_latency_list,
+   product_type_policy_data_volume_list,
+   product_type_policy_delivery_rate_list,
+   product_type_policy_multi_day_list,
+   product_type_policy_multi_day_link_list,
+
+   -- product_type_location_policy
+   product_type_location_policy_version_list,
+   product_type_location_policy_type_list,
+   product_type_location_policy_access_base_path_list,
+
+   -- product_type_provider_view
+   product_type_provider_version,
+   product_type_provider_long_name,
+   product_type_provider_short_name,
+   product_type_provider_type,
+   product_type_provider_resource_version_list,
+   product_type_provider_resource_description_list,
+   product_type_provider_resource_name_list,
+   product_type_provider_resource_path_list,
+   product_type_provider_resource_type_list,
+   product_type_provider_contact_version_list,
+   product_type_provider_contact_role_list,
+   product_type_provider_contact_first_name_list,
+   product_type_provider_contact_last_name_list,
+   product_type_provider_contact_middle_name_list,
+   product_type_provider_contact_address_list,
+   product_type_provider_contact_notify_type_list,
+   product_type_provider_contact_email_list,
+   product_type_provider_contact_phone_list,
+   product_type_provider_contact_fax_list,
+
+   -- product_type_element_view
+   product_type_element_obligation_flag_list,
+   product_type_element_scope_list,
+   product_type_element_dd_version_list,
+   product_type_element_dd_type_list,
+   product_type_element_dd_description_list,
+   product_type_element_dd_scope_list,
+   product_type_element_dd_long_name_list,
+   product_type_element_dd_short_name_list,
+   product_type_element_dd_max_length_list,
+
+   -- product_type_datetime_view
+   product_type_datetime_version_list,
+   product_type_datetime_value_list,
+   product_type_datetime_value_string_list,
+
+   -- product_type_character_view
+   product_type_character_version_list,
+   product_type_character_value_list,
+
+   -- product_type_integer_view
+   product_type_integer_version_list,
+   product_type_integer_value_list,
+   product_type_integer_units_list,
+
+   -- product_type_real_view
+   product_type_real_version_list,
+   product_type_real_value_list,
+   product_type_real_units_list
+
+FROM
+   product_type,
+   product_type_dataset_view,
+   product_type_resource_view,
+   product_type_coverage_view,
+   product_type_generation_view,
+   product_type_metadata_view,
+   product_type_policy_view,
+   product_type_location_policy_view,
+   product_type_provider_view,
+   product_type_element_view,
+   product_type_datetime_view,
+   product_type_character_view,
+   product_type_integer_view,
+   product_type_real_view
+WHERE
+   product_type.id = product_type_dataset_view.product_type_id AND
+   product_type.id = product_type_resource_view.product_type_id AND
+   product_type.id = product_type_coverage_view.product_type_id AND
+   product_type.id = product_type_generation_view.product_type_id AND
+   product_type.id = product_type_metadata_view.product_type_id AND
+   product_type.id = product_type_policy_view.product_type_id AND
+   product_type.id = product_type_location_policy_view.product_type_id AND
+   product_type.id = product_type_provider_view.product_type_id AND
+   product_type.id = product_type_element_view.product_type_id AND
+   product_type.id = product_type_datetime_view.product_type_id AND
+   product_type.id = product_type_character_view.product_type_id AND
+   product_type.id = product_type_integer_view.product_type_id AND
+   product_type.id = product_type_real_view.product_type_id;
+SELECT COUNT(*) AS product_type_view_count FROM product_type_view;
+--SELECT * FROM product_type_view LIMIT 5;
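+-- NOTE: the comma-separated FROM list with WHERE equalities above is the
+-- implicit form of an INNER JOIN. Because every component view groups by
+-- product_type.id (one row per product type), an equivalent explicit form
+-- would be, e.g.:
+--   FROM product_type
+--   JOIN product_type_dataset_view
+--     ON product_type_dataset_view.product_type_id = product_type.id
+--   ... (and so on for the remaining component views)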


[04/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product/conf/solrconfig.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product/conf/solrconfig.xml b/src/main/solr/product/conf/solrconfig.xml
new file mode 100644
index 0000000..fbf1878
--- /dev/null
+++ b/src/main/solr/product/conf/solrconfig.xml
@@ -0,0 +1,1626 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- 
+     For more details about configurations options that may appear in
+     this file, see http://wiki.apache.org/solr/SolrConfigXml. 
+-->
+<config>
+  <!-- In all configuration below, a prefix of "solr." for class names
+       is an alias that causes solr to search appropriate packages,
+       including org.apache.solr.(search|update|request|core|analysis)
+
+       You may also specify a fully qualified Java classname if you
+       have your own custom plugins.
+    -->
+
+  <!-- Controls what version of Lucene various components of Solr
+       adhere to.  Generally, you want to use the latest version to
+       get all bug fixes and improvements. It is highly recommended
+       that you fully re-index after changing this setting as it can
+       affect both how text is indexed and queried.
+  -->
+  <luceneMatchVersion>5.0.0</luceneMatchVersion>
+
+  <!-- <lib/> directives can be used to instruct Solr to load any Jars
+       identified and use them to resolve any "plugins" specified in
+       your solrconfig.xml or schema.xml (ie: Analyzers, Request
+       Handlers, etc...).
+
+       All directories and paths are resolved relative to the
+       instanceDir.
+
+       Please note that <lib/> directives are processed in the order
+       that they appear in your solrconfig.xml file, and are "stacked" 
+       on top of each other when building a ClassLoader - so if you have 
+       plugin jars with dependencies on other jars, the "lower level" 
+       dependency jars should be loaded first.
+
+       If a "./lib" directory exists in your instanceDir, all files
+       found in it are included as if you had used the following
+       syntax...
+       
+              <lib dir="./lib" />
+    -->
+
+  <!-- A 'dir' option by itself adds any files found in the directory 
+       to the classpath; this is useful for including all jars in a
+       directory.
+
+       When a 'regex' is specified in addition to a 'dir', only the
+       files in that directory which completely match the regex
+       (anchored on both ends) will be included.
+
+       If a 'dir' option (with or without a regex) is used and nothing
+       is found that matches, a warning will be logged.
+
+       The examples below can be used to load some solr-contribs along 
+       with their external dependencies.
+    -->
+  <lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Adding lib path to dataimport jar file  -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/clustering/lib/" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-clustering-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
+
+  <lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
+  <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
+
+  <!-- an exact 'path' can be used instead of a 'dir' to specify a 
+       specific jar file.  This will cause a serious error to be logged 
+       if it can't be loaded.
+    -->
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Adding lib path to postgresql JDBC jar file -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <lib path="./lib/postgresql-9.4-1201.jdbc4.jar" /> 
+
+  <!-- Data Directory
+
+       Used to specify an alternate directory to hold all index data
+       other than the default ./data under the Solr home.  If
+       replication is in use, this should match the replication
+       configuration.
+    -->
+  <dataDir>${solr.data.dir:}</dataDir>
+
+
+  <!-- The DirectoryFactory to use for indexes.
+       
+       solr.StandardDirectoryFactory is filesystem
+       based and tries to pick the best implementation for the current
+       JVM and platform.  solr.NRTCachingDirectoryFactory, the default,
+       wraps solr.StandardDirectoryFactory and caches small files in memory
+       for better NRT performance.
+
+       One can force a particular implementation via solr.MMapDirectoryFactory,
+       solr.NIOFSDirectoryFactory, or solr.SimpleFSDirectoryFactory.
+
+       solr.RAMDirectoryFactory is memory based, not
+       persistent, and doesn't work with replication.
+    -->
+  <directoryFactory name="DirectoryFactory"
+                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
+
+  <!-- The CodecFactory for defining the format of the inverted index.
+       The default implementation is SchemaCodecFactory, which is the official Lucene
+       index format, but hooks into the schema to provide per-field customization of
+       the postings lists and per-document values in the fieldType element
+       (postingsFormat/docValuesFormat). Note that most of the alternative implementations
+       are experimental, so if you choose to customize the index format, it's a good
+       idea to convert back to the official format e.g. via IndexWriter.addIndexes(IndexReader)
+       before upgrading to a newer version to avoid unnecessary reindexing.
+  -->
+  <codecFactory class="solr.SchemaCodecFactory"/>
+
+  <!-- To disable dynamic schema REST APIs, use the following for <schemaFactory>:
+  
+       <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+       When ManagedIndexSchemaFactory is specified instead, Solr will load the schema from
+       the resource named in 'managedSchemaResourceName', rather than from schema.xml.
+       Note that the managed schema resource CANNOT be named schema.xml.  If the managed
+       schema does not exist, Solr will create it after reading schema.xml, then rename
+       'schema.xml' to 'schema.xml.bak'. 
+       
+       Do NOT hand edit the managed schema - external modifications will be ignored and
+       overwritten as a result of schema modification REST API calls.
+
+       When ManagedIndexSchemaFactory is specified with mutable = true, schema
+       modification REST API calls will be allowed; otherwise, error responses will be
+       sent back for these requests. 
+
+  <schemaFactory class="ManagedIndexSchemaFactory">
+    <bool name="mutable">true</bool>
+    <str name="managedSchemaResourceName">managed-schema</str>
+  </schemaFactory>
+
+  -->
+
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+  <!-- GIBS: Use classic index schema -->
+  <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+  <schemaFactory class="ClassicIndexSchemaFactory"/>
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Index Config - These settings control low-level behavior of indexing
+       Most example settings here show the default value, but are commented
+       out, to more easily see where customizations have been made.
+       
+       Note: This replaces <indexDefaults> and <mainIndex> from older versions
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <indexConfig>
+    <!-- maxFieldLength was removed in 4.0. To get similar behavior, include a 
+         LimitTokenCountFilterFactory in your fieldType definition. E.g. 
+     <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
+    -->
+    <!-- Maximum time to wait for a write lock (ms) for an IndexWriter. Default: 1000 -->
+    <!-- <writeLockTimeout>1000</writeLockTimeout>  -->
+
+    <!-- The maximum number of simultaneous threads that may be
+         indexing documents at once in IndexWriter; if more than this
+         many threads arrive they will wait for others to finish.
+         Default in Solr/Lucene is 8. -->
+    <!-- <maxIndexingThreads>8</maxIndexingThreads>  -->
+
+    <!-- Expert: Enabling compound file will use fewer files for the index, 
+         using fewer file descriptors at the expense of a performance decrease. 
+         Default in Lucene is "true". Default in Solr is "false" (since 3.6) -->
+    <!-- <useCompoundFile>false</useCompoundFile> -->
+
+    <!-- ramBufferSizeMB sets the amount of RAM that may be used by Lucene
+         indexing for buffering added documents and deletions before they are
+         flushed to the Directory.
+         maxBufferedDocs sets a limit on the number of documents buffered
+         before flushing.
+         If both ramBufferSizeMB and maxBufferedDocs are set, then
+         Lucene will flush based on whichever limit is hit first.  -->
+    <!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
+    <!-- <maxBufferedDocs>1000</maxBufferedDocs> -->
+
+    <!-- Expert: Merge Policy 
+         The Merge Policy in Lucene controls how merging of segments is done.
+         The default since Solr/Lucene 3.3 is TieredMergePolicy.
+         The default since Lucene 2.3 was the LogByteSizeMergePolicy;
+         even older versions of Lucene used LogDocMergePolicy.
+      -->
+    <!--
+        <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
+          <int name="maxMergeAtOnce">10</int>
+          <int name="segmentsPerTier">10</int>
+        </mergePolicy>
+      -->
+
+    <!-- Merge Factor
+         The merge factor controls how many segments will get merged at a time.
+         For TieredMergePolicy, mergeFactor is a convenience parameter which
+         will set both MaxMergeAtOnce and SegmentsPerTier at once.
+         For LogByteSizeMergePolicy, mergeFactor decides how many new segments
+         will be allowed before they are merged into one.
+         Default is 10 for both merge policies.
+      -->
+    <!-- 
+    <mergeFactor>10</mergeFactor>
+      -->
+
+    <!-- Expert: Merge Scheduler
+         The Merge Scheduler in Lucene controls how merges are
+         performed.  The ConcurrentMergeScheduler (Lucene 2.3 default)
+         can perform merges in the background using separate threads.
+         The SerialMergeScheduler (Lucene 2.2 default) does not.
+     -->
+    <!-- 
+       <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>
+       -->
+
+    <!-- LockFactory 
+
+         This option specifies which Lucene LockFactory implementation
+         to use.
+      
+         single = SingleInstanceLockFactory - suggested for a
+                  read-only index or when there is no possibility of
+                  another process trying to modify the index.
+         native = NativeFSLockFactory - uses OS native file locking.
+                  Do not use when multiple solr webapps in the same
+                  JVM are attempting to share a single index.
+         simple = SimpleFSLockFactory  - uses a plain file for locking
+
+         Defaults: 'native' is the default for Solr 3.6 and later; otherwise
+                   'simple' is the default
+
+         More details on the nuances of each LockFactory...
+         http://wiki.apache.org/lucene-java/AvailableLockFactories
+    -->
+    <lockType>${solr.lock.type:native}</lockType>
+
+    <!-- Unlock On Startup
+
+         If true, unlock any held write or commit locks on startup.
+         This defeats the locking mechanism that allows multiple
+         processes to safely access a lucene index, and should be used
+         with care. Default is "false".
+
+         This is not needed if lock type is 'single'
+     -->
+    <!--
+    <unlockOnStartup>false</unlockOnStartup>
+      -->
+
+    <!-- Commit Deletion Policy
+         Custom deletion policies can be specified here. The class must
+         implement org.apache.lucene.index.IndexDeletionPolicy.
+
+         The default Solr IndexDeletionPolicy implementation supports
+         deleting index commit points based on the number of commits, the
+         age of the commit point, and optimized status.
+         
+         The latest commit point should always be preserved regardless
+         of the criteria.
+    -->
+    <!-- 
+    <deletionPolicy class="solr.SolrDeletionPolicy">
+    -->
+    <!-- The number of commit points to be kept -->
+    <!-- <str name="maxCommitsToKeep">1</str> -->
+    <!-- The number of optimized commit points to be kept -->
+    <!-- <str name="maxOptimizedCommitsToKeep">0</str> -->
+    <!--
+        Delete all commit points once they have reached the given age.
+        Supports DateMathParser syntax e.g.
+      -->
+    <!--
+       <str name="maxCommitAge">30MINUTES</str>
+       <str name="maxCommitAge">1DAY</str>
+    -->
+    <!-- 
+    </deletionPolicy>
+    -->
+
+    <!-- Lucene Infostream
+       
+         To aid in advanced debugging, Lucene provides an "InfoStream"
+         of detailed information when indexing.
+
+         Setting the value to true will instruct the underlying Lucene
+         IndexWriter to write its debugging info to the specified file.
+      -->
+    <!-- <infoStream file="INFOSTREAM.txt">false</infoStream> -->
+  </indexConfig>
+
+
+  <!-- JMX
+       
+       This example enables JMX if and only if an existing MBeanServer
+       is found, use this if you want to configure JMX through JVM
+       parameters. Remove this to disable exposing Solr configuration
+       and statistics to JMX.
+
+       For more details see http://wiki.apache.org/solr/SolrJmx
+    -->
+  <jmx />
+  <!-- If you want to connect to a particular server, specify the
+       agentId 
+    -->
+  <!-- <jmx agentId="myAgent" /> -->
+  <!-- If you want to start a new MBeanServer, specify the serviceUrl -->
+  <!-- <jmx serviceUrl="service:jmx:rmi:///jndi/rmi://localhost:9999/solr"/>
+    -->
+
+  <!-- The default high-performance update handler -->
+  <updateHandler class="solr.DirectUpdateHandler2">
+
+    <!-- Enables a transaction log, used for real-time get, durability,
+         and solr cloud replica recovery.  The log can grow as big as
+         uncommitted changes to the index, so use of a hard autoCommit
+         is recommended (see below).
+         "dir" - the target directory for transaction logs, defaults to the
+                solr data directory.  -->
+    <updateLog>
+      <str name="dir">${solr.ulog.dir:}</str>
+    </updateLog>
+
+    <!-- AutoCommit
+
+         Perform a hard commit automatically under certain conditions.
+         Instead of enabling autoCommit, consider using "commitWithin"
+         when adding documents. 
+
+         http://wiki.apache.org/solr/UpdateXmlMessages
+
+         maxDocs - Maximum number of documents to add since the last
+                   commit before automatically triggering a new commit.
+
+         maxTime - Maximum amount of time in ms that is allowed to pass
+                   since a document was added before automatically
+                   triggering a new commit. 
+         openSearcher - if false, the commit causes recent index changes
+           to be flushed to stable storage, but does not cause a new
+           searcher to be opened to make those changes visible.
+
+         If the updateLog is enabled, then it's highly recommended to
+         have some sort of hard autoCommit to limit the log size.
+      -->
+    <autoCommit>
+      <maxTime>15000</maxTime>
+      <openSearcher>false</openSearcher>
+    </autoCommit>
+
+    <!-- softAutoCommit is like autoCommit except it causes a
+         'soft' commit which only ensures that changes are visible
+         but does not ensure that data is synced to disk.  This is
+         faster and more near-realtime friendly than a hard commit.
+      -->
+    <!--
+      <autoSoftCommit> 
+        <maxTime>1000</maxTime> 
+      </autoSoftCommit>
+     -->
+
+    <!-- Update Related Event Listeners
+         
+         Various IndexWriter related events can trigger Listeners to
+         take actions.
+
+         postCommit - fired after every commit or optimize command
+         postOptimize - fired after every optimize command
+      -->
+    <!-- The RunExecutableListener executes an external command from a
+         hook such as postCommit or postOptimize.
+         
+         exe - the name of the executable to run
+         dir - dir to use as the current working directory. (default=".")
+         wait - the calling thread waits until the executable returns. 
+                (default="true")
+         args - the arguments to pass to the program.  (default is none)
+         env - environment variables to set.  (default is none)
+      -->
+    <!-- This example shows how RunExecutableListener could be used
+         with the script based replication...
+         http://wiki.apache.org/solr/CollectionDistribution
+      -->
+    <!--
+       <listener event="postCommit" class="solr.RunExecutableListener">
+         <str name="exe">solr/bin/snapshooter</str>
+         <str name="dir">.</str>
+         <bool name="wait">true</bool>
+         <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
+         <arr name="env"> <str>MYVAR=val1</str> </arr>
+       </listener>
+      -->
+
+  </updateHandler>
+
+  <!-- IndexReaderFactory
+
+       Use the following format to specify a custom IndexReaderFactory,
+       which allows for alternate IndexReader implementations.
+
+       ** Experimental Feature **
+
+       Please note - Using a custom IndexReaderFactory may prevent
+       certain other features from working. The API to
+       IndexReaderFactory may change without warning or may even be
+       removed from future releases if the problems cannot be
+       resolved.
+
+
+       ** Features that may not work with custom IndexReaderFactory **
+
+       The ReplicationHandler assumes a disk-resident index. Using a
+       custom IndexReader implementation may cause incompatibility
+       with ReplicationHandler and may cause replication to not work
+       correctly. See SOLR-1366 for details.
+
+    -->
+  <!--
+  <indexReaderFactory name="IndexReaderFactory" class="package.class">
+    <str name="someArg">Some Value</str>
+  </indexReaderFactory >
+  -->
+
+  <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+       Query section - these settings control query time things like caches
+       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
+  <query>
+    <!-- Max Boolean Clauses
+
+         Maximum number of clauses in each BooleanQuery; an exception
+         is thrown if exceeded.
+
+         ** WARNING **
+         
+         This option actually modifies a global Lucene property that
+         will affect all SolrCores.  If multiple solrconfig.xml files
+         disagree on this property, the value at any given moment will
+         be based on the last SolrCore to be initialized.
+         
+      -->
+    <maxBooleanClauses>1024</maxBooleanClauses>
+
+
+    <!-- Solr Internal Query Caches
+
+         There are two implementations of cache available for Solr,
+         LRUCache, based on a synchronized LinkedHashMap, and
+         FastLRUCache, based on a ConcurrentHashMap.  
+
+         FastLRUCache has faster gets and slower puts in single
+         threaded operation and thus is generally faster than LRUCache
+         when the hit ratio of the cache is high (> 75%), and may be
+         faster under other scenarios on multi-cpu systems.
+    -->
+
+    <!-- Filter Cache
+
+         Cache used by SolrIndexSearcher for filters (DocSets),
+         unordered sets of *all* documents that match a query.  When a
+         new searcher is opened, its caches may be prepopulated or
+         "autowarmed" using data from caches in the old searcher.
+         autowarmCount is the number of items to prepopulate.  For
+         LRUCache, the autowarmed items will be the most recently
+         accessed items.
+
+         Parameters:
+           class - the SolrCache implementation to use
+               (LRUCache or FastLRUCache)
+           size - the maximum number of entries in the cache
+           initialSize - the initial capacity (number of entries) of
+               the cache.  (see java.util.HashMap)
+           autowarmCount - the number of entries to prepopulate from
+               an old cache.
+      -->
+    <filterCache class="solr.FastLRUCache"
+                 size="512"
+                 initialSize="512"
+                 autowarmCount="0"/>
+
+    <!-- Query Result Cache
+         
+         Caches results of searches - ordered lists of document ids
+         (DocList) based on a query, a sort, and the range of documents requested.  
+      -->
+    <queryResultCache class="solr.LRUCache"
+                      size="512"
+                      initialSize="512"
+                      autowarmCount="0"/>
+
+    <!-- Document Cache
+
+         Caches Lucene Document objects (the stored fields for each
+         document).  Since Lucene internal document ids are transient,
+         this cache will not be autowarmed.  
+      -->
+    <documentCache class="solr.LRUCache"
+                   size="512"
+                   initialSize="512"
+                   autowarmCount="0"/>
+
+    <!-- Field Value Cache
+         
+         Cache used to hold field values that are quickly accessible
+         by document id.  The fieldValueCache is created by default
+         even if not configured here.
+      -->
+    <!--
+       <fieldValueCache class="solr.FastLRUCache"
+                        size="512"
+                        autowarmCount="128"
+                        showItems="32" />
+      -->
+
+    <!-- Custom Cache
+
+         Example of a generic cache.  These caches may be accessed by
+         name through SolrIndexSearcher.getCache(),cacheLookup(), and
+         cacheInsert().  The purpose is to enable easy caching of
+         user/application level data.  The regenerator argument should
+         be specified as an implementation of solr.CacheRegenerator 
+         if autowarming is desired.  
+      -->
+    <!--
+       <cache name="myUserCache"
+              class="solr.LRUCache"
+              size="4096"
+              initialSize="1024"
+              autowarmCount="1024"
+              regenerator="com.mycompany.MyRegenerator"
+              />
+      -->
+
+
+    <!-- Lazy Field Loading
+
+         If true, stored fields that are not requested will be loaded
+         lazily.  This can result in a significant speed improvement
+         if the usual case is to not load all stored fields,
+         especially if the skipped fields are large compressed text
+         fields.
+    -->
+    <enableLazyFieldLoading>true</enableLazyFieldLoading>
+
+    <!-- Use Filter For Sorted Query
+ 
+         A possible optimization that attempts to use a filter to
+         satisfy a search.  If the requested sort does not include
+         score, then the filterCache will be checked for a filter
+         matching the query. If found, the filter will be used as the
+         source of document ids, and then the sort will be applied to
+         that.
+ 
+         For most situations, this will not be useful unless you
+         frequently get the same search repeatedly with different sort
+         options, and none of them ever use "score"
+      -->
+    <!--
+       <useFilterForSortedQuery>true</useFilterForSortedQuery>
+      -->
+
+    <!-- Result Window Size
+ 
+         An optimization for use with the queryResultCache.  When a search
+         is requested, a superset of the requested number of document ids
+         are collected.  For example, if a search for a particular query
+         requests matching documents 10 through 19, and queryResultWindowSize is 50,
+         then documents 0 through 49 will be collected and cached.  Any further
+         requests in that range can be satisfied via the cache.  
+      -->
+    <queryResultWindowSize>20</queryResultWindowSize>
+
+    <!-- Maximum number of documents to cache for any entry in the
+         queryResultCache. 
+      -->
+    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
+
+    <!-- Query Related Event Listeners
+ 
+         Various IndexSearcher related events can trigger Listeners to
+         take actions.
+ 
+         newSearcher - fired whenever a new searcher is being prepared
+         and there is a current searcher handling requests (aka
+         registered).  It can be used to prime certain caches to
+         prevent long request times for certain requests.
+ 
+         firstSearcher - fired whenever a new searcher is being
+         prepared but there is no current registered searcher to handle
+         requests or to gain autowarming data from.
+ 
+         
+      -->
+    <!-- QuerySenderListener takes an array of NamedList and executes a
+         local query request for each NamedList in sequence. 
+      -->
+    <listener event="newSearcher" class="solr.QuerySenderListener">
+      <arr name="queries">
+        <!--
+           <lst><str name="q">solr</str><str name="sort">price asc</str></lst>
+           <lst><str name="q">rocks</str><str name="sort">weight asc</str></lst>
+          -->
+      </arr>
+    </listener>
+    <listener event="firstSearcher" class="solr.QuerySenderListener">
+      <arr name="queries">
+        <!--
+        <lst>
+          <str name="q">static firstSearcher warming in solrconfig.xml</str>
+        </lst>
+        -->
+      </arr>
+    </listener>
+
+    <!-- Use Cold Searcher
+
+         If a search request comes in and there is no current
+         registered searcher, then immediately register the still
+         warming searcher and use it.  If "false" then all requests
+         will block until the first searcher is done warming.
+      -->
+    <useColdSearcher>false</useColdSearcher>
+
+    <!-- Max Warming Searchers
+         
+         Maximum number of searchers that may be warming in the
+         background concurrently.  An error is returned if this limit
+         is exceeded.
+
+         Recommended values are 1-2 for read-only slaves, higher for
+         masters w/o cache warming.
+      -->
+    <maxWarmingSearchers>2</maxWarmingSearchers>
+
+  </query>
+
+
+  <!-- Request Dispatcher
+
+       This section contains instructions for how the SolrDispatchFilter
+       should behave when processing requests for this SolrCore.
+
+       handleSelect is a legacy option that affects the behavior of requests
+       such as /select?qt=XXX
+
+       handleSelect="true" will cause the SolrDispatchFilter to process
+       the request and dispatch the query to a handler specified by the 
+       "qt" param, assuming "/select" isn't already registered.
+
+       handleSelect="false" will cause the SolrDispatchFilter to
+       ignore "/select" requests, resulting in a 404 unless a handler
+       is explicitly registered with the name "/select"
+
+       handleSelect="true" is not recommended for new users, but is the default
+       for backwards compatibility
+    -->
+  <requestDispatcher handleSelect="false" >
+    <!-- Request Parsing
+
+         These settings indicate how Solr Requests may be parsed, and
+         what restrictions may be placed on the ContentStreams from
+         those requests
+
+         enableRemoteStreaming - enables use of the stream.file
+         and stream.url parameters for specifying remote streams.
+
+         multipartUploadLimitInKB - specifies the max size (in KiB) of
+         Multipart File Uploads that Solr will allow in a Request.
+         
+         formdataUploadLimitInKB - specifies the max size (in KiB) of
+         form data (application/x-www-form-urlencoded) sent via
+         POST. You can use POST to pass request parameters not
+         fitting into the URL.
+         
+         addHttpRequestToContext - if set to true, it will instruct
+         the requestParsers to include the original HttpServletRequest
+         object in the context map of the SolrQueryRequest under the 
+         key "httpRequest". It will not be used by any of the existing
+         Solr components, but may be useful when developing custom 
+         plugins.
+         
+         *** WARNING ***
+         The settings below authorize Solr to fetch remote files. You
+         should make sure your system has some authentication in place
+         before using enableRemoteStreaming="true".
+
+      -->
+    <requestParsers enableRemoteStreaming="true"
+                    multipartUploadLimitInKB="2048000"
+                    formdataUploadLimitInKB="2048"
+                    addHttpRequestToContext="false"/>
+
+    <!-- HTTP Caching
+
+         Set HTTP caching related parameters (for proxy caches and clients).
+
+         The options below instruct Solr not to output any HTTP Caching
+         related headers
+      -->
+    <httpCaching never304="true" />
+    <!-- If you include a <cacheControl> directive, it will be used to
+         generate a Cache-Control header (as well as an Expires header
+         if the value contains "max-age=")
+         
+         By default, no Cache-Control header is generated.
+         
+         You can use the <cacheControl> option even if you have set
+         never304="true"
+      -->
+    <!--
+       <httpCaching never304="true" >
+         <cacheControl>max-age=30, public</cacheControl> 
+       </httpCaching>
+      -->
+    <!-- To enable Solr to respond with automatically generated HTTP
+         Caching headers, and to response to Cache Validation requests
+         correctly, set the value of never304="false"
+         
+         This will cause Solr to generate Last-Modified and ETag
+         headers based on the properties of the Index.
+
+         The following options can also be specified to affect the
+         values of these headers...
+
+         lastModFrom - the default value is "openTime" which means the
+         Last-Modified value (and validation against If-Modified-Since
+         requests) will all be relative to when the current Searcher
+         was opened.  You can change it to lastModFrom="dirLastMod" if
+         you want the value to exactly correspond to when the physical
+         index was last modified.
+
+         etagSeed="..." is an option you can change to force the ETag
+         header (and validation against If-None-Match requests) to be
+         different even if the index has not changed (ie: when making
+         significant changes to your config file)
+
+         (lastModifiedFrom and etagSeed are both ignored if you use
+         the never304="true" option)
+      -->
+    <!--
+       <httpCaching lastModifiedFrom="openTime"
+                    etagSeed="Solr">
+         <cacheControl>max-age=30, public</cacheControl> 
+       </httpCaching>
+      -->
+  </requestDispatcher>
+
+  <!-- Request Handlers 
+
+       http://wiki.apache.org/solr/SolrRequestHandler
+
+       Incoming queries will be dispatched to a specific handler by name
+       based on the path specified in the request.
+
+       Legacy behavior: If the request path uses "/select" but no Request
+       Handler has that name, and if handleSelect="true" has been specified in
+       the requestDispatcher, then the Request Handler is dispatched based on
+       the qt parameter.  Handlers without a leading '/' are accessed
+       like so: http://host/app/[core/]select?qt=name  If no qt is
+       given, then the requestHandler that declares default="true" will be
+       used or the one named "standard".
+
+       If a Request Handler is declared with startup="lazy", then it will
+       not be initialized until the first request that uses it.
+
+    -->
+
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+   <!-- GIBS: Adding dataimport request handler -->
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+   <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
+     <lst name="defaults">
+       <str name="config">/Applications/solr-5.1.0/server/solr/product/conf/data-config.xml</str>
+     </lst>
+   </requestHandler>
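+
+   <!-- Illustrative sketch of a data-config.xml for the handler above. The
+        driver class matches the postgresql JDBC jar loaded via <lib> earlier,
+        but the URL, credentials, and query here are placeholder assumptions,
+        not values from this repository:
+
+        <dataConfig>
+          <dataSource driver="org.postgresql.Driver"
+                      url="jdbc:postgresql://localhost:5432/gibs"
+                      user="username" password="password"/>
+          <document>
+            <entity name="product" query="SELECT * FROM product_type_view"/>
+          </document>
+        </dataConfig>
+     -->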
+
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+   <!-- GIBS: Adding replication request handler -->
+   <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+
+   <!--
+   <requestHandler name="/replication" class="solr.ReplicationHandler" >
+     <lst name="master">
+       <str name="enable">${enable.master:false}</str>
+       <str name="replicateAfter">commit</str>
+       <str name="confFiles">schema.xml,stopwords.txt</str>
+    </lst>
+    <lst name="slave">
+       <str name="enable">${enable.slave:false}</str>
+      <str name="masterUrl">http://master_host:8983/solr</str>
+      <str name="pollInterval">00:00:60</str>
+    </lst>
+   </requestHandler>
+  -->
+
+  <!-- SearchHandler
+
+       http://wiki.apache.org/solr/SearchHandler
+
+       For processing Search Queries, the primary Request Handler
+       provided with Solr is "SearchHandler" It delegates to a sequent
+       of SearchComponents (see below) and supports distributed
+       queries across multiple shards
+    -->
+  <requestHandler name="/select" class="solr.SearchHandler">
+    <!-- default values for query parameters can be specified, these
+         will be overridden by parameters in the request
+      -->
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <int name="rows">10</int>
+      <!-- <str name="df">text</str> -->
+    </lst>
+    <!-- In addition to defaults, "appends" params can be specified
+         to identify values which should be appended to the list of
+         multi-val params from the query (or the existing "defaults").
+      -->
+    <!-- In this example, the param "fq=instock:true" would be appended to
+         any query time fq params the user may specify, as a mechanism for
+         partitioning the index, independent of any user selected filtering
+         that may also be desired (perhaps as a result of faceted searching).
+
+         NOTE: there is *absolutely* nothing a client can do to prevent these
+         "appends" values from being used, so don't use this mechanism
+         unless you are sure you always want it.
+      -->
+    <!--
+       <lst name="appends">
+         <str name="fq">inStock:true</str>
+       </lst>
+      -->
+    <!-- "invariants" are a way of letting the Solr maintainer lock down
+         the options available to Solr clients.  Any params values
+         specified here are used regardless of what values may be specified
+         in either the query, the "defaults", or the "appends" params.
+
+         In this example, the facet.field and facet.query params would
+         be fixed, limiting the facets clients can use.  Faceting is
+         not turned on by default - but if the client does specify
+         facet=true in the request, these are the only facets they
+         will be able to see counts for; regardless of what other
+         facet.field or facet.query params they may specify.
+
+         NOTE: there is *absolutely* nothing a client can do to prevent these
+         "invariants" values from being used, so don't use this mechanism
+         unless you are sure you always want it.
+      -->
+    <!--
+       <lst name="invariants">
+         <str name="facet.field">cat</str>
+         <str name="facet.field">manu_exact</str>
+         <str name="facet.query">price:[* TO 500]</str>
+         <str name="facet.query">price:[500 TO *]</str>
+       </lst>
+      -->
+    <!-- If the default list of SearchComponents is not desired, that
+         list can either be overridden completely, or components can be
+         prepended or appended to the default list.  (see below)
+      -->
+    <!--
+       <arr name="components">
+         <str>nameOfCustomComponent1</str>
+         <str>nameOfCustomComponent2</str>
+       </arr>
+      -->
+  </requestHandler>
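+
+  <!-- Example request against the handler above (hypothetical host and
+       query string; core name taken from this conf directory):
+       http://localhost:8983/solr/product/select?q=*:*&rows=10 -->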
+
+  <!-- A request handler that returns indented JSON by default -->
+  <requestHandler name="/query" class="solr.SearchHandler">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <str name="wt">json</str>
+      <str name="indent">true</str>
+    </lst>
+  </requestHandler>
+
+
+  <requestHandler name="/browse" class="solr.SearchHandler" useParams="query,facets,velocity,browse">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+    </lst>
+  </requestHandler>
+
+
+  <initParams path="/update/**,/query,/select,/tvrh,/elevate,/spell,/browse">
+    <lst name="defaults">
+      <str name="df">_text_</str>
+    </lst>
+  </initParams>
+
+  <initParams path="/update/**">
+    <lst name="defaults">
+      <str name="update.chain">add-unknown-fields-to-the-schema</str>
+    </lst>
+  </initParams>
+
+  <!-- Solr Cell Update Request Handler
+
+       http://wiki.apache.org/solr/ExtractingRequestHandler 
+
+    -->
+  <requestHandler name="/update/extract"
+                  startup="lazy"
+                  class="solr.extraction.ExtractingRequestHandler" >
+    <lst name="defaults">
+      <str name="lowernames">true</str>
+      <str name="fmap.meta">ignored_</str>
+      <str name="fmap.content">_text_</str>
+    </lst>
+  </requestHandler>
+
+  <!--
+    The export request handler is used to export full sorted result sets.
+    Do not change these defaults.
+  -->
+
+  <requestHandler name="/export" class="solr.SearchHandler">
+    <lst name="invariants">
+      <str name="rq">{!xport}</str>
+      <str name="wt">xsort</str>
+      <str name="distrib">false</str>
+    </lst>
+
+    <arr name="components">
+      <str>query</str>
+    </arr>
+  </requestHandler>
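+
+  <!-- Example request (hypothetical field names; /export requires an explicit
+       sort and fl over docValues fields):
+       http://localhost:8983/solr/product/export?q=*:*&sort=id+asc&fl=id -->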
+
+
+  <!--
+  Distributed Stream processing.
+  -->
+
+  <requestHandler name="/stream" class="solr.StreamHandler">
+    <lst name="invariants">
+      <str name="wt">json</str>
+      <str name="distrib">false</str>
+    </lst>
+  </requestHandler>
+
+
+
+  <!-- Field Analysis Request Handler
+
+       RequestHandler that provides much the same functionality as
+       analysis.jsp. Provides the ability to specify multiple field
+       types and field names in the same request and outputs
+       index-time and query-time analysis for each of them.
+
+       Request parameters are:
+       analysis.fieldname - field name whose analyzers are to be used
+
+       analysis.fieldtype - field type whose analyzers are to be used
+       analysis.fieldvalue - text for index-time analysis
+       q (or analysis.q) - text for query time analysis
+       analysis.showmatch (true|false) - When set to true and when
+           query analysis is performed, the produced tokens of the
+           field value analysis will be marked as "matched" for every
+           token that is produced by the query analysis
+   -->
+  <requestHandler name="/analysis/field"
+                  startup="lazy"
+                  class="solr.FieldAnalysisRequestHandler" />
+
+
+  <!-- Document Analysis Handler
+
+       http://wiki.apache.org/solr/AnalysisRequestHandler
+
+       An analysis handler that provides a breakdown of the analysis
+       process of provided documents. This handler expects a (single)
+       content stream with the following format:
+
+       <docs>
+         <doc>
+           <field name="id">1</field>
+           <field name="name">The Name</field>
+           <field name="text">The Text Value</field>
+         </doc>
+         <doc>...</doc>
+         <doc>...</doc>
+         ...
+       </docs>
+
+    Note: Each document must contain a field which serves as the
+    unique key. This key is used in the returned response to associate
+    an analysis breakdown to the analyzed document.
+
+    Like the FieldAnalysisRequestHandler, this handler also supports
+    query analysis by sending either an "analysis.query" or "q"
+    request parameter that holds the query text to be analyzed. It
+    also supports the "analysis.showmatch" parameter which when set to
+    true, all field tokens that match the query tokens will be marked
+    as a "match". 
+  -->
+  <requestHandler name="/analysis/document"
+                  class="solr.DocumentAnalysisRequestHandler"
+                  startup="lazy" />
+
+  <!-- Echo the request contents back to the client -->
+  <requestHandler name="/debug/dump" class="solr.DumpRequestHandler" >
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+      <str name="echoHandler">true</str>
+    </lst>
+  </requestHandler>
+
+  <!-- Search Components
+
+       Search components are registered to SolrCore and used by 
+       instances of SearchHandler (which can access them by name)
+       
+       By default, the following components are available:
+       
+       <searchComponent name="query"     class="solr.QueryComponent" />
+       <searchComponent name="facet"     class="solr.FacetComponent" />
+       <searchComponent name="mlt"       class="solr.MoreLikeThisComponent" />
+       <searchComponent name="highlight" class="solr.HighlightComponent" />
+       <searchComponent name="stats"     class="solr.StatsComponent" />
+       <searchComponent name="debug"     class="solr.DebugComponent" />
+   
+       Default configuration in a requestHandler would look like:
+
+       <arr name="components">
+         <str>query</str>
+         <str>facet</str>
+         <str>mlt</str>
+         <str>highlight</str>
+         <str>stats</str>
+         <str>debug</str>
+       </arr>
+
+       If you register a searchComponent to one of the standard names, 
+       that will be used instead of the default.
+
+       To insert components before or after the 'standard' components, use:
+    
+       <arr name="first-components">
+         <str>myFirstComponentName</str>
+       </arr>
+    
+       <arr name="last-components">
+         <str>myLastComponentName</str>
+       </arr>
+
+       NOTE: The component registered with the name "debug" will
+       always be executed after the "last-components" 
+       
+     -->
+
+  <!-- Spell Check
+
+       The spell check component can return a list of alternative spelling
+       suggestions.  
+
+       http://wiki.apache.org/solr/SpellCheckComponent
+    -->
+  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
+
+    <str name="queryAnalyzerFieldType">text_general</str>
+
+    <!-- Multiple "Spell Checkers" can be declared and used by this
+         component
+      -->
+
+    <!-- a spellchecker built from a field of the main index -->
+    <lst name="spellchecker">
+      <str name="name">default</str>
+      <str name="field">text</str>
+      <str name="classname">solr.DirectSolrSpellChecker</str>
+      <!-- the spellcheck distance measure used; the default is the internal levenshtein -->
+      <str name="distanceMeasure">internal</str>
+      <!-- minimum accuracy needed to be considered a valid spellcheck suggestion -->
+      <float name="accuracy">0.5</float>
+      <!-- the maximum #edits we consider when enumerating terms: can be 1 or 2 -->
+      <int name="maxEdits">2</int>
+      <!-- the minimum shared prefix when enumerating terms -->
+      <int name="minPrefix">1</int>
+      <!-- maximum number of inspections per result. -->
+      <int name="maxInspections">5</int>
+      <!-- minimum length of a query term to be considered for correction -->
+      <int name="minQueryLength">4</int>
+      <!-- maximum threshold of documents a query term can appear in to be considered for correction -->
+      <float name="maxQueryFrequency">0.01</float>
+      <!-- uncomment this to require suggestions to occur in 1% of the documents
+      	<float name="thresholdTokenFrequency">.01</float>
+      -->
+    </lst>
+
+    <!-- a spellchecker that can break or combine words.  See "/spell" handler below for usage -->
+    <lst name="spellchecker">
+      <str name="name">wordbreak</str>
+      <str name="classname">solr.WordBreakSolrSpellChecker</str>
+      <str name="field">name</str>
+      <str name="combineWords">true</str>
+      <str name="breakWords">true</str>
+      <int name="maxChanges">10</int>
+    </lst>
+
+    <!-- a spellchecker that uses a different distance measure -->
+    <!--
+       <lst name="spellchecker">
+         <str name="name">jarowinkler</str>
+         <str name="field">spell</str>
+         <str name="classname">solr.DirectSolrSpellChecker</str>
+         <str name="distanceMeasure">
+           org.apache.lucene.search.spell.JaroWinklerDistance
+         </str>
+       </lst>
+     -->
+
+    <!-- a spellchecker that uses an alternate comparator 
+
+         comparatorClass can be one of:
+          1. score (default)
+          2. freq (Frequency first, then score)
+          3. A fully qualified class name
+      -->
+    <!--
+       <lst name="spellchecker">
+         <str name="name">freq</str>
+         <str name="field">lowerfilt</str>
+         <str name="classname">solr.DirectSolrSpellChecker</str>
+         <str name="comparatorClass">freq</str>
+      -->
+
+    <!-- A spellchecker that reads the list of words from a file -->
+    <!--
+       <lst name="spellchecker">
+         <str name="classname">solr.FileBasedSpellChecker</str>
+         <str name="name">file</str>
+         <str name="sourceLocation">spellings.txt</str>
+         <str name="characterEncoding">UTF-8</str>
+         <str name="spellcheckIndexDir">spellcheckerFile</str>
+       </lst>
+      -->
+  </searchComponent>
+
+  <!-- A request handler for demonstrating the spellcheck component.  
+
+       NOTE: This is purely an example.  The whole purpose of the
+       SpellCheckComponent is to hook it into the request handler that
+       handles your normal user queries so that a separate request is
+       not needed to get suggestions.
+
+       IN OTHER WORDS, THERE IS REALLY GOOD CHANCE THE SETUP BELOW IS
+       NOT WHAT YOU WANT FOR YOUR PRODUCTION SYSTEM!
+       
+       See http://wiki.apache.org/solr/SpellCheckComponent for details
+       on the request parameters.
+    -->
+  <requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <!-- Solr will use suggestions from both the 'default' spellchecker
+           and from the 'wordbreak' spellchecker and combine them.
+           collations (re-written queries) can include a combination of
+           corrections from both spellcheckers -->
+      <str name="spellcheck.dictionary">default</str>
+      <str name="spellcheck.dictionary">wordbreak</str>
+      <str name="spellcheck">on</str>
+      <str name="spellcheck.extendedResults">true</str>
+      <str name="spellcheck.count">10</str>
+      <str name="spellcheck.alternativeTermCount">5</str>
+      <str name="spellcheck.maxResultsForSuggest">5</str>
+      <str name="spellcheck.collate">true</str>
+      <str name="spellcheck.collateExtendedResults">true</str>
+      <str name="spellcheck.maxCollationTries">10</str>
+      <str name="spellcheck.maxCollations">5</str>
+    </lst>
+    <arr name="last-components">
+      <str>spellcheck</str>
+    </arr>
+  </requestHandler>
+
+  <!-- Term Vector Component
+
+       http://wiki.apache.org/solr/TermVectorComponent
+    -->
+  <searchComponent name="tvComponent" class="solr.TermVectorComponent"/>
+
+  <!-- A request handler for demonstrating the term vector component
+
+       This is purely an example.
+
+       In reality you will likely want to add the component to your 
+       already specified request handlers. 
+    -->
+  <requestHandler name="/tvrh" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <bool name="tv">true</bool>
+    </lst>
+    <arr name="last-components">
+      <str>tvComponent</str>
+    </arr>
+  </requestHandler>
+
+  <!-- Clustering Component. (Omitted here. See the default Solr example for a typical configuration.) -->
+
+  <!-- Terms Component
+
+       http://wiki.apache.org/solr/TermsComponent
+
+       A component to return terms and document frequency of those
+       terms
+    -->
+  <searchComponent name="terms" class="solr.TermsComponent"/>
+
+  <!-- A request handler for demonstrating the terms component -->
+  <requestHandler name="/terms" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <bool name="terms">true</bool>
+      <bool name="distrib">false</bool>
+    </lst>
+    <arr name="components">
+      <str>terms</str>
+    </arr>
+  </requestHandler>
+
+
+  <!-- Query Elevation Component
+
+       http://wiki.apache.org/solr/QueryElevationComponent
+
+       a search component that enables you to configure the top
+       results for a given query regardless of the normal lucene
+       scoring.
+    -->
+  <searchComponent name="elevator" class="solr.QueryElevationComponent" >
+    <!-- pick a fieldType to analyze queries -->
+    <str name="queryFieldType">string</str>
+    <str name="config-file">elevate.xml</str>
+  </searchComponent>
+
+  <!-- A request handler for demonstrating the elevator component -->
+  <requestHandler name="/elevate" class="solr.SearchHandler" startup="lazy">
+    <lst name="defaults">
+      <str name="echoParams">explicit</str>
+    </lst>
+    <arr name="last-components">
+      <str>elevator</str>
+    </arr>
+  </requestHandler>
+
+  <!-- Highlighting Component
+
+       http://wiki.apache.org/solr/HighlightingParameters
+    -->
+  <searchComponent class="solr.HighlightComponent" name="highlight">
+    <highlighting>
+      <!-- Configure the standard fragmenter -->
+      <!-- This could most likely be commented out in the "default" case -->
+      <fragmenter name="gap"
+                  default="true"
+                  class="solr.highlight.GapFragmenter">
+        <lst name="defaults">
+          <int name="hl.fragsize">100</int>
+        </lst>
+      </fragmenter>
+
+      <!-- A regular-expression-based fragmenter 
+           (for sentence extraction) 
+        -->
+      <fragmenter name="regex"
+                  class="solr.highlight.RegexFragmenter">
+        <lst name="defaults">
+          <!-- slightly smaller fragsizes work better because of slop -->
+          <int name="hl.fragsize">70</int>
+          <!-- allow 50% slop on fragment sizes -->
+          <float name="hl.regex.slop">0.5</float>
+          <!-- a basic sentence pattern -->
+          <str name="hl.regex.pattern">[-\w ,/\n\&quot;&apos;]{20,200}</str>
+        </lst>
+      </fragmenter>
+
+      <!-- Configure the standard formatter -->
+      <formatter name="html"
+                 default="true"
+                 class="solr.highlight.HtmlFormatter">
+        <lst name="defaults">
+          <str name="hl.simple.pre"><![CDATA[<em>]]></str>
+          <str name="hl.simple.post"><![CDATA[</em>]]></str>
+        </lst>
+      </formatter>
+
+      <!-- Configure the standard encoder -->
+      <encoder name="html"
+               class="solr.highlight.HtmlEncoder" />
+
+      <!-- Configure the standard fragListBuilder -->
+      <fragListBuilder name="simple"
+                       class="solr.highlight.SimpleFragListBuilder"/>
+
+      <!-- Configure the single fragListBuilder -->
+      <fragListBuilder name="single"
+                       class="solr.highlight.SingleFragListBuilder"/>
+
+      <!-- Configure the weighted fragListBuilder -->
+      <fragListBuilder name="weighted"
+                       default="true"
+                       class="solr.highlight.WeightedFragListBuilder"/>
+
+      <!-- default tag FragmentsBuilder -->
+      <fragmentsBuilder name="default"
+                        default="true"
+                        class="solr.highlight.ScoreOrderFragmentsBuilder">
+        <!-- 
+        <lst name="defaults">
+          <str name="hl.multiValuedSeparatorChar">/</str>
+        </lst>
+        -->
+      </fragmentsBuilder>
+
+      <!-- multi-colored tag FragmentsBuilder -->
+      <fragmentsBuilder name="colored"
+                        class="solr.highlight.ScoreOrderFragmentsBuilder">
+        <lst name="defaults">
+          <str name="hl.tag.pre"><![CDATA[
+               <b style="background:yellow">,<b style="background:lawgreen">,
+               <b style="background:aquamarine">,<b style="background:magenta">,
+               <b style="background:palegreen">,<b style="background:coral">,
+               <b style="background:wheat">,<b style="background:khaki">,
+               <b style="background:lime">,<b style="background:deepskyblue">]]></str>
+          <str name="hl.tag.post"><![CDATA[</b>]]></str>
+        </lst>
+      </fragmentsBuilder>
+
+      <boundaryScanner name="default"
+                       default="true"
+                       class="solr.highlight.SimpleBoundaryScanner">
+        <lst name="defaults">
+          <str name="hl.bs.maxScan">10</str>
+          <str name="hl.bs.chars">.,!? &#9;&#10;&#13;</str>
+        </lst>
+      </boundaryScanner>
+
+      <boundaryScanner name="breakIterator"
+                       class="solr.highlight.BreakIteratorBoundaryScanner">
+        <lst name="defaults">
+          <!-- type should be one of CHARACTER, WORD (default), LINE, or SENTENCE -->
+          <str name="hl.bs.type">WORD</str>
+          <!-- language and country are used when constructing the Locale object, -->
+          <!-- which in turn is used to get an instance of BreakIterator -->
+          <str name="hl.bs.language">en</str>
+          <str name="hl.bs.country">US</str>
+        </lst>
+      </boundaryScanner>
+    </highlighting>
+  </searchComponent>
+
+  <!-- Update Processors
+
+       Chains of Update Processor Factories for dealing with Update
+       Requests can be declared, and then used by name in Update
+       Request Processors
+
+       http://wiki.apache.org/solr/UpdateRequestProcessor
+
+    -->
+  
+  <!-- Add unknown fields to the schema 
+  
+       An example field type guessing update processor that will
+       attempt to parse string-typed field values as Booleans, Longs,
+       Doubles, or Dates, and then add schema fields with the guessed
+       field types.  
+       
+       This requires that the schema is both managed and mutable, by
+       declaring schemaFactory as ManagedIndexSchemaFactory, with
+       mutable specified as true. 
+       
+       See http://wiki.apache.org/solr/GuessingFieldTypes
+    -->
+  <updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
+    <!-- UUIDUpdateProcessorFactory will generate an id if none is present in the incoming document -->
+    <processor class="solr.UUIDUpdateProcessorFactory" />
+
+    <processor class="solr.LogUpdateProcessorFactory"/>
+    <processor class="solr.DistributedUpdateProcessorFactory"/>
+    <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
+    <processor class="solr.FieldNameMutatingUpdateProcessorFactory">
+      <str name="pattern">[^\w-\.]</str>
+      <str name="replacement">_</str>
+    </processor>
+    <processor class="solr.ParseBooleanFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseLongFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseDoubleFieldUpdateProcessorFactory"/>
+    <processor class="solr.ParseDateFieldUpdateProcessorFactory">
+      <arr name="format">
+        <str>yyyy-MM-dd'T'HH:mm:ss.SSSZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss,SSSZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss.SSS</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss,SSS</str>
+        <str>yyyy-MM-dd'T'HH:mm:ssZ</str>
+        <str>yyyy-MM-dd'T'HH:mm:ss</str>
+        <str>yyyy-MM-dd'T'HH:mmZ</str>
+        <str>yyyy-MM-dd'T'HH:mm</str>
+        <str>yyyy-MM-dd HH:mm:ss.SSSZ</str>
+        <str>yyyy-MM-dd HH:mm:ss,SSSZ</str>
+        <str>yyyy-MM-dd HH:mm:ss.SSS</str>
+        <str>yyyy-MM-dd HH:mm:ss,SSS</str>
+        <str>yyyy-MM-dd HH:mm:ssZ</str>
+        <str>yyyy-MM-dd HH:mm:ss</str>
+        <str>yyyy-MM-dd HH:mmZ</str>
+        <str>yyyy-MM-dd HH:mm</str>
+        <str>yyyy-MM-dd</str>
+      </arr>
+    </processor>
+    <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
+      <str name="defaultFieldType">strings</str>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Boolean</str>
+        <str name="fieldType">booleans</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.util.Date</str>
+        <str name="fieldType">tdates</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Long</str>
+        <str name="valueClass">java.lang.Integer</str>
+        <str name="fieldType">tlongs</str>
+      </lst>
+      <lst name="typeMapping">
+        <str name="valueClass">java.lang.Number</str>
+        <str name="fieldType">tdoubles</str>
+      </lst>
+    </processor>
+    <processor class="solr.RunUpdateProcessorFactory"/>
+  </updateRequestProcessorChain>
+
+  <!-- Deduplication
+
+       An example dedup update processor that creates the "id" field
+       on the fly based on the hash code of some other fields.  This
+       example has overwriteDupes set to false since we are using the
+       id field as the signatureField and Solr will maintain
+       uniqueness based on that anyway.  
+       
+    -->
+  <!--
+     <updateRequestProcessorChain name="dedupe">
+       <processor class="solr.processor.SignatureUpdateProcessorFactory">
+         <bool name="enabled">true</bool>
+         <str name="signatureField">id</str>
+         <bool name="overwriteDupes">false</bool>
+         <str name="fields">name,features,cat</str>
+         <str name="signatureClass">solr.processor.Lookup3Signature</str>
+       </processor>
+       <processor class="solr.LogUpdateProcessorFactory" />
+       <processor class="solr.RunUpdateProcessorFactory" />
+     </updateRequestProcessorChain>
+    -->
+
+  <!-- Language identification
+
+       This example update chain identifies the language of the incoming
+       documents using the langid contrib. The detected language is
+       written to field language_s. No field name mapping is done.
+       The fields used for detection are text, title, subject and description,
+       making this example suitable for detecting languages from full-text
+       rich documents injected via ExtractingRequestHandler.
+       See more about langId at http://wiki.apache.org/solr/LanguageDetection
+    -->
+  <!--
+   <updateRequestProcessorChain name="langid">
+     <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
+       <str name="langid.fl">text,title,subject,description</str>
+       <str name="langid.langField">language_s</str>
+       <str name="langid.fallback">en</str>
+     </processor>
+     <processor class="solr.LogUpdateProcessorFactory" />
+     <processor class="solr.RunUpdateProcessorFactory" />
+   </updateRequestProcessorChain>
+  -->
+
+  <!-- Script update processor
+
+    This example hooks in an update processor implemented using JavaScript.
+
+    See more about the script update processor at http://wiki.apache.org/solr/ScriptUpdateProcessor
+  -->
+  <!--
+    <updateRequestProcessorChain name="script">
+      <processor class="solr.StatelessScriptUpdateProcessorFactory">
+        <str name="script">update-script.js</str>
+        <lst name="params">
+          <str name="config_param">example config parameter</str>
+        </lst>
+      </processor>
+      <processor class="solr.RunUpdateProcessorFactory" />
+    </updateRequestProcessorChain>
+  -->
+
+  <!-- Response Writers
+
+       http://wiki.apache.org/solr/QueryResponseWriter
+
+       Request responses will be written using the writer specified by
+       the 'wt' request parameter matching the name of a registered
+       writer.
+
+       The "default" writer is the default and will be used if 'wt' is
+       not specified in the request.
+    -->
+  <!-- The following response writers are implicitly configured unless
+       overridden...
+    -->
+  <!--
+     <queryResponseWriter name="xml" 
+                          default="true"
+                          class="solr.XMLResponseWriter" />
+     <queryResponseWriter name="json" class="solr.JSONResponseWriter"/>
+     <queryResponseWriter name="python" class="solr.PythonResponseWriter"/>
+     <queryResponseWriter name="ruby" class="solr.RubyResponseWriter"/>
+     <queryResponseWriter name="php" class="solr.PHPResponseWriter"/>
+     <queryResponseWriter name="phps" class="solr.PHPSerializedResponseWriter"/>
+     <queryResponseWriter name="csv" class="solr.CSVResponseWriter"/>
+     <queryResponseWriter name="schema.xml" class="solr.SchemaXmlResponseWriter"/>
+    -->
+
+  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
+    <!-- For the purposes of the tutorial, JSON responses are written as
+     plain text so that they are easy to read in *any* browser.
+     If you expect a MIME type of "application/json" just remove this override.
+    -->
+    <str name="content-type">text/plain; charset=UTF-8</str>
+  </queryResponseWriter>
+
+  <!--
+     Custom response writers can be declared as needed...
+    -->
+  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy">
+    <str name="template.base.dir">${velocity.template.base.dir:}</str>
+  </queryResponseWriter>
+
+  <!-- XSLT response writer transforms the XML output by any xslt file found
+       in Solr's conf/xslt directory.  Changes to xslt files are checked for
+       every xsltCacheLifetimeSeconds.  
+    -->
+  <queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
+    <int name="xsltCacheLifetimeSeconds">5</int>
+  </queryResponseWriter>
+
+  <!-- Query Parsers
+
+       http://wiki.apache.org/solr/SolrQuerySyntax
+
+       Multiple QParserPlugins can be registered by name, and then
+       used in either the "defType" param for the QueryComponent (used
+       by SearchHandler) or in LocalParams
+    -->
+  <!-- example of registering a query parser -->
+  <!--
+     <queryParser name="myparser" class="com.mycompany.MyQParserPlugin"/>
+    -->
+
+  <!-- Function Parsers
+
+       http://wiki.apache.org/solr/FunctionQuery
+
+       Multiple ValueSourceParsers can be registered by name, and then
+       used as function names when using the "func" QParser.
+    -->
+  <!-- example of registering a custom function parser  -->
+  <!--
+     <valueSourceParser name="myfunc" 
+                        class="com.mycompany.MyValueSourceParser" />
+    -->
+
+
+  <!-- Document Transformers
+       http://wiki.apache.org/solr/DocTransformers
+    -->
+  <!--
+     Could be something like:
+     <transformer name="db" class="com.mycompany.LoadFromDatabaseTransformer" >
+       <int name="connection">jdbc://....</int>
+     </transformer>
+     
+     To add a constant value to all docs, use:
+     <transformer name="mytrans2" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
+       <int name="value">5</int>
+     </transformer>
+     
+     If you want the user to still be able to change it with _value:something_ use this:
+     <transformer name="mytrans3" class="org.apache.solr.response.transform.ValueAugmenterFactory" >
+       <double name="defaultValue">5</double>
+     </transformer>
+
+      If you are using the QueryElevationComponent, you may wish to mark documents that get boosted.  The
+      EditorialMarkerFactory will do exactly that:
+     <transformer name="qecBooster" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
+    -->
+
+
+  <!-- Legacy config for the admin interface -->
+  <admin>
+    <defaultQuery>*:*</defaultQuery>
+  </admin>
+
+</config>
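
For reference, the "/spell" handler registered above can be exercised over plain HTTP once the core is loaded. Below is a minimal Python 2 sketch; the Solr host, port, and core name ("dataset") are assumptions for illustration, not part of this commit.

    # Query the /spell handler defined in the solrconfig.xml above.
    # Host, port, and core name are assumptions, not part of this commit.
    import json
    import urllib
    import urllib2

    params = urllib.urlencode({
        'q': 'oceanograpy',   # deliberately misspelled query term
        'wt': 'json',
    })
    response = urllib2.urlopen('http://localhost:8983/solr/dataset/spell?' + params)
    suggestions = json.loads(response.read())
    print suggestions.get('spellcheck', {}).get('suggestions', [])

Because spellcheck.collate is enabled, the same response also carries re-written ("collated") queries that combine corrections from the 'default' and 'wordbreak' dictionaries.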

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product_type/conf/data-config.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product_type/conf/data-config.xml b/src/main/solr/product_type/conf/data-config.xml
new file mode 100644
index 0000000..93cd49c
--- /dev/null
+++ b/src/main/solr/product_type/conf/data-config.xml
@@ -0,0 +1,124 @@
+<!--*******************************************************************************************************************-->
+<!-- GIBS: product_type dataimport data-config.xml                                                                     -->
+<!-- url="jdbc:postgresql://localhost:8888/gibs"                                                                       -->
+<!-- user="gibs"                                                                                                       -->
+<!--*******************************************************************************************************************-->
+
+<dataConfig>
+
+    <dataSource 
+        driver="org.postgresql.Driver" 
+        url="jdbc:postgresql://localhost/twright" 
+        user="twright" />
+
+    <document>
+
+        <entity name="product_type"
+            query="select * from product_type_view"
+            transformer="RegexTransformer">
+
+            <field column="product_type_dataset_id_list"                        name="product_type_dataset_id_list"                   splitBy = "," />
+            <field column="product_type_dataset_version_list"                   name="product_type_dataset_version_list"              splitBy = "," />
+            <field column="product_type_dataset_description_list"               name="product_type_dataset_description_list"          splitBy = "," />
+            <field column="product_type_dataset_long_name_list"                 name="product_type_dataset_long_name_list"            splitBy = "," />
+            <field column="product_type_dataset_short_name_list"                name="product_type_dataset_short_name_list"           splitBy = "," />
+            <field column="product_type_dataset_metadata_endpoint_list"         name="product_type_dataset_metadata_endpoint_list"    splitBy = "," />
+            <field column="product_type_dataset_metadata_registry_list"         name="product_type_dataset_metadata_registry_list"    splitBy = "," />
+            <field column="product_type_dataset_remote_dataset_id_list"         name="product_type_dataset_remote_dataset_id_list"    splitBy = "," />
+
+            <field column="product_type_resource_version_list"                  name="product_type_resource_version_list"             splitBy = "," />
+            <field column="product_type_resource_type_list"                     name="product_type_resource_type_list"                splitBy = "," />
+            <field column="product_type_resource_name_list"                     name="product_type_resource_name_list"                splitBy = "," />
+            <field column="product_type_resource_path_list"                     name="product_type_resource_path_list"                splitBy = "," />
+            <field column="product_type_resource_description_list"              name="product_type_resource_description_list"         splitBy = "," />
+
+            <field column="product_type_coverage_version_list"                  name="product_type_coverage_version_list"             splitBy = "," />
+            <field column="product_type_coverage_east_longitude_list"           name="product_type_coverage_east_longitude_list"      splitBy = "," />
+            <field column="product_type_coverage_west_longitude_list"           name="product_type_coverage_west_longitude_list"      splitBy = "," />
+            <field column="product_type_coverage_stop_time_list"                name="product_type_coverage_stop_time_list"           splitBy = "," />
+            <field column="product_type_coverage_stop_time_string_list"         name="product_type_coverage_stop_time_string_list"    splitBy = "," />
+            <field column="product_type_coverage_start_time_list"               name="product_type_coverage_start_time_list"          splitBy = "," />
+            <field column="product_type_coverage_start_time_string_list"        name="product_type_coverage_start_time_string_list"   splitBy = "," />
+
+            <field column="product_type_generation_version_list"                name="product_type_generation_version_list"               splitBy = "," />
+            <field column="product_type_generation_mrf_block_size_list"         name="product_type_generation_mrf_block_size_list"        splitBy = "," />
+            <field column="product_type_generation_output_sizex_list"           name="product_type_generation_output_sizex_list"          splitBy = "," />
+            <field column="product_type_generation_output_sizey_list"           name="product_type_generation_output_sizey_list"          splitBy = "," />
+            <field column="product_type_generation_overview_levels_list"        name="product_type_generation_overview_levels_list"       splitBy = "," />
+            <field column="product_type_generation_overview_resample_list"      name="product_type_generation_overview_resample_list"     splitBy = "," />
+            <field column="product_type_generation_overview_scale_list"         name="product_type_generation_overview_scale_list"        splitBy = "," />
+            <field column="product_type_generation_reprojection_resample_list"  name="product_type_generation_reprojection_resample_list" splitBy = "," />
+            <field column="product_type_generation_resize_resample_list"        name="product_type_generation_resize_resample_list"       splitBy = "," />
+            <field column="product_type_generation_vrt_nodata_list"             name="product_type_generation_vrt_nodata_list"            splitBy = "," />
+
+            <field column="product_type_policy_version_list"                    name="product_type_policy_version_list"                   splitBy = "," />
+            <field column="product_type_policy_access_type_list"                name="product_type_policy_access_type_list"               splitBy = "," />
+            <field column="product_type_policy_access_constraint_list"          name="product_type_policy_access_constraint_list"         splitBy = "," />
+            <field column="product_type_policy_use_constraint_list"             name="product_type_policy_use_constraint_list"            splitBy = "," />
+            <field column="product_type_policy_base_path_append_type_list"      name="product_type_policy_base_path_append_type_list"     splitBy = "," />
+            <field column="product_type_policy_checksum_type_list"              name="product_type_policy_checksum_type_list"             splitBy = "," />
+            <field column="product_type_policy_compress_type_list"              name="product_type_policy_compress_type_list"             splitBy = "," />
+            <field column="product_type_policy_data_class_list"                 name="product_type_policy_data_class_list"                splitBy = "," />
+            <field column="product_type_policy_data_format_list"                name="product_type_policy_data_format_list"               splitBy = "," />
+            <field column="product_type_policy_spatial_type_list"               name="product_type_policy_spatial_type_list"              splitBy = "," />
+            <field column="product_type_policy_data_duration_list"              name="product_type_policy_data_duration_list"             splitBy = "," />
+            <field column="product_type_policy_data_frequency_list"             name="product_type_policy_data_frequency_list"            splitBy = "," />
+            <field column="product_type_policy_data_latency_list"               name="product_type_policy_data_latency_list"              splitBy = "," />
+            <field column="product_type_policy_data_volume_list"                name="product_type_policy_data_volume_list"               splitBy = "," />
+            <field column="product_type_policy_delivery_rate_list"              name="product_type_policy_delivery_rate_list"             splitBy = "," />
+            <field column="product_type_policy_multi_day_list"                  name="product_type_policy_multi_day_list"                 splitBy = "," />
+            <field column="product_type_policy_multi_day_link_list"             name="product_type_policy_multi_day_link_list"            splitBy = "," />
+
+            <field column="product_type_location_policy_version_list"           name="product_type_location_policy_version_list"          splitBy = "," />
+            <field column="product_type_location_policy_type_list"              name="product_type_location_policy_type_list"             splitBy = "," />
+            <field column="product_type_location_policy_access_base_path_list"  name="product_type_location_policy_access_base_path_list" splitBy = "," />
+
+            <field column="product_type_provider_resource_version_list"         name="product_type_provider_resource_version_list"        splitBy = "," />
+            <field column="product_type_provider_resource_description_list"     name="product_type_provider_resource_description_list"    splitBy = "," />
+            <field column="product_type_provider_resource_name_list"            name="product_type_provider_resource_name_list"           splitBy = "," />
+            <field column="product_type_provider_resource_path_list"            name="product_type_provider_resource_path_list"           splitBy = "," />
+            <field column="product_type_provider_resource_type_list"            name="product_type_provider_resource_type_list"           splitBy = "," />
+
+            <field column="product_type_provider_contact_version_list"          name="product_type_provider_contact_version_list"         splitBy = "," />
+            <field column="product_type_provider_contact_role_list"             name="product_type_provider_contact_role_list"            splitBy = "," />
+            <field column="product_type_provider_contact_first_name_list"       name="product_type_provider_contact_first_name_list"      splitBy = "," />
+            <field column="product_type_provider_contact_last_name_list"        name="product_type_provider_contact_last_name_list"       splitBy = "," />
+            <field column="product_type_provider_contact_middle_name_list"      name="product_type_provider_contact_middle_name_list"     splitBy = "," />
+            <field column="product_type_provider_contact_address_list"          name="product_type_provider_contact_address_list"         splitBy = "," />
+            <field column="product_type_provider_contact_notify_type_list"      name="product_type_provider_contact_notify_type_list"     splitBy = "," />
+            <field column="product_type_provider_contact_email_list"            name="product_type_provider_contact_email_list"           splitBy = "," />
+            <field column="product_type_provider_contact_phone_list"            name="product_type_provider_contact_phone_list"           splitBy = "," />
+            <field column="product_type_provider_contact_fax_list"              name="product_type_provider_contact_fax_list"             splitBy = "," />
+
+            <field column="product_type_element_version_list"                   name="product_type_element_version_list"                  splitBy = "," />
+            <field column="product_type_element_obligation_flag_list"           name="product_type_element_obligation_flag_list"          splitBy = "," />
+            <field column="product_type_element_scope_list"                     name="product_type_element_scope_list"                    splitBy = "," />
+
+            <field column="product_type_element_dd_version_list"                name="product_type_element_dd_version_list"               splitBy = "," />
+            <field column="product_type_element_dd_type_list"                   name="product_type_element_dd_type_list"                  splitBy = "," />
+            <field column="product_type_element_dd_description_list"            name="product_type_element_dd_description_list"           splitBy = "," />
+            <field column="product_type_element_dd_scope_list"                  name="product_type_element_dd_scope_list"                 splitBy = "," />
+            <field column="product_type_element_dd_long_name_list"              name="product_type_element_dd_long_name_list"             splitBy = "," />
+            <field column="product_type_element_dd_short_name_list"             name="product_type_element_dd_short_name_list"            splitBy = "," />
+            <field column="product_type_element_dd_max_length_list"             name="product_type_element_dd_max_length_list"            splitBy = "," />
+
+            <field column="product_type_datetime_version_list"                  name="product_type_datetime_version_list"                 splitBy = "," />
+            <field column="product_type_datetime_value_list"                    name="product_type_datetime_value_list"                   splitBy = "," />
+            <field column="product_type_datetime_value_string_list"             name="product_type_datetime_value_string_list"            splitBy = "," />
+
+            <field column="product_type_character_version_list"                 name="product_type_character_version_list"                splitBy = "," />
+            <field column="product_type_character_value_list"                   name="product_type_character_value_list"                  splitBy = "," />
+
+            <field column="product_type_integer_version_list"                   name="product_type_integer_version_list"                  splitBy = "," />
+            <field column="product_type_integer_value_list"                     name="product_type_integer_value_list"                    splitBy = "," />
+            <field column="product_type_integer_units_list"                     name="product_type_integer_units_list"                    splitBy = "," />
+
+            <field column="product_type_real_version_list"                      name="product_type_real_version_list"                     splitBy = "," />
+            <field column="product_type_real_value_list"                        name="product_type_real_value_list"                       splitBy = "," />
+            <field column="product_type_real_units_list"                        name="product_type_real_units_list"                       splitBy = "," />
+
+        </entity>
+
+    </document>
+
+</dataConfig>
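
The view behind this configuration is flattened: each multi-valued attribute arrives as a single comma-delimited column, and the RegexTransformer's splitBy turns it back into a Solr multi-valued field. Assuming a "/dataimport" handler is registered for this core (a common DataImportHandler setup, though not shown in this commit), a full import can be triggered as follows:

    # Sketch only: kick off a DataImportHandler full import for the
    # product_type core. Host, port, core, and handler path are assumptions.
    import urllib2

    url = ('http://localhost:8983/solr/product_type/dataimport'
           '?command=full-import&clean=true&commit=true')
    print urllib2.urlopen(url).read()

On completion the handler rewrites dataimport.properties (shown next) with the new last_index_time.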

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product_type/conf/dataimport.properties
----------------------------------------------------------------------
diff --git a/src/main/solr/product_type/conf/dataimport.properties b/src/main/solr/product_type/conf/dataimport.properties
new file mode 100644
index 0000000..6bc576a
--- /dev/null
+++ b/src/main/solr/product_type/conf/dataimport.properties
@@ -0,0 +1,3 @@
+#Wed Nov 04 19:56:43 UTC 2015
+product_type.last_index_time=2015-11-04 19\:56\:41
+last_index_time=2015-11-04 19\:56\:41


[12/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/isoresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/isoresponse.py b/src/main/python/libraries/edge/opensearch/isoresponse.py
new file mode 100644
index 0000000..70254c1
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/isoresponse.py
@@ -0,0 +1,38 @@
+import logging
+
+import xml.dom.minidom
+
+from jinja2 import Environment
+
+from edge.opensearch.response import Response
+
+class IsoResponse(Response):
+    def __init__(self):
+        super(IsoResponse, self).__init__()
+        self.env = Environment()
+        self.env.trim_blocks = True
+        self.env.autoescape = True
+        self.variables = {}
+        # initialized here so addNamespace/removeNamespace below cannot
+        # raise AttributeError on a fresh instance
+        self.namespaces = {}
+
+    def setTemplate(self, template):
+        self.template = self.env.from_string(template)
+
+    def addNamespace(self, name, uri):
+        self.namespaces[name] = uri
+
+    def removeNamespace(self, name):
+        del self.namespaces[name]
+
+    def generate(self, pretty=False):
+        logging.debug('IsoResponse.generate is called.')
+        
+        if pretty:
+            try:
+                isoStr = self.template.render(self.variables).encode('utf-8').replace('\n', '')
+            except Exception as e:
+                logging.debug("Problem generating ISO " + str(e))
+                del self.variables['doc']
+                isoStr = self.template.render(self.variables).encode('utf-8').replace('\n', '')
+            document = xml.dom.minidom.parseString(isoStr)
+            return document.toprettyxml()
+        else:
+            return self.template.render(self.variables).replace('\n', '')
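
IsoResponse is a thin wrapper around a Jinja2 template: callers install a template string, fill self.variables, and call generate(). A minimal usage sketch (the template text and variable are hypothetical; jinja2 must be importable):

    # Sketch: render a one-line template through IsoResponse.
    from edge.opensearch.isoresponse import IsoResponse

    response = IsoResponse()
    response.setTemplate('<gmd:title>{{ title }}</gmd:title>')
    response.variables['title'] = 'Sample dataset'
    print response.generate(pretty=False)

With pretty=True the rendered string is round-tripped through xml.dom.minidom, so the template must produce well-formed XML for that path to succeed.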

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/isoresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/isoresponsebysolr.py b/src/main/python/libraries/edge/opensearch/isoresponsebysolr.py
new file mode 100644
index 0000000..fd9090b
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/isoresponsebysolr.py
@@ -0,0 +1,121 @@
+import json
+import logging
+
+from edge.opensearch.isoresponse import IsoResponse
+from datetime import date, datetime
+
+class IsoResponseBySolr(IsoResponse):
+    def __init__(self):
+        super(IsoResponseBySolr, self).__init__()
+
+    def generate(self, solrDatasetResponse, solrGranuleResponse = None, pretty=False):
+        self._populate(solrDatasetResponse, solrGranuleResponse)
+        return super(IsoResponseBySolr, self).generate(pretty)
+
+    def _populate(self, solrDatasetResponse, solrGranuleResponse = None):
+        if solrDatasetResponse is not None:
+            solrJson = json.loads(solrDatasetResponse)
+
+            logging.debug('dataset count: '+str(len(solrJson['response']['docs'])))
+
+            if len(solrJson['response']['docs']) == 1:
+                # ok now populate variables!
+                doc = solrJson['response']['docs'][0]
+
+                self.variables['doc'] = doc
+                
+                # Format dates
+                try:
+                    self.variables['DatasetCitation_ReleaseDate'] = date.fromtimestamp(float(doc['DatasetCitation-ReleaseDateLong'][0]) / 1000).strftime('%Y%m%d')
+                    self.variables['DatasetCoverage_StartTime'] = self._convertTimeLongToISO(doc['DatasetCoverage-StartTimeLong'][0])
+                    self.variables['DatasetCoverage_StopTime'] = self._convertTimeLongToISO(doc['DatasetCoverage-StopTimeLong'][0])
+                except Exception:
+                    # release/coverage dates are optional; leave them unset
+                    pass
+                
+                try:
+                    # Create list of unique dataset sensor
+                    self.variables['UniqueDatasetSensor'] = {}
+                    for i, x in enumerate(doc['DatasetSource-Sensor-ShortName']):
+                        self.variables['UniqueDatasetSensor'][x] = i
+                    self.variables['UniqueDatasetSensor'] = self.variables['UniqueDatasetSensor'].values()
+                    
+                    # Create list of unique dataset source
+                    self.variables['UniqueDatasetSource'] = {}
+                    for i, x in enumerate(doc['DatasetSource-Source-ShortName']):
+                        self.variables['UniqueDatasetSource'][x] = i
+                    self.variables['UniqueDatasetSource'] = self.variables['UniqueDatasetSource'].values()
+                    
+                    # Replace all none, None values with empty string
+                    doc['DatasetParameter-VariableDetail'] = [self._filterString(variableDetail) for variableDetail in doc['DatasetParameter-VariableDetail']]
+                    
+                    # Current date
+                    self.variables['DateStamp'] = datetime.utcnow().strftime('%Y%m%d')
+                    
+                    # Data format version
+                    self.variables['DatasetPolicy_DataFormat_Version'] = self._getDataFormatVersion(doc['DatasetPolicy-DataFormat'][0])
+                except Exception as e:
+                    logging.debug("Problem generating ISO " + str(e))
+                    del self.variables['doc']
+                
+                if solrGranuleResponse is not None:
+                    solrGranuleJson = json.loads(solrGranuleResponse)
+                    
+                    logging.debug('granule count: '+str(len(solrGranuleJson['response']['docs'])))
+                    
+                    for doc in solrGranuleJson['response']['docs']:
+                        self._populateItem(solrGranuleResponse, doc, None)
+                        
+                        doc['Granule-StartTimeLong'][0] = self._convertTimeLongToISO(doc['Granule-StartTimeLong'][0])
+                        doc['Granule-StopTimeLong'][0] = self._convertTimeLongToISO(doc['Granule-StopTimeLong'][0])
+                        doc['Granule-ArchiveTimeLong'][0] = self._convertTimeLongToISO(doc['Granule-ArchiveTimeLong'][0])
+                        doc['Granule-CreateTimeLong'][0] = self._convertTimeLongToISO(doc['Granule-CreateTimeLong'][0])
+                        
+                        # Create dictionary for bounding box extent
+                        '''
+                        if ('GranuleReal-Value' in doc and 'GranuleReal-DatasetElement-Element-ShortName' in doc):
+                            self.variables['GranuleBoundingBox'] = dict(zip(doc['GranuleReal-DatasetElement-Element-ShortName'], doc['GranuleReal-Value']))
+                        '''
+                        if 'GranuleSpatial-NorthLat' in doc and 'GranuleSpatial-EastLon' in doc and 'GranuleSpatial-SouthLat' in doc and 'GranuleSpatial-WestLon' in doc:
+                            self.variables['GranuleBoundingBox'] = dict([('southernmostLatitude', doc['GranuleSpatial-SouthLat'][0]), 
+                                                              ('northernmostLatitude', doc['GranuleSpatial-NorthLat'][0]),
+                                                              ('westernmostLongitude', doc['GranuleSpatial-WestLon'][0]),
+                                                              ('easternmostLongitude', doc['GranuleSpatial-EastLon'][0])])
+                        # only the first granule is examined for the bounding box
+                        break
+                        
+                    self.variables['granules'] = solrGranuleJson['response']['docs']
+                
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
+    
+    def _convertTimeLongToISO(self, time):
+        isoTime = ''
+        try:
+            isoTime = datetime.utcfromtimestamp(float(time) / 1000).isoformat() + 'Z'
+        except ValueError:
+            pass
+        return isoTime
+    
+    def _filterString(self, value):
+        if value.lower() == 'none':
+            return ''
+        else:
+            return value
+    
+    def _getDataFormatVersion(self, dataFormat):
+        version = ''
+        if dataFormat == 'NETCDF':
+            version = 3
+        elif dataFormat == 'HDF':
+            version = 4
+        else:
+            try:
+                version = int(dataFormat[-1])
+            except (IndexError, ValueError):
+                # format string is empty or does not end in a digit
+                pass
+        return version
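
The two helpers at the bottom are easiest to understand by example: times arrive from Solr as epoch milliseconds and leave as ISO 8601 UTC strings, while the data-format "version" is a convention baked into this class (NETCDF maps to 3, HDF to 4, anything else to its trailing digit). Values below are illustrative:

    # Sketch of the helper conversions above (values are illustrative).
    from edge.opensearch.isoresponsebysolr import IsoResponseBySolr

    r = IsoResponseBySolr()
    print r._convertTimeLongToISO(1446666000000)   # '2015-11-04T19:40:00Z'
    print r._getDataFormatVersion('NETCDF')        # 3
    print r._getDataFormatVersion('HDF')           # 4
    print r._getDataFormatVersion('HDF5')          # 5 (trailing digit)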

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/response.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/response.py b/src/main/python/libraries/edge/opensearch/response.py
new file mode 100644
index 0000000..6b68921
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/response.py
@@ -0,0 +1,12 @@
+import logging
+
+from xml.dom.minidom import Document
+import xml.sax.saxutils
+
+class Response(object):
+    def __init__(self):
+        self.searchBasePath = '/ws/search/'
+        self.metadataBasePath = '/ws/metadata/'
+
+    def generate(self, pretty=False):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/responsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/responsebysolr.py b/src/main/python/libraries/edge/opensearch/responsebysolr.py
new file mode 100644
index 0000000..eb01661
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/responsebysolr.py
@@ -0,0 +1,67 @@
+import json
+
+from edge.opensearch.response import Response
+
+class ResponseBySolr(Response):
+    def __init__(self):
+        super(ResponseBySolr, self).__init__()
+        # _populate below appends to these, so they must exist up front
+        self.variables = []
+        self.items = []
+
+    def generate(self, solrResponse):
+        self._populate(solrResponse)
+        return super(ResponseBySolr, self).generate()
+
+    def _populate(self, solrResponse):
+        #response.title = 'OCSI Dataset Search: '+searchText
+        #response.description = 'Search result for "'+searchText+'"'
+        #response.link = searchUrl
+        self._populateChannel(solrResponse)
+
+        if solrResponse is None:
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'totalResults', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'startIndex', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'itemsPerPage', 'value': 1}
+            )
+            item = [
+                {'name': 'title', 'value': 'Error'},
+                {'name': 'description', 'value': 'error'}
+            ]
+            self.items.append(item)
+        else:
+            #logging.debug(solrResponse)
+            solrJson = json.loads(solrResponse)
+
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'totalResults', 'value': solrJson['response']['numFound']}
+            )
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'startIndex', 'value': solrJson['response']['start']}
+            )
+            self.variables.append(
+                {'namespace': 'openSearch', 'name': 'itemsPerPage', 'value': solrJson['responseHeader']['params']['rows']}
+            )
+
+            for doc in solrJson['response']['docs']:
+                """
+                item = [
+                    {'name': 'title', 'value': doc['Dataset-LongName'][0]},
+                    {'name': 'description', 'value': doc['Dataset-Description'][0]},
+                    {'name': 'link', 'value': self._configuration.get('portal', 'datasetUrl')+'/'+doc['Dataset-ShortName'][0]}
+                ]
+                """
+                item = []
+                for docKey in doc.keys():
+                    item.append({'namespace': 'podaac', 'name': docKey, 'value': doc[docKey]})
+
+                self._populateItem(solrResponse, doc, item)
+                self.items.append(item)
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
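
The pagination variables are read straight from the standard Solr JSON envelope; a hand-built response makes the mapping explicit:

    # Sketch: the Solr JSON fields _populate reads for the OpenSearch
    # pagination elements (response below is a hand-built stand-in).
    import json

    solrResponse = json.dumps({
        'responseHeader': {'params': {'rows': '10'}},
        'response': {'numFound': 42, 'start': 0, 'docs': []},
    })
    solrJson = json.loads(solrResponse)
    print solrJson['response']['numFound']               # totalResults: 42
    print solrJson['response']['start']                  # startIndex: 0
    print solrJson['responseHeader']['params']['rows']   # itemsPerPage: '10'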

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/responsewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/responsewriter.py b/src/main/python/libraries/edge/opensearch/responsewriter.py
new file mode 100644
index 0000000..2277c65
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/responsewriter.py
@@ -0,0 +1,142 @@
+import logging
+import math
+import urllib
+
+import requestresponder
+from edge.httputility import HttpUtility
+
+class ResponseWriter(requestresponder.RequestResponder):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(ResponseWriter, self).__init__(configFilePath)
+        if requiredParams is None:
+            requiredParams = []
+        self.requiredParams = requiredParams
+        self.searchParameters = {}
+        self.pretty = True
+        self.variables = {}
+    
+    def get(self, requestHandler):
+        super(ResponseWriter, self).get(requestHandler)
+        #check required parameters
+        for paramList in self.requiredParams:
+            countParamNotFound = 0
+            for param in paramList:
+                try:
+                    requestHandler.get_argument(param)
+                except Exception:
+                    # tornado raises when a required argument is absent
+                    countParamNotFound += 1
+            if countParamNotFound == len(paramList):
+                raise Exception("One of the following parameters is required: " + ', '.join(paramList))
+    
+    def _constructSingleSolrDatasetQuery(self, variables):
+        queries = []
+        for key, value in variables.iteritems():
+            # Only key used for ISO granule record is dataset
+            if key == 'datasetId':
+                query = 'Dataset-PersistentId:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&fq=DatasetPolicy-AccessType-Full:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)+AND+DatasetPolicy-ViewOnline:Y&version=2.2&rows=1&indent=on&wt=json'
+        logging.debug('solr query: '+query)
+        
+        return query
+    
+    def _getSingleSolrDatasetResponse(self, variables, callback):
+        query = self._constructSingleSolrDatasetQuery(variables)
+        url = self._configuration.get('solr', 'datasetUrl')
+
+        httpUtility = HttpUtility()
+        return httpUtility.getResponse(url+'/select/?'+query, callback)
+    
+    def _urlEncodeSolrQueryValue(self, value):
+        return urllib.quote('"'+value+'"')
+    
+    def _constructBoundingBoxQuery(self, value):
+        coords = value.split(",")
+        if len(coords) < 4:
+            return None
+        try:
+            west = float(coords[0])
+            south = float(coords[1])
+            east = float(coords[2])
+            north = float(coords[3])
+            
+            centerY = (south + north) / 2
+            halfHeight = math.fabs(north - south) / 2
+            
+            #Check if we need to split box into two
+            if (east < west):
+                west1 = west
+                east1 = 180.0
+                
+                centerX1 = (west1 + east1) / 2
+                halfWidth1 = math.fabs(east1 - west1) / 2
+                
+                west2 = -180.0
+                east2 = east
+                
+                centerX2 = (west2 + east2) / 2
+                halfWidth2 = math.fabs(east2 - west2) / 2
+                
+                return "fq={!frange+l=1+u=2}map(sum(" + self._solrSeparatingXAxisFunctionQueryAggregate(centerX1, halfWidth1, centerX2, halfWidth2) + "," + self._solrSeparatingYAxisFunctionQuery(centerY, halfHeight) + "),0,0,1,0)&fq=CenterY:*"
+            else:
+                centerX = (west + east) / 2
+                halfWidth = math.fabs(east - west) / 2
+
+                return "fq={!frange+l=1+u=2}map(sum(" + self._solrSeparatingXAxisFunctionQuery(centerX, halfWidth) + "," + self._solrSeparatingYAxisFunctionQuery(centerY, halfHeight) + "),0,0,1,0)&fq=CenterY:*"
+        except ValueError:
+            # non-numeric coordinate values
+            return None
+
+    def _solrSeparatingXAxisFunctionQuery(self, center, width):
+        return "map(sum(" + self._solrSeparatingAxisFunction(center, "CenterX1", width, "HalfWidth1", 360) + "," + self._solrSeparatingXAxisFunctionQueryPossibleNullWidth(center, width) + "),0,1,0,1)"
+
+    def _solrSeparatingXAxisFunctionQueryAggregate(self, center1, width1, center2, width2):
+        return "map(sum(" + self._solrSeparatingAxisFunction(center1, "CenterX1", width1, "HalfWidth1", 360) + "," + self._solrSeparatingXAxisFunctionQueryPossibleNullWidth(center1, width1) + "," + self._solrSeparatingAxisFunction(center2, "CenterX1", width2, "HalfWidth1", 360) + "," + self._solrSeparatingXAxisFunctionQueryPossibleNullWidth(center2, width2) + "),0,3,0,1)"
+
+    def _solrSeparatingYAxisFunctionQuery(self, center, height):
+        return self._solrSeparatingAxisFunction(center, "CenterY", height, "HalfHeight", 180)
+
+    def _solrSeparatingAxisFunction(self, center, centerVar, length, lengthVar, span):
+        return "map(sub(abs(sub(%.15g,%s)),sum(%.15g,%s)),0,%i,1,0)" % (center, centerVar, length, lengthVar, span)
+
+    def _solrSeparatingXAxisFunctionQueryPossibleNullWidth(self, center, length):
+        return "map(sum(map(HalfWidth2,0,0,1,0),map(sub(abs(sub(%.15g,CenterX2)),sum(%.15g,HalfWidth2)),0,360,1,0)),0,0,0,1)" % (center, length)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        pass
+    
+    def _onSolrResponse(self, response):
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            self._writeResponse(response.body)
+    
+    def _writeResponse(self, responseText):
+        searchText = ''
+        if 'keyword' in self.variables:
+            searchText = self.variables['keyword']
+        try:
+            openSearchResponse = self._generateOpenSearchResponse(
+                responseText,
+                searchText,
+                self._configuration.get('service', 'url') + self.requestHandler.request.path,
+                self.searchParameters,
+                self.pretty
+            )
+            self.requestHandler.set_header("Content-Type", "application/xml")
+            self.requestHandler.write(openSearchResponse)
+            self.requestHandler.finish()
+        except BaseException as exception:
+            self._handleException(str(exception))
+    
+    def _handleException(self, error):
+        self.requestHandler.set_status(404)
+        self.requestHandler.write(error)
+        self.requestHandler.finish()
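
The bounding-box logic above splits any box that crosses the antimeridian (east < west) into two half-boxes at +/-180 degrees, then expresses each as a center/half-width pair for the Solr frange function queries. The arithmetic for a crossing box, worked out in isolation:

    # Worked example of the box-splitting arithmetic in
    # _constructBoundingBoxQuery for west=170, south=-10, east=-170, north=10.
    import math

    west, south, east, north = 170.0, -10.0, -170.0, 10.0
    # east < west, so the box is split at the antimeridian into two halves
    west1, east1 = west, 180.0
    west2, east2 = -180.0, east
    print (west1 + east1) / 2, math.fabs(east1 - west1) / 2   # 175.0 5.0
    print (west2 + east2) / 2, math.fabs(east2 - west2) / 2   # -175.0 5.0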

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/rssresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/rssresponse.py b/src/main/python/libraries/edge/opensearch/rssresponse.py
new file mode 100644
index 0000000..d36a109
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/rssresponse.py
@@ -0,0 +1,126 @@
+import logging
+
+from xml.dom.minidom import Document
+import xml.sax.saxutils
+
+from edge.opensearch.response import Response
+
+class RssResponse(Response):
+    def __init__(self):
+        super(RssResponse, self).__init__()
+        self.namespaces = {
+            'opensearch': 'http://a9.com/-/spec/opensearch/1.1/',
+            'podaac': 'http://podaac.jpl.nasa.gov/opensearch/',
+            'georss': 'http://www.georss.org/georss',
+            'gml': 'http://www.opengis.net/gml',
+            'time': 'http://a9.com/-/opensearch/extensions/time/1.0/',
+            'atom': 'http://www.w3.org/2005/Atom'
+        }
+
+        self.title = None
+        self.link = None
+        self.description = None
+        self.variables = []
+        self.items = []
+        self.parameters = {}
+
+    def addNamespace(self, name, uri):
+        self.namespaces[name] = uri
+
+    def removeNamespace(self, name):
+        del self.namespaces[name]
+
+    def generate(self, pretty=False):
+        logging.debug('RssResponse.generate is called.')
+
+        document = Document()
+        rss = document.createElement('rss')
+        rss.setAttribute('version', '2.0')
+        for namespace in self.namespaces.keys():
+            rss.setAttribute('xmlns:'+namespace, self.namespaces[namespace])
+        document.appendChild(rss)
+
+        channel = document.createElement('channel')
+        rss.appendChild(channel)
+
+        title = document.createElement('title')
+        channel.appendChild(title)
+        title.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.title)))
+
+        description = document.createElement('description')
+        channel.appendChild(description)
+        description.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.description)))
+
+        link = document.createElement('link')
+        channel.appendChild(link)
+        link.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.link)))
+
+        for variable in self.variables:
+            '''
+            elementName = variable['name']
+            if 'namespace' in variable:
+                elementName = variable['namespace']+':'+elementName
+
+            variableElement = document.createElement(elementName)
+            channel.appendChild(variableElement)
+            variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(variable['value']))))
+            '''
+            self._createNode(document, variable, channel)
+
+        for item in self.items:
+            itemElement = document.createElement('item')
+            channel.appendChild(itemElement)
+
+            for itemEntry in item:
+                self._createNode(document, itemEntry, itemElement)
+                '''
+                elementName = itemEntry['name']
+                if 'namespace' in itemEntry:
+                    elementName = itemEntry['namespace']+':'+elementName
+
+                variableElement = document.createElement(elementName)
+                itemElement.appendChild(variableElement)
+
+                value = itemEntry['value']
+                if isinstance(value, list):
+                    if len(value) > 1:
+                        for valueEntry in value:
+                            valueName = 'value'
+                            if 'namespace' in itemEntry:
+                                valueName = itemEntry['namespace']+':'+valueName
+                            valueElement = document.createElement(valueName)
+                            variableElement.appendChild(valueElement)
+                            valueElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(valueEntry))))
+                    else:
+                        variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value[0]))))
+                elif isinstance(value, dict):
+                    for key in value.keys():
+                        valueName = key
+                        if 'namespace' in itemEntry:
+                            valueName = itemEntry['namespace']+':'+valueName
+                        valueElement = document.createElement(valueName)
+                        variableElement.appendChild(valueElement)
+                        valueElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value[key]))))
+                else:
+                    variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value))))
+                '''
+        return document.toprettyxml() if pretty else document.toxml('utf-8') 
+
+    def _createNode(self, document, itemEntry, itemElement):
+        elementName = itemEntry['name']
+        if 'namespace' in itemEntry:
+            elementName = itemEntry['namespace']+':'+elementName
+        variableElement = document.createElement(elementName)
+        itemElement.appendChild(variableElement)
+        if 'value' in itemEntry:
+            value = itemEntry['value']
+            if isinstance(value, list):
+                for valueEntry in value:
+                    self._createNode(document, valueEntry, variableElement)
+            elif isinstance(value, dict):
+                self._createNode(document, value, variableElement)
+            else:
+                variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value))))
+        if 'attribute' in itemEntry:
+            for attr in itemEntry['attribute'].keys():
+                variableElement.setAttribute(attr, itemEntry['attribute'][attr])

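For orientation, a minimal usage sketch of the RssResponse class above; the title, link, and item values are invented for illustration:

    from edge.opensearch.rssresponse import RssResponse

    response = RssResponse()
    response.title = 'Granule Search Results'              # hypothetical values
    response.link = 'http://localhost:8890/ws/search/granule'
    response.description = 'Search result for "sst"'
    response.variables.append(
        {'namespace': 'opensearch', 'name': 'totalResults', 'value': 1})
    response.items.append([
        {'name': 'title', 'value': 'Granule 1'},
        {'namespace': 'georss', 'name': 'box', 'value': '-90 -180 90 180'}
    ])
    print(response.generate(pretty=True))                  # RSS 2.0 document

Each entry may also carry a nested 'value' (list or dict) or an 'attribute' dict, which _createNode renders recursively.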
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/rssresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/rssresponsebysolr.py b/src/main/python/libraries/edge/opensearch/rssresponsebysolr.py
new file mode 100644
index 0000000..fffe234
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/rssresponsebysolr.py
@@ -0,0 +1,134 @@
+import json
+import urllib
+
+from edge.opensearch.rssresponse import RssResponse
+from collections import defaultdict
+
+class RssResponseBySolr(RssResponse):
+    def __init__(self):
+        super(RssResponseBySolr, self).__init__()
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(RssResponseBySolr, self).generate(pretty)
+
+    def _populate(self, solrResponse):
+        # Channel title, description, and link are populated by subclasses
+        # via _populateChannel.
+        self._populateChannel(solrResponse)
+
+        if solrResponse is None:
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': 1}
+            )
+            self.parameters['startIndex'] = 0
+            url = self.link + '?' + urllib.urlencode(self.parameters)
+            self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': url, 'rel': 'self', 'type': 'application/rss+xml'}})
+            self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': url, 'rel': 'first', 'type': 'application/rss+xml'}})
+            item = [
+                {'name': 'title', 'value': 'Error'},
+                {'name': 'description', 'value': 'error'}
+            ]
+            self.items.append(item)
+        else:
+            #logging.debug(solrResponse)
+            solrJson = json.loads(solrResponse)
+            numFound = int(solrJson['response']['numFound'])
+            start = int(solrJson['response']['start'])
+            rows = int(solrJson['responseHeader']['params']['rows'])
+
+            self.parameters['startIndex'] = start
+            self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'self', 'type': 'application/rss+xml'}})
+            self.parameters['startIndex'] = 0
+            self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'first', 'type': 'application/rss+xml'}})
+            if start > 0:
+                if (start - rows > 0):
+                    self.parameters['startIndex'] = start - rows
+                self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'previous', 'type': 'application/rss+xml'}})
+            if start + rows < numFound:
+                self.parameters['startIndex'] = start + rows
+                self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'next', 'type': 'application/rss+xml'}})
+            
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': solrJson['response']['numFound']}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': solrJson['response']['start']}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': solrJson['responseHeader']['params']['rows']}
+            )
+
+            for doc in solrJson['response']['docs']:
+                item = []
+                self._populateItem(solrResponse, doc, item)
+                self.items.append(item)
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
+    
+    def _populateItemWithPodaacMetadata(self, doc, item, multiValuedElementsKeys):
+        ignoreElementsEndingWith = ('-Full', '-Long')
+        multiValuedElements = defaultdict(list)
+        for docKey in doc.keys():
+            if docKey.startswith(multiValuedElementsKeys):
+                multiValuedElements[docKey.split('-', 1)[0]].append(docKey)
+            elif not docKey.endswith(ignoreElementsEndingWith):
+                if len(doc[docKey]) > 1:
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(docKey), 'value': [{'namespace': 'podaac', 'name': 'value', 'value': x} for x in doc[docKey]]})
+                else:
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(docKey), 'value': doc[docKey][0]})
+        for multiValuedKey in multiValuedElements:
+            for i, x in enumerate(doc[multiValuedElements[multiValuedKey][0]]):
+                values = []
+                for key in multiValuedElements[multiValuedKey]:
+                    if not key.endswith(ignoreElementsEndingWith):
+                        values.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(key.split('-', 1)[1]), 'value': doc[key][i]})
+                item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(multiValuedKey), 'value': values})
+
+    def _camelCaseStripHyphen(self, key):
+        #special case to remove duplicate element, contact from element tag
+        key = key.replace('-Element-', '', 1).replace('Contact-', '', 1)
+        return key[0].lower() + key[1:].replace('-', '')

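RssResponseBySolr defers channel and item population to subclasses through _populateChannel and _populateItem. A hypothetical subclass sketch (the Solr field names are invented):

    from edge.opensearch.rssresponsebysolr import RssResponseBySolr

    class GranuleRssResponse(RssResponseBySolr):
        def _populateChannel(self, solrResponse):
            self.variables.append(
                {'namespace': 'podaac', 'name': 'datasetId', 'value': 'PODAAC-EXAMPLE'})

        def _populateItem(self, solrResponse, doc, item):
            # Map one Solr doc onto RSS item entries.
            item.append({'name': 'title', 'value': doc['Granule-Name'][0]})
            self._populateItemWithPodaacMetadata(doc, item, ('GranuleArchive-',))

Note that _camelCaseStripHyphen turns a Solr field such as 'Dataset-ShortName' into the element name 'datasetShortName'.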
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/solrcmrtemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/solrcmrtemplateresponse.py b/src/main/python/libraries/edge/opensearch/solrcmrtemplateresponse.py
new file mode 100644
index 0000000..0dbcebb
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/solrcmrtemplateresponse.py
@@ -0,0 +1,243 @@
+import datetime
+import pycurl
+from StringIO import StringIO
+import json
+import logging
+import urllib
+import os.path
+
+from edge.opensearch.templateresponse import TemplateResponse
+
+class SolrCmrTemplateResponse(TemplateResponse):
+    def __init__(self, configuration, link, parameters):
+        super(SolrCmrTemplateResponse, self).__init__()
+        self._configuration = configuration
+        self.link = link
+        self.parameters = parameters
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(SolrCmrTemplateResponse, self).generate(pretty)
+
+    def _get_cmr_response(self, url, cmr):
+
+        # Execute the curl request and parse the JSON response. Failures are
+        # recorded as an 'errors' dict so the code below handles both cases.
+        buffer = StringIO()
+        c = pycurl.Curl()
+        try:
+            c.setopt(c.URL, url)
+            c.setopt(c.WRITEDATA, buffer)
+            c.perform()
+        except pycurl.error as e:
+            output = {'errors': ['curl failed: %s' % str(e)]}
+        else:
+            try:
+                output = json.loads(buffer.getvalue())
+            except ValueError:
+                output = {'errors': ['json loads failed: [%s]' % buffer.getvalue()]}
+        finally:
+            c.close()
+
+        # Format the output if there are errors.
+        response = {}
+        if 'errors' in output:
+            response['entry'] = []
+            response['updated'] = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+            response['id'] = url
+            if url.find('collections') != -1:
+                response['title'] = 'ECHO dataset metadata'
+            elif url.find('granules') != -1:
+                response['title'] = 'ECHO granule metadata'
+            response['errors'] = output['errors'][0]
+        else:
+            response = output
+
+        try:
+            if 'errors' in response:
+                return response
+
+            if 'feed' not in response:
+                raise ValueError('no "feed" in the cmr response')
+            if 'entry' not in response['feed']:
+                raise ValueError('no "entry" in the cmr response')
+            if 'updated' not in response['feed']:
+                raise ValueError('no "updated" key in the cmr response')
+            if 'id' not in response['feed']:
+                raise ValueError('no "id" key in the cmr response')
+            if 'title' not in response['feed']:
+                raise ValueError('no "title" key in the cmr response')
+
+            # Create lists if they do not exist.
+            if 'cmr_search_updated' not in cmr:
+                cmr['cmr_search_updated'] = []
+            if 'cmr_search_url' not in cmr:
+                cmr['cmr_search_url'] = []
+            if 'cmr_search_title' not in cmr:
+                cmr['cmr_search_title'] = []
+
+            cmr['cmr_search_updated'].append(response['feed']['updated'])
+            cmr['cmr_search_url'].append(response['feed']['id'])
+            cmr['cmr_search_title'].append(response['feed']['title'])
+
+            # Create one 'cmr_<key>' list per key of the first feed entry.
+            if response['feed']['entry'] != []:
+                entry = response['feed']['entry'][0]
+                for key in entry:
+                    keyname = 'cmr_%s' % key
+                    if keyname not in cmr:
+                        cmr[keyname] = []
+                    cmr[keyname].append(entry[key])
+
+        except ValueError as e:
+            logging.error('Error! parse error: %s.' % e)
+
+        return cmr
+
+    def _populate(self, solrResponse):
+        self.variables['link'] = self.link
+
+        start = 0
+        rows = 0
+        numFound = 0
+        
+        if solrResponse is not None:
+            solrJson = json.loads(solrResponse)
+
+            logging.debug('Total doc count: '+str(solrJson['response']['numFound']))
+            logging.debug('Total item count: '+ str(len(solrJson['response']['docs'])))
+
+            #----------------------------------------------------------------------------------------------
+            # CMR: Processing 
+            #----------------------------------------------------------------------------------------------
+            cmr_total_time = datetime.timedelta(0)
+            cmr_total_count = 0
+
+            for i in range(len(solrJson['response']['docs'])):
+
+                doc = solrJson['response']['docs'][i]
+                logging.debug('doc[id]: '+str(doc['id']))
+
+                #------------------------------------------------------------------------------------------
+                # CMR: Initialize 
+                #------------------------------------------------------------------------------------------
+                cmr = {}
+
+                #------------------------------------------------------------------------------------------
+                # CMR: PRODUCT_TYPE
+                #------------------------------------------------------------------------------------------
+
+                if 'product_type_dataset_short_name_list' in doc:
+
+                    for shortName in doc['product_type_dataset_short_name_list']:
+
+                        cmr_search_url = 'https://cmr.earthdata.nasa.gov/search/collections.json?keyword=' + \
+                                         shortName
+
+                        logging.debug('cmr search url: ' + cmr_search_url)
+                        time1 = datetime.datetime.now()
+                        cmr = self._get_cmr_response(cmr_search_url, cmr)
+                        time2 = datetime.datetime.now()
+                        time3 = time2 - time1
+                        logging.debug('cmr search time: ' + str(time3) + ' 1 product_type')
+
+                        cmr_total_time = cmr_total_time + time3
+                        cmr_total_count = cmr_total_count + 1
+                        
+                #------------------------------------------------------------------------------------------
+                # CMR: PRODUCT (Only search when the query contains 'id' - ie individual cmr search)
+                #------------------------------------------------------------------------------------------
+
+                elif 'product_granule_remote_granule_ur_list' in doc and \
+                     'id' in self.parameters:
+
+                    for granuleUr in doc['product_granule_remote_granule_ur_list']:
+
+                        cmr_search_url = 'https://cmr.earthdata.nasa.gov/search/granules.json?granule_ur[]=' + \
+                                         granuleUr
+
+                        logging.debug('cmr search url: ' + cmr_search_url)
+                        time1 = datetime.datetime.now()
+
+                        cmr = self._get_cmr_response(cmr_search_url, cmr)
+
+                        time2 = datetime.datetime.now()
+                        time3 = time2 - time1
+                        logging.debug('cmr search time: ' + str(time3) + ' 1 product')
+
+                        cmr_total_time = cmr_total_time + time3
+                        cmr_total_count = cmr_total_count + 1
+
+                #------------------------------------------------------------------------------------------
+                # CMR: Docs not containing CMR search related criteria
+                #------------------------------------------------------------------------------------------
+                else:
+                    continue
+
+                #------------------------------------------------------------------------------------------
+                # CMR: Post-processing
+                #------------------------------------------------------------------------------------------
+                if cmr != {}:
+                    doc.update(cmr)
+                    #print json.dumps(cmr, indent=4)
+                else:
+                    logging.debug('***>>> cmr was not set')
+                #------------------------------------------------------------------------------------------
+                # CMR: End
+                #------------------------------------------------------------------------------------------
+
+            self.variables['docs'] = solrJson['response']['docs']
+            self.variables['numFound'] = solrJson['response']['numFound']
+            self.variables['itemsPerPage'] = solrJson['responseHeader']['params']['rows']
+            self.variables['startIndex'] = solrJson['response']['start']
+            self.variables['updated'] = datetime.datetime.utcnow().isoformat() + 'Z'
+            
+            start = int(solrJson['response']['start'])
+            rows = int(solrJson['responseHeader']['params']['rows'])
+            numFound = int(solrJson['response']['numFound'])
+
+            logging.debug('Num docs found: ' + str(self.variables['numFound']))
+            logging.debug('Items per page: ' + str(self.variables['itemsPerPage']))
+
+            # Show CMR statistics.
+            cmr_type = ''
+            if os.path.basename(self.variables['link']) == 'product_type':
+               cmr_type = 'dataset'
+            elif os.path.basename(self.variables['link']) == 'product':
+               cmr_type = 'granule'
+            if (cmr_type == 'dataset' or cmr_type == 'granule') and cmr_total_count != 0:
+                cmr_average_time = cmr_total_time/cmr_total_count
+                cmr_stats_msg = 'CMR search results: %s total over %d %s(s), %s average per %s' \
+                                % (str(cmr_total_time), cmr_total_count, cmr_type, str(cmr_average_time), cmr_type)
+                logging.debug(cmr_stats_msg)
+           
+            if 'facet_counts' in solrJson:
+                self.variables['facets'] = solrJson['facet_counts']
+
+        self.parameters['startIndex'] = start
+        self.variables['myself'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        
+        if rows != 0:
+            self.parameters['startIndex'] = numFound - (numFound % rows)
+        self.variables['last'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        
+        self.parameters['startIndex'] = 0
+        self.variables['first'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        if start > 0:
+            if (start - rows > 0):
+                self.parameters['startIndex'] = start - rows
+            self.variables['prev'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+            
+        if start + rows < numFound:
+            self.parameters['startIndex'] = start + rows
+            self.variables['next'] = self.link + '?' + urllib.urlencode(self.parameters, True)

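On a successful search, _get_cmr_response extends three bookkeeping lists plus one 'cmr_<key>' list per key of the first feed entry, and _populate merges the result into the Solr doc. A sketch of the accumulated shape (values invented):

    cmr = {
        'cmr_search_updated': ['2017-10-27T22:41:38.000Z'],
        'cmr_search_url': ['https://cmr.earthdata.nasa.gov/search/collections.json?keyword=EXAMPLE'],
        'cmr_search_title': ['ECHO dataset metadata'],
        'cmr_id': ['C1234-EXAMPLE'],
        'cmr_title': ['Example Collection']
    }
    # doc.update(cmr) then exposes these lists to the response template.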
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/solrtemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/solrtemplateresponse.py b/src/main/python/libraries/edge/opensearch/solrtemplateresponse.py
new file mode 100644
index 0000000..952220f
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/solrtemplateresponse.py
@@ -0,0 +1,65 @@
+import datetime
+import json
+import logging
+import urllib
+
+from edge.opensearch.templateresponse import TemplateResponse
+
+class SolrTemplateResponse(TemplateResponse):
+    def __init__(self, configuration, link, parameters):
+        super(SolrTemplateResponse, self).__init__()
+        self._configuration = configuration
+        self.link = link
+        self.parameters = parameters
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(SolrTemplateResponse, self).generate(pretty)
+
+    def _populate(self, solrResponse):
+        self.variables['link'] = self.link
+        self.variables['parameters'] = self.parameters
+
+        start = 0
+        rows = 0
+        numFound = 0
+        
+        if solrResponse is not None:
+            solrJson = json.loads(solrResponse)
+            
+            logging.debug('doc count: '+str(len(solrJson['response']['docs'])))
+            
+            self.variables['docs'] = solrJson['response']['docs']
+            self.variables['numFound'] = solrJson['response']['numFound']
+            self.variables['itemsPerPage'] = solrJson['responseHeader']['params']['rows']
+            self.variables['startIndex'] = solrJson['response']['start']
+            
+            self.variables['updated'] = datetime.datetime.utcnow().isoformat() + 'Z'
+            
+            start = int(solrJson['response']['start'])
+            rows = int(solrJson['responseHeader']['params']['rows'])
+            numFound = int(solrJson['response']['numFound'])
+
+            logging.debug('numFound: ' + str(self.variables['numFound']))
+            logging.debug('itemsPerPage: ' + str(self.variables['itemsPerPage']))
+        
+            if 'facet_counts' in solrJson:
+                self.variables['facets'] = solrJson['facet_counts']
+
+        self.parameters['startIndex'] = start
+        self.variables['myself'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        
+        if rows != 0:
+            self.parameters['startIndex'] = numFound - (numFound % rows)
+        self.variables['last'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        
+        self.parameters['startIndex'] = 0
+        self.variables['first'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        if start > 0:
+            if (start - rows > 0):
+                self.parameters['startIndex'] = start - rows
+            self.variables['prev'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+            
+        if start + rows < numFound:
+            self.parameters['startIndex'] = start + rows
+            self.variables['next'] = self.link + '?' + urllib.urlencode(self.parameters, True)

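The trailing block computes the startIndex carried by each navigation link. A worked example, assuming start=20, rows=10, numFound=95:

    start, rows, numFound = 20, 10, 95
    lastIndex = numFound - (numFound % rows)               # 90
    firstIndex = 0
    prevIndex = start - rows if start - rows > 0 else 0    # 10
    nextIndex = start + rows                               # 30, emitted since 30 < 95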
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/templateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/templateresponse.py b/src/main/python/libraries/edge/opensearch/templateresponse.py
new file mode 100644
index 0000000..70b6211
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/templateresponse.py
@@ -0,0 +1,33 @@
+import logging
+
+import xml.dom.minidom
+from jinja2 import Environment
+
+from edge.dateutility import DateUtility
+from edge.opensearch.response import Response
+
+class TemplateResponse(Response):
+    def __init__(self):
+        super(TemplateResponse, self).__init__()
+        self.env = Environment()
+        self.env.trim_blocks = True
+        self.env.autoescape = True
+        self.variables = {}
+        self.env.filters['convertISOTime'] = DateUtility.convertISOTime
+
+    def setTemplate(self, template):
+        self.template = self.env.from_string(template)
+
+    def generate(self, pretty=False):
+        logging.debug('TemplateResponse.generate is called.')
+        
+        if pretty:
+            try:
+                xmlStr = self.template.render(self.variables).encode('utf-8').replace('\n', '')
+            except Exception as e:
+                logging.debug("Problem generating template " + str(e))
+                xmlStr = self.template.render({}).encode('utf-8').replace('\n', '')
+            document = xml.dom.minidom.parseString(xmlStr)
+            return document.toprettyxml()
+        else:
+            return self.template.render(self.variables).replace('\n', '')

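A minimal sketch of the template workflow, using an invented template string:

    from edge.opensearch.templateresponse import TemplateResponse

    response = TemplateResponse()
    response.setTemplate('<feed><title>{{ title }}</title></feed>')
    response.variables['title'] = 'Example Feed'
    print(response.generate(pretty=True))   # pretty-printed via xml.dom.minidom

Note that the pretty branch strips newlines before re-parsing, so the rendered template must be well-formed XML.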
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/response/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/response/__init__.py b/src/main/python/libraries/edge/response/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/response/estemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/response/estemplateresponse.py b/src/main/python/libraries/edge/response/estemplateresponse.py
new file mode 100644
index 0000000..0bb443c
--- /dev/null
+++ b/src/main/python/libraries/edge/response/estemplateresponse.py
@@ -0,0 +1,54 @@
+import datetime
+import json
+import logging
+import urllib
+
+from edge.opensearch.templateresponse import TemplateResponse
+
+class ESTemplateResponse(TemplateResponse):
+    def __init__(self, link='', parameters=None, defaultItemsPerPage=0):
+        super(ESTemplateResponse, self).__init__()
+        self.link = link
+        # Avoid a shared mutable default argument.
+        self.parameters = parameters if parameters is not None else {}
+        self.defaultItemsPerPage = defaultItemsPerPage
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(ESTemplateResponse, self).generate(pretty)
+
+    def _populate(self, response):
+        start = 0
+        rows = 0
+        numFound = 0
+
+        self.variables['parameters'] = self.parameters
+
+        if response is not None:
+            solrJson = json.loads(response, strict = False)
+            self.variables['docs'] = solrJson['hits']['hits']
+            self.variables['numFound'] = int(solrJson['hits']['total'])
+            self.variables['itemsPerPage'] = int(self.parameters['itemsPerPage']) if 'itemsPerPage' in self.parameters else self.defaultItemsPerPage
+            self.variables['startIndex'] = int(self.parameters['startIndex']) if 'startIndex' in self.parameters else 0
+
+            start = self.variables['startIndex']
+            rows = self.variables['itemsPerPage']
+            numFound = self.variables['numFound']
+
+
+        self.parameters['startIndex'] = start
+        self.variables['myself'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        if rows != 0:
+            self.parameters['startIndex'] = numFound - (numFound % rows)
+        self.variables['last'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        self.parameters['startIndex'] = 0
+        self.variables['first'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        if start > 0:
+            if (start - rows > 0):
+                self.parameters['startIndex'] = start - rows
+            self.variables['prev'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        if start + rows < numFound:
+            self.parameters['startIndex'] = start + rows
+            self.variables['next'] = self.link + '?' + urllib.urlencode(self.parameters, True)

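ESTemplateResponse reads only a few fields from the classic Elasticsearch hits envelope; a sketch of the minimum response body it expects (values invented, and 'total' assumed to be a plain integer as in pre-7.x Elasticsearch):

    import json

    es_body = json.dumps({
        'hits': {
            'total': 1,                                                 # becomes numFound
            'hits': [{'_id': 'doc-1', '_source': {'name': 'example'}}]  # becomes docs
        }
    })
    # ESTemplateResponse(link, parameters).generate(es_body) then paginates
    # from the startIndex/itemsPerPage request parameters rather than the body.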
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/response/jsontemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/response/jsontemplateresponse.py b/src/main/python/libraries/edge/response/jsontemplateresponse.py
new file mode 100644
index 0000000..e2da786
--- /dev/null
+++ b/src/main/python/libraries/edge/response/jsontemplateresponse.py
@@ -0,0 +1,33 @@
+import logging
+import json
+
+from jinja2 import Environment
+
+from edge.dateutility import DateUtility
+from edge.opensearch.response import Response
+
+class JsonTemplateResponse(Response):
+    def __init__(self):
+        super(JsonTemplateResponse, self).__init__()
+        self.env = Environment()
+        self.env.trim_blocks = True
+        self.env.autoescape = False
+        self.variables = {}
+        self.env.filters['convertISOTime'] = DateUtility.convertISOTime
+        self.env.filters['jsonify'] = self.jsonify
+
+    def setTemplate(self, template):
+        self.template = self.env.from_string(template)
+
+    def generate(self, pretty=False):
+        if pretty:
+            return json.dumps(json.loads(self.template.render(self.variables), strict=False), indent=3)
+        else:
+            return self.template.render(self.variables)
+
+    def jsonify(self, value):
+        if value or value == 0:
+            return json.dumps(value)
+        else:
+            return "null"

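The jsonify filter maps every falsy value except 0 to a JSON null, sparing templates from conditionals. For example:

    response = JsonTemplateResponse()
    response.jsonify('abc')   # '"abc"'
    response.jsonify(0)       # '0'
    response.jsonify([])      # 'null' (empty list is falsy)
    response.jsonify(None)    # 'null'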
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/response/solrfacettemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/response/solrfacettemplateresponse.py b/src/main/python/libraries/edge/response/solrfacettemplateresponse.py
new file mode 100644
index 0000000..40c7e95
--- /dev/null
+++ b/src/main/python/libraries/edge/response/solrfacettemplateresponse.py
@@ -0,0 +1,23 @@
+import json
+import logging
+
+from edge.response.jsontemplateresponse import JsonTemplateResponse
+
+class SolrFacetTemplateResponse(JsonTemplateResponse):
+    def __init__(self, facetDefs):
+        super(SolrFacetTemplateResponse, self).__init__()
+        self.facetDefs = facetDefs
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(SolrFacetTemplateResponse, self).generate(pretty)
+
+    def _populate(self, solrResponse):
+        if solrResponse is not None:
+            solrJson = json.loads(solrResponse)
+
+            logging.debug('doc count: '+str(len(solrJson['response']['docs'])))
+
+            if 'facet_counts' in solrJson:
+                self.variables['facets'] = solrJson['facet_counts']
+                self.variables['facetDefs'] = self.facetDefs

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/response/solrjsontemplateresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/response/solrjsontemplateresponse.py b/src/main/python/libraries/edge/response/solrjsontemplateresponse.py
new file mode 100644
index 0000000..76d988f
--- /dev/null
+++ b/src/main/python/libraries/edge/response/solrjsontemplateresponse.py
@@ -0,0 +1,60 @@
+import datetime
+import json
+import logging
+import urllib
+
+from edge.response.jsontemplateresponse import JsonTemplateResponse
+
+class SolrJsonTemplateResponse(JsonTemplateResponse):
+    def __init__(self, link='', parameters=None):
+        super(SolrJsonTemplateResponse, self).__init__()
+        self.link = link
+        # Avoid a shared mutable default argument.
+        self.parameters = parameters if parameters is not None else {}
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(SolrJsonTemplateResponse, self).generate(pretty)
+
+    def _populate(self, solrResponse):
+        start = 0
+        rows = 0
+        numFound = 0
+
+        self.variables['parameters'] = self.parameters
+
+        if solrResponse is not None:
+            solrJson = json.loads(solrResponse, strict = False)
+
+            self.variables['docs'] = solrJson['response']['docs']
+            self.variables['numFound'] = int(solrJson['response']['numFound'])
+            self.variables['itemsPerPage'] = int(solrJson['responseHeader']['params']['rows'])
+            self.variables['startIndex'] = int(solrJson['response']['start'])
+
+            if 'stats' in solrJson:
+                self.variables['stats'] = solrJson['stats']
+
+            if 'facet_counts' in solrJson:
+                self.variables['facets'] = solrJson['facet_counts']
+
+            start = int(solrJson['response']['start'])
+            rows = int(solrJson['responseHeader']['params']['rows'])
+            numFound = int(solrJson['response']['numFound'])
+
+
+        self.parameters['startIndex'] = start
+        self.variables['myself'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        if rows != 0:
+            self.parameters['startIndex'] = numFound - (numFound % rows)
+        self.variables['last'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        self.parameters['startIndex'] = 0
+        self.variables['first'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+        if start > 0:
+            if (start - rows > 0):
+                self.parameters['startIndex'] = start - rows
+            self.variables['prev'] = self.link + '?' + urllib.urlencode(self.parameters, True)
+
+        if start + rows < numFound:
+            self.parameters['startIndex'] = start + rows
+            self.variables['next'] = self.link + '?' + urllib.urlencode(self.parameters, True)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/spatialsearch.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/spatialsearch.py b/src/main/python/libraries/edge/spatialsearch.py
new file mode 100644
index 0000000..0bfa27e
--- /dev/null
+++ b/src/main/python/libraries/edge/spatialsearch.py
@@ -0,0 +1,66 @@
+import sys
+import logging
+#import cx_Oracle
+
+class SpatialSearch(object):
+    def __init__(self, identity):
+        self.identity = identity
+        logging.debug('identity: '+self.identity)
+
+    def searchGranules(self, offset, rows, west, south, east, north):
+        logging.debug('w='+str(west)+',s='+str(south)+',e='+str(east)+',n='+str(north))
+
+        ids = []
+        resultCount = 0
+        connection = None
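+        # The Oracle-backed implementation below is intentionally disabled;
+        # it requires cx_Oracle and an inventory schema. Until it is
+        # re-enabled, this method returns an empty result set.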
+        '''
+        try:
+            connection = cx_Oracle.connect(self.identity)
+            cursor = connection.cursor()
+            refCursor = connection.cursor()
+
+            westValue = cursor.var(cx_Oracle.NUMBER)
+            westValue.setvalue(0, west)
+            southValue = cursor.var(cx_Oracle.NUMBER)
+            southValue.setvalue(0, south)
+            eastValue = cursor.var(cx_Oracle.NUMBER)
+            eastValue.setvalue(0, east)
+            northValue = cursor.var(cx_Oracle.NUMBER)
+            northValue.setvalue(0, north)
+            offsetValue = cursor.var(cx_Oracle.NUMBER)
+            offsetValue.setvalue(0, offset + 1)
+            rowsValue = cursor.var(cx_Oracle.NUMBER)
+            rowsValue.setvalue(0, rows)
+
+            cursor.execute(
+                'select inventory.searchGranuleSpatialCount(:south,:west,:north,:east) FROM dual',
+                {'south': southValue, 'west': westValue, 'north': northValue, 'east': eastValue}
+            )
+            result = cursor.fetchone()
+            if result is None:
+                raise Exception('Failed to get count from inventory.searchGranuleSpatialCount.')
+            else:
+                resultCount = int(result[0])
+
+            cursor.callproc(
+                'Inventory.searchGSpatial',
+                [southValue, westValue, northValue, eastValue, offsetValue, rowsValue, refCursor]
+            )
+
+            logging.debug('rowcount: '+str(cursor.rowcount))
+            logging.debug('ref rowcount: '+str(refCursor.rowcount))
+
+            row = refCursor.next()
+            while row:
+                ids.append(row[0])
+                row = refCursor.next()
+        except StopIteration:
+            pass
+        except BaseException as detail:
+            print 'ouch', detail
+            logging.error('Failed to search granules: '+str(sys.exc_info()[0]))
+        finally:
+            if connection is not None:
+                connection.close()
+        '''
+        return (ids, resultCount)

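As committed, searchGranules is a stub; a call sketch (the Oracle identity string is invented):

    from edge.spatialsearch import SpatialSearch

    searcher = SpatialSearch('user/password@inventory')   # hypothetical cx_Oracle DSN
    ids, count = searcher.searchGranules(0, 10, -180.0, -90.0, 180.0, 90.0)
    # With the Oracle block commented out this always returns ([], 0).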
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/__init__.py b/src/main/python/libraries/edge/writer/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/estemplateresponsewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/estemplateresponsewriter.py b/src/main/python/libraries/edge/writer/estemplateresponsewriter.py
new file mode 100644
index 0000000..e947031
--- /dev/null
+++ b/src/main/python/libraries/edge/writer/estemplateresponsewriter.py
@@ -0,0 +1,116 @@
+import logging
+import urllib
+import json
+from collections import OrderedDict
+
+from edge.httputility import HttpUtility
+from edge.writer.templateresponsewriter import TemplateResponseWriter
+
+class ESTemplateResponseWriter(TemplateResponseWriter):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(ESTemplateResponseWriter, self).__init__(configFilePath, requiredParams)
+        self.searchParameters = {}
+        self.variables = {}
+        self.facet = False
+        self.facetDefs = {}
+        self.contentType = 'application/xml'
+
+    def get(self, requestHandler):
+        super(ESTemplateResponseWriter, self).get(requestHandler)
+
+        startIndex = 0
+        try:
+            startIndex = requestHandler.get_argument('startIndex')
+            self.searchParameters['startIndex'] = startIndex
+        except:
+            pass
+
+        entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            maxEntriesPerPage = self._configuration.getint('solr', 'maxEntriesPerPage')
+            if (int(entriesPerPage) > maxEntriesPerPage):
+                entriesPerPage = maxEntriesPerPage
+            self.searchParameters['itemsPerPage'] = entriesPerPage
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'true':
+                self.pretty = True
+                self.searchParameters['pretty'] = 'true'
+        except:
+            pass
+
+        parameters = {}
+        for parameter in self._configuration.get('solr', 'parameters').split(','):
+            try:
+                value = requestHandler.get_arguments(parameter)
+                if len(value) == 1:
+                    parameters[parameter] = value[0]
+                    self.searchParameters[parameter] = value[0]
+                elif len(value) > 0:
+                    parameters[parameter] = value
+                    self.searchParameters[parameter] = value
+            except:
+                pass
+
+        facets = {}
+        if self._configuration.has_option('solr', 'facets'):
+            self.facetDefs = json.loads(self._configuration.get('solr', 'facets'), object_pairs_hook=OrderedDict)
+            for facet in self.facetDefs.keys():
+                try:
+                    value = requestHandler.get_arguments(facet)
+                    if len(value) > 0:
+                        facets[self.facetDefs[facet]] = value
+                        self.searchParameters[facet] = value
+                except:
+                    pass
+
+        try:
+            self._getResponse(startIndex, entriesPerPage, parameters, facets)
+        except:
+            logging.exception('Failed to get solr response.')
+
+    def _urlEncodeSolrQueryValue(self, value):
+        return urllib.quote('"'+value+'"')
+
+    def _onResponse(self, response):
+        logging.debug(response)
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            self._writeResponse(response.body)
+
+    def _writeResponse(self, responseText):
+        searchText = ''
+        if 'keyword' in self.variables:
+            searchText = self.variables['keyword']
+        try:
+            openSearchResponse = self._generateOpenSearchResponse(
+                responseText,
+                searchText,
+                self._configuration.get('service', 'url') + self.requestHandler.request.path,
+                self.searchParameters,
+                self.pretty
+            )
+            self.requestHandler.set_header("Content-Type", self.contentType)
+            self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+            self.requestHandler.write(openSearchResponse)
+            self.requestHandler.finish()
+        except BaseException as exception:
+            self._handleException(str(exception))
+
+    def _getResponse(self, startIndex, entriesPerPage, parameters, facets):
+        query = self._constructQuery(startIndex, entriesPerPage, parameters, facets)
+        url = self._configuration.get('solr', 'datasetUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/_search/?'+query, self._onResponse)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        pass
+
+    def _constructQuery(self, startIndex, entriesPerPage, parameters, facets):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/genericproxywriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/genericproxywriter.py b/src/main/python/libraries/edge/writer/genericproxywriter.py
new file mode 100644
index 0000000..dd7b1da
--- /dev/null
+++ b/src/main/python/libraries/edge/writer/genericproxywriter.py
@@ -0,0 +1,14 @@
+import logging
+
+from edge.writer.proxywriter import ProxyWriter
+
+class GenericProxyWriter(ProxyWriter):
+    def __init__(self, configFilePath):
+        super(GenericProxyWriter, self).__init__(configFilePath)
+
+    def _generateUrl(self, requestHandler):
+        url = self._configuration.get('proxy', 'url')
+        # Append the query string only when one is present; uri.index('?')
+        # would raise ValueError on a request without one.
+        queryIndex = requestHandler.request.uri.find('?')
+        if queryIndex != -1:
+            url += requestHandler.request.uri[queryIndex:]
+        logging.debug('proxy to url : ' + url)
+        return url

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/proxywriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/proxywriter.py b/src/main/python/libraries/edge/writer/proxywriter.py
new file mode 100644
index 0000000..f747681
--- /dev/null
+++ b/src/main/python/libraries/edge/writer/proxywriter.py
@@ -0,0 +1,32 @@
+import logging
+
+import requestresponder
+from edge.httputility import HttpUtility
+
+class ProxyWriter(requestresponder.RequestResponder):
+    def __init__(self, configFilePath):
+        super(ProxyWriter, self).__init__(configFilePath)
+
+    def get(self, requestHandler):
+        super(ProxyWriter, self).get(requestHandler)
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(self._generateUrl(requestHandler), self.onResponse)
+
+    def onResponse(self, response):
+        if response.error:
+            self.requestHandler.set_status(404)
+            self.requestHandler.write(str(response.error))
+            self.requestHandler.finish()
+        else:
+            for name, value in response.headers.iteritems():
+                logging.debug('header: '+name+':'+value)
+                self.requestHandler.set_header(name, value)
+            self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+            self.requestHandler.write(response.body)
+            self.requestHandler.finish()
+
+    def _generateUrl(self, requestHandler):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/solrtemplateresponsewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/solrtemplateresponsewriter.py b/src/main/python/libraries/edge/writer/solrtemplateresponsewriter.py
new file mode 100644
index 0000000..636a21a
--- /dev/null
+++ b/src/main/python/libraries/edge/writer/solrtemplateresponsewriter.py
@@ -0,0 +1,115 @@
+import logging
+import urllib
+import json
+from collections import OrderedDict
+
+from edge.httputility import HttpUtility
+from edge.writer.templateresponsewriter import TemplateResponseWriter
+
+class SolrTemplateResponseWriter(TemplateResponseWriter):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(SolrTemplateResponseWriter, self).__init__(configFilePath, requiredParams)
+        self.searchParameters = {}
+        self.variables = {}
+        self.facet = False
+        self.facetDefs = {}
+        self.contentType = 'application/xml'
+
+    def get(self, requestHandler):
+        super(SolrTemplateResponseWriter, self).get(requestHandler)
+
+        startIndex = 0
+        try:
+            startIndex = requestHandler.get_argument('startIndex')
+        except:
+            pass
+
+        entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            maxEntriesPerPage = self._configuration.getint('solr', 'maxEntriesPerPage')
+            if (int(entriesPerPage) > maxEntriesPerPage):
+                entriesPerPage = maxEntriesPerPage
+            self.searchParameters['itemsPerPage'] = entriesPerPage
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'true':
+                self.pretty = True
+                self.searchParameters['pretty'] = 'true'
+        except:
+            pass
+
+        parameters = {}
+        for parameter in self._configuration.get('solr', 'parameters').split(','):
+            try:
+                value = requestHandler.get_arguments(parameter)
+                if len(value) == 1:
+                    parameters[parameter] = value[0]
+                    self.searchParameters[parameter] = value[0]
+                elif len(value) > 0:
+                    parameters[parameter] = value
+                    self.searchParameters[parameter] = value
+            except:
+                pass
+
+        facets = {}
+        if self._configuration.has_option('solr', 'facets'):
+            self.facetDefs = json.loads(self._configuration.get('solr', 'facets'), object_pairs_hook=OrderedDict)
+            for facet in self.facetDefs.keys():
+                try:
+                    value = requestHandler.get_arguments(facet)
+                    if len(value) > 0:
+                        facets[self.facetDefs[facet]] = value
+                        self.searchParameters[facet] = value
+                except:
+                    pass
+
+        try:
+            self._getSolrResponse(startIndex, entriesPerPage, parameters, facets)
+        except:
+            logging.exception('Failed to get solr response.')
+
+    def _urlEncodeSolrQueryValue(self, value):
+        return urllib.quote('"'+value+'"')
+
+    def _onSolrResponse(self, response):
+        logging.debug(response)
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            self._writeResponse(response.body)
+
+    def _writeResponse(self, responseText):
+        searchText = ''
+        if 'keyword' in self.variables:
+            searchText = self.variables['keyword']
+        try:
+            openSearchResponse = self._generateOpenSearchResponse(
+                responseText,
+                searchText,
+                self._configuration.get('service', 'url') + self.requestHandler.request.path,
+                self.searchParameters,
+                self.pretty
+            )
+            self.requestHandler.set_header("Content-Type", self.contentType)
+            self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+            self.requestHandler.write(openSearchResponse)
+            self.requestHandler.finish()
+        except BaseException as exception:
+            self._handleException(str(exception))
+
+    def _getSolrResponse(self, startIndex, entriesPerPage, parameters, facets):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, parameters, facets)
+        url = self._configuration.get('solr', 'datasetUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/select/?'+query, self._onSolrResponse)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        pass
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/writer/templateresponsewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/writer/templateresponsewriter.py b/src/main/python/libraries/edge/writer/templateresponsewriter.py
new file mode 100644
index 0000000..17b9abc
--- /dev/null
+++ b/src/main/python/libraries/edge/writer/templateresponsewriter.py
@@ -0,0 +1,40 @@
+import codecs
+
+import requestresponder
+
+class TemplateResponseWriter(requestresponder.RequestResponder):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(TemplateResponseWriter, self).__init__(configFilePath)
+        if requiredParams is None:
+            requiredParams = []
+        self.requiredParams = requiredParams
+        self.pretty = False
+
+    def get(self, requestHandler):
+        super(TemplateResponseWriter, self).get(requestHandler)
+
+        #check required parameters
+        for paramList in self.requiredParams:
+            countParamNotFound = 0
+            for param in paramList:
+                try:
+                    requestHandler.get_argument(param)
+                except:
+                    countParamNotFound += 1
+            if countParamNotFound == len(paramList):
+                raise Exception("One of the following parameters is required: " + ', '.join(paramList))
+
+    def _handleException(self, error):
+        self.requestHandler.set_status(404)
+        self.requestHandler.write(error)
+        self.requestHandler.finish()
+
+    def _readTemplate(self, path):
+        # Avoid shadowing the built-in 'file' name.
+        templateFile = codecs.open(path, encoding='utf-8')
+        data = templateFile.read()
+        templateFile.close()
+
+        return data

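requiredParams is a list of alternative groups: for each inner list, at least one of the named arguments must be present or get() raises. A sketch with invented parameter names:

    # The request must carry datasetId or shortName, and must carry startTime.
    writer = TemplateResponseWriter(
        'plugin.conf',                                    # assumed to exist
        requiredParams=[['datasetId', 'shortName'], ['startTime']]
    )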
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/logging.conf
----------------------------------------------------------------------
diff --git a/src/main/python/logging.conf b/src/main/python/logging.conf
new file mode 100644
index 0000000..de2f611
--- /dev/null
+++ b/src/main/python/logging.conf
@@ -0,0 +1,28 @@
+[loggers]
+keys=root
+
+[handlers]
+keys=consoleHandler,timedRotatingFileHandler
+
+[formatters]
+keys=simpleFormatter
+
+[logger_root]
+level=DEBUG
+handlers=consoleHandler,timedRotatingFileHandler
+
+[handler_consoleHandler]
+class=StreamHandler
+level=DEBUG
+formatter=simpleFormatter
+args=(sys.stdout,)
+
+[handler_timedRotatingFileHandler]
+class=handlers.TimedRotatingFileHandler
+level=ERROR
+formatter=simpleFormatter
+args=(r'./tornado.log', 'midnight', 1, 30)
+
+[formatter_simpleFormatter]
+format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
+datefmt=

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/pluginhandler.py
----------------------------------------------------------------------
diff --git a/src/main/python/pluginhandler.py b/src/main/python/pluginhandler.py
new file mode 100644
index 0000000..0b72788
--- /dev/null
+++ b/src/main/python/pluginhandler.py
@@ -0,0 +1,58 @@
+import os
+import sys
+import logging
+
+class PluginHandler(object):
+    def __init__(self, name, pluginPath, format=None):
+        self.name = name
+        self.pluginPath = pluginPath
+        self.format = format
+
+    def handleRequest(self, httpMethod, path, requestHandler):
+        paths = path.split('/')
+        fileName = paths.pop()
+        fileNames = fileName.split('.')
+        fileSuffix = fileNames.pop()
+        
+        # Support plugin lookup for [fgdc|gmcd|iso].xml
+        if fileSuffix == 'xml':
+            fileSuffix = fileNames.pop()
+
+        if self.format is not None and len(self.format) > 0:
+            try:
+                fileSuffix = requestHandler.get_argument('format')
+            except:
+                # Fall back to the first supported format when none is given.
+                fileSuffix = self.format[0]
+            if fileSuffix not in self.format:
+                raise Exception("Format %s not supported." % fileSuffix)
+        
+        pluginName = self._getPluginName(self.pluginPath+'/'+self.name+'/'+fileSuffix)
+        if not pluginName:
+            raise Exception("Did not find plugin.")
+        
+        modulePath = self.pluginPath+'.'+self.name+'.'+fileSuffix+'.'+pluginName
+        if modulePath in sys.modules:
+            # Reload the full module chain (e.g. plugins, plugins.dataset,
+            # plugins.dataset.atom, ...) so plugin edits take effect without
+            # a server restart.
+            currentModuleName = ''
+            for moduleName in modulePath.split('.'):
+                currentModuleName += moduleName
+                reload(sys.modules[currentModuleName])
+                currentModuleName += '.'
+
+        module = __import__(modulePath, globals(), locals(), [pluginName])
+        plugin = getattr(module, pluginName)
+        pluginObject = plugin(self.pluginPath+'/'+self.name+'/'+fileSuffix+'/plugin.conf')
+        method = getattr(pluginObject, httpMethod)
+        method(requestHandler)
+
+    def _getPluginName(self, path):
+        name = None
+        for fileName in os.listdir(path):
+            if fileName != '__init__.py' and fileName.endswith('.py'):
+                name = fileName.split('.')[0]
+                break
+
+        return name
+
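
  The handler resolves plugins purely by directory convention: a request
  routed to plugin name "dataset" with format "atom" is served by the first
  non-__init__ .py file under plugins/dataset/atom/. A sketch of what
  handleRequest('get', ...) amounts to, assuming the AtomWriter plugin shown
  later in this commit:

    # Illustrative only; paths and class name follow the convention above.
    module = __import__('plugins.dataset.atom.AtomWriter',
                        globals(), locals(), ['AtomWriter'])
    writer = getattr(module, 'AtomWriter')('plugins/dataset/atom/plugin.conf')
    writer.get(requestHandler)  # requestHandler is the live tornado handler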

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/TestPlugin.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/TestPlugin.py b/src/main/python/plugins/TestPlugin.py
new file mode 100644
index 0000000..8ddc657
--- /dev/null
+++ b/src/main/python/plugins/TestPlugin.py
@@ -0,0 +1,5 @@
+#import tornado.web
+
+class TestPlugin:
+    # Minimal stand-alone plugin stub used for manual testing.
+    def run(self):
+        print("aaa")

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/__init__.py b/src/main/python/plugins/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/__init__.py b/src/main/python/plugins/dataset/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/atom/AtomWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/atom/AtomWriter.py b/src/main/python/plugins/dataset/atom/AtomWriter.py
new file mode 100644
index 0000000..3995e39
--- /dev/null
+++ b/src/main/python/plugins/dataset/atom/AtomWriter.py
@@ -0,0 +1,26 @@
+import logging
+import datetime
+
+from edge.opensearch.datasetatomresponse import DatasetAtomResponse
+from edge.opensearch.datasetwriter import DatasetWriter
+
+class AtomWriter(DatasetWriter):
+    def __init__(self, configFilePath):
+        super(AtomWriter, self).__init__(configFilePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = DatasetAtomResponse(
+            self._configuration.get('portal', 'datasetUrl'),
+            self._configuration.get('service', 'host'),
+            self._configuration.get('service', 'url'),
+            self.datasets
+        )
+
+        response.title = 'PO.DAAC Dataset Search Results'
+        response.link = searchUrl
+        response.authors.append('PO.DAAC Dataset Search Service')
+        response.updated = datetime.datetime.utcnow().isoformat()+'Z'
+        response.id = 'tag:'+self._configuration.get('service', 'host')+','+datetime.datetime.utcnow().date().isoformat()
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/atom/__init__.py b/src/main/python/plugins/dataset/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/atom/plugin.conf b/src/main/python/plugins/dataset/atom/plugin.conf
new file mode 100644
index 0000000..ae94729
--- /dev/null
+++ b/src/main/python/plugins/dataset/atom/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890
+host=localhost:8890
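
  A sketch of reading a plugin.conf like this one (the writers use Python 2's
  ConfigParser; values come back as strings unless a typed getter is used):

    import ConfigParser  # configparser on Python 3

    config = ConfigParser.RawConfigParser()
    config.read('plugins/dataset/atom/plugin.conf')
    print(config.get('solr', 'datasetUrl'))        # http://localhost:8983/solr.war/dataset
    print(config.getint('solr', 'entriesPerPage')) # 7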

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/gcmd/DifWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/gcmd/DifWriter.py b/src/main/python/plugins/dataset/gcmd/DifWriter.py
new file mode 100644
index 0000000..407dda3
--- /dev/null
+++ b/src/main/python/plugins/dataset/gcmd/DifWriter.py
@@ -0,0 +1,32 @@
+import logging
+import os
+import os.path
+import codecs
+
+from edge.opensearch.datasetgcmdresponse import DatasetGcmdResponse
+from edge.opensearch.datasetwriter import DatasetWriter
+
+class DifWriter(DatasetWriter):
+    def __init__(self, configFilePath):
+        super(DifWriter, self).__init__(configFilePath)
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = DatasetGcmdResponse(self._configuration)
+        response.setTemplate(self.template)
+        
+        allowNone = False
+        if 'allowNone' in searchParams and searchParams['allowNone'].lower() == 'true':
+            allowNone = True
+        
+        return response.generate(solrResponse, pretty=pretty, allowNone=allowNone)
+
+    def _readTemplate(self, path):
+        # Read a UTF-8 template file; avoid shadowing the 'file' builtin.
+        templateFile = codecs.open(path, encoding='utf-8')
+        data = templateFile.read()
+        templateFile.close()
+
+        return data

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/gcmd/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/gcmd/__init__.py b/src/main/python/plugins/dataset/gcmd/__init__.py
new file mode 100644
index 0000000..e69de29


[06/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/server.py
----------------------------------------------------------------------
diff --git a/src/main/python/server.py b/src/main/python/server.py
new file mode 100644
index 0000000..409593f
--- /dev/null
+++ b/src/main/python/server.py
@@ -0,0 +1,102 @@
+import tornado.httpserver
+import tornado.ioloop
+import tornado.web
+
+import logging
+import logging.config
+import os
+import ConfigParser
+import socket
+
+import pluginhandler
+
+class GenericHandler(tornado.web.RequestHandler):
+    def initialize(self, pluginName, format=None):
+        super(GenericHandler, self).initialize()
+        self._pluginHandler = pluginhandler.PluginHandler(pluginName, 'plugins', format)
+
+    @tornado.web.asynchronous
+    def get(self):
+        self._handleRequest('get')
+
+    @tornado.web.asynchronous
+    def post(self):
+        self._handleRequest('post')
+
+    @tornado.web.asynchronous
+    def options(self):
+        self._handleRequest('options')
+
+    def _handleRequest(self, httpMethod):
+        try:
+            self._pluginHandler.handleRequest(httpMethod, self.request.path, self)
+        except BaseException as exception:
+            logging.exception('Plugin request failed: ' + str(exception))
+
+            self.set_status(404)
+            self.write('Plugin request failed: ' + str(exception))
+            self.finish()
+
+class TemplateRenderHandler(tornado.web.RequestHandler):
+    def get(self):
+        self.set_header("Content-Type", "application/xml")
+        fileName = self.request.path.split('/').pop()
+        try:
+            self.render(fileName)
+        except BaseException:
+            self.set_header("Content-Type", "text/html")
+            self.set_status(404)
+            self.write('File not found: ' + fileName)
+
+if __name__ == "__main__":
+    #logging.basicConfig(filename="log.txt",level=logging.DEBUG)
+    logging.config.fileConfig(r'./logging.conf')
+
+    configuration = ConfigParser.RawConfigParser()
+    configuration.read(r'./config.conf')
+
+    settings = dict(
+        static_path=os.path.join(os.path.dirname(__file__), "static"),
+        static_url_prefix="/static/",
+        template_path=os.path.join(os.path.dirname(__file__), "templates")
+    )
+    application = tornado.web.Application([
+        #(r"/dataset/.*", DatasetHandler),
+        (r"/heartbeat", GenericHandler, dict(pluginName='heartbeat', format=['json'])),
+        (r"/ws/search/samos", GenericHandler, dict(pluginName='samos', format=['json'])),
+        (r"/ws/search/icoads", GenericHandler, dict(pluginName='icoads', format=['json'])),
+        (r"/ws/search/spurs", GenericHandler, dict(pluginName='spurs', format=['json'])),
+        (r"/ws/search/spurs2", GenericHandler, dict(pluginName='spurs2', format=['json'])),
+        (r"/nexus/climatology", GenericHandler, dict(pluginName='nexus', format=['climatology'])),
+        (r"/nexus/solr", GenericHandler, dict(pluginName='nexus', format=['solr'])),
+        (r"/nexus/subsetter", GenericHandler, dict(pluginName='nexus', format=['subsetter'])),
+        (r"/ws/search/dataset", GenericHandler, dict(pluginName='slcp', format=['atom'])),
+        (r"/ws/search/granule", GenericHandler, dict(pluginName='slcp', format=['granule'])),
+        (r"/ws/facet/dataset", GenericHandler, dict(pluginName='slcp', format=['facet'])),
+        (r"/ws/suggest/dataset", GenericHandler, dict(pluginName='slcp', format=['suggest'])),
+        (r"/ws/metadata/dataset", GenericHandler, dict(pluginName='slcp', format=['echo10', 'umm-json'])),
+        (r"/ws/indicator/dataset", GenericHandler, dict(pluginName='slcp', format=['indicator'])),
+        (r"/ws/dat/dataset", GenericHandler, dict(pluginName='slcp', format=['dat'])),
+        (r"/ws/search/content", GenericHandler, dict(pluginName='slcp', format=['content'])),
+        (r"/ws/search/basin", GenericHandler, dict(pluginName='slcp', format=['basin'])),
+        (r"/ws/search/anomaly", GenericHandler, dict(pluginName='oceanxtremes', format=['datacasting'])),
+        (r"/ws/submit/anomaly", GenericHandler, dict(pluginName='oceanxtremes', format=['post'])),
+        (r"/ws/search/attribute", GenericHandler, dict(pluginName='oiip', format=['json', 'xml'])),
+        (r"/tie/collection", GenericHandler, dict(pluginName='tie', format=['collection'])),
+        (r"/example/es", GenericHandler, dict(pluginName='example', format=['elastic'])),
+        #(r"/ws/metadata/dataset", DatasetHandler, dict(format=['iso', 'gcmd'])),
+        #(r"/granule/.*", GranuleHandler),
+        #(r"/ws/search/granule", GenericHandler, dict(pluginName='product', format=['atom'])),
+        #(r"/ws/metadata/granule", GranuleHandler, dict(format=['iso', 'fgdc', 'datacasting'])),
+        (r"/passthrough/.*", GenericHandler, dict(pluginName='passthrough')),
+        (r"/ws/search/.*", TemplateRenderHandler)
+    ], default_host=configuration.get('server', 'host'), **settings)
+
+    http_server = tornado.httpserver.HTTPServer(application)
+    http_server.listen(
+        configuration.getint('server', 'port')
+    )
+    ioLoop = tornado.ioloop.IOLoop.instance()
+    try:
+        logging.info('tornado is started.')
+        ioLoop.start()
+    except KeyboardInterrupt:
+        logging.info('tornado is shutting down.')
+        ioLoop.stop()
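
  With config.conf pointing the server at a port (the plugin configs in this
  commit suggest 8890), a minimal smoke test of a running instance might look
  like this (endpoint and port are assumptions, not guarantees):

    import urllib2  # urllib.request on Python 3

    # The /heartbeat route is registered above with format=['json'].
    response = urllib2.urlopen('http://localhost:8890/heartbeat')
    print(response.read())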

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/templates/podaac-dataset-osd.xml
----------------------------------------------------------------------
diff --git a/src/main/python/templates/podaac-dataset-osd.xml b/src/main/python/templates/podaac-dataset-osd.xml
new file mode 100644
index 0000000..a59f6b6
--- /dev/null
+++ b/src/main/python/templates/podaac-dataset-osd.xml
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
+  xmlns:georss="http://www.georss.org/georss" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/"
+  xmlns:gml="http://www.opengis.net/gml" xmlns:podaac="http://podaac.jpl.nasa.gov/opensearch/">  
+   <ShortName>PO.DAAC Dataset Search</ShortName>
+   <Description>PO.DAAC Dataset Search</Description>
+   <Tags>PO.DAAC Dataset Search</Tags>
+   <Contact>podaac@podaac.jpl.nasa.gov</Contact>
+   <Url type="application/atom+xml" template="{{ request.protocol }}://{{ request.host }}/ws/search/dataset?keyword={searchTerms}&amp;startIndex={startIndex?}&amp;itemsPerPage={count?}&amp;bbox={georss:box?}&amp;startTime={time:start?}&amp;endTime={time:end?}&amp;datasetId={podaac:persistentId?}&amp;shortName={podaac:shortName?}&amp;sortBy={podaac:sortBy?}&amp;pretty={podaac:pretty?}&amp;processLevel={podaac:processLevel?}&amp;format=atom"/>
+   <Url type="application/rss+xml" template="{{ request.protocol }}://{{ request.host }}/ws/search/dataset?keyword={searchTerms}&amp;startIndex={startIndex?}&amp;itemsPerPage={count?}&amp;bbox={georss:box?}&amp;startTime={time:start?}&amp;endTime={time:end?}&amp;datasetId={podaac:persistentId?}&amp;shortName={podaac:shortName?}&amp;sortBy={podaac:sortBy?}&amp;pretty={podaac:pretty?}&amp;processLevel={podaac:processLevel?}&amp;format=rss"/>
+   <LongName>PO.DAAC Dataset Search</LongName>
+   <Developer>PO.DAAC</Developer>
+   <Language>en-us</Language>
+   <OutputEncoding>UTF-8</OutputEncoding>
+   <InputEncoding>UTF-8</InputEncoding>
+</OpenSearchDescription>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/templates/podaac-granule-osd.xml
----------------------------------------------------------------------
diff --git a/src/main/python/templates/podaac-granule-osd.xml b/src/main/python/templates/podaac-granule-osd.xml
new file mode 100644
index 0000000..e55c3e9
--- /dev/null
+++ b/src/main/python/templates/podaac-granule-osd.xml
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
+  xmlns:georss="http://www.georss.org/georss" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/"
+  xmlns:gml="http://www.opengis.net/gml" xmlns:podaac="http://podaac.jpl.nasa.gov/opensearch/">  
+   <ShortName>PO.DAAC Granule Search</ShortName>
+   <Description>PO.DAAC Granule Search</Description>
+   <Tags>PO.DAAC Granule Search</Tags>
+   <Contact>podaac@podaac.jpl.nasa.gov</Contact>
+   <Url type="application/atom+xml" template="{{ request.protocol }}://{{ request.host }}/ws/search/granule?datasetId={podaac:persistentId}&amp;startIndex={startIndex?}&amp;itemsPerPage={count?}&amp;bbox={georss:box?}&amp;startTime={time:start?}&amp;endTime={time:end?}&amp;granuleName={podaac:granuleName?}&amp;shortName={podaac:shortName?}&amp;sortBy={podaac:sortBy?}&amp;pretty={podaac:pretty?}&amp;format=atom"/>
+   <Url type="application/rss+xml" template="{{ request.protocol }}://{{ request.host }}/ws/search/granule?datasetId={podaac:persistentId}&amp;startIndex={startIndex?}&amp;itemsPerPage={count?}&amp;bbox={georss:box?}&amp;startTime={time:start?}&amp;endTime={time:end?}&amp;granuleName={podaac:granuleName?}&amp;shortName={podaac:shortName?}&amp;sortBy={podaac:sortBy?}&amp;pretty={podaac:pretty?}&amp;format=rss"/>
+   <LongName>PO.DAAC Granule Search</LongName>
+   <Developer>PO.DAAC</Developer>
+   <Language>en-us</Language>
+   <OutputEncoding>UTF-8</OutputEncoding>
+   <InputEncoding>UTF-8</InputEncoding>
+</OpenSearchDescription>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product/conf/data-config.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product/conf/data-config.xml b/src/main/solr/product/conf/data-config.xml
new file mode 100644
index 0000000..afc4741
--- /dev/null
+++ b/src/main/solr/product/conf/data-config.xml
@@ -0,0 +1,106 @@
+<!--*******************************************************************************************************************-->
+<!-- GIBS: product dataimport data-config.xml                                                                          -->
+<!-- url="jdbc:postgresql://localhost:8888/gibs"                                                                       -->
+<!-- user=gibs"                                                                                                        -->
+<!--*******************************************************************************************************************-->
+<dataConfig>
+
+    <dataSource 
+        driver="org.postgresql.Driver" 
+        url="jdbc:postgresql://localhost/twright" 
+        user="twright" />
+
+    <document>
+
+        <entity name="product"
+            query="select * from product_view"
+            transformer="RegexTransformer">
+
+            <field column="product_granule_id_list"                             name="product_granule_id_list"                             splitBy = "," />
+            <field column="product_granule_version_list"                        name="product_granule_version_list"                        splitBy = "," />
+            <field column="product_granule_dataset_id_list"                     name="product_granule_dataset_id_list"                     splitBy = "," />
+            <field column="product_granule_metadata_endpoint_list"              name="product_granule_metadata_endpoint_list"              splitBy = "," />
+            <field column="product_granule_remote_granule_ur_list"              name="product_granule_remote_granule_ur_list"              splitBy = "," />
+
+            <field column="product_operation_version_list"                      name="product_operation_version_list"                      splitBy = "," />
+            <field column="product_operation_agent_list"                        name="product_operation_agent_list"                        splitBy = "," />
+            <field column="product_operation_list"                              name="product_operation_list"                              splitBy = "," />
+            <field column="product_operation_command_list"                      name="product_operation_command_list"                      splitBy = "," />
+            <field column="product_operation_arguments_list"                    name="product_operation_arguments_list"                    splitBy = "," />
+            <field column="product_operation_start_time_list"                   name="product_operation_start_time_list"                   splitBy = "," />
+            <field column="product_operation_stop_time_list"                    name="product_operation_stop_time_list"                    splitBy = "," />
+            <field column="product_operation_start_time_string_list"            name="product_operation_start_time_string_list"            splitBy = "," />
+            <field column="product_operation_stop_time_string_list"             name="product_operation_stop_time_string_list"             splitBy = "," />
+
+            <field column="product_archive_name_list"                           name="product_archive_name_list"                           splitBy = "," />
+            <field column="product_archive_version_list"                        name="product_archive_version_list"                        splitBy = "," />
+            <field column="product_archive_type_list"                           name="product_archive_type_list"                           splitBy = "," />
+            <field column="product_archive_file_size_list"                      name="product_archive_file_size_list"                      splitBy = "," />
+            <field column="product_archive_checksum_list"                       name="product_archive_checksum_list"                       splitBy = "," />
+            <field column="product_archive_compress_flag_list"                  name="product_archive_compress_flag_list"                  splitBy = "," />
+            <field column="product_archive_status_list"                         name="product_archive_status_list"                         splitBy = "," />
+            <field column="product_archive_reference_description_list"          name="product_archive_reference_description_list"          splitBy = "," />
+            <field column="product_archive_reference_name_list"                 name="product_archive_reference_name_list"                 splitBy = "," />
+            <field column="product_archive_reference_type_list"                 name="product_archive_reference_type_list"                 splitBy = "," />
+            <field column="product_archive_reference_status_list"               name="product_archive_reference_status_list"               splitBy = "," />
+
+            <field column="product_reference_version_list"                      name="product_reference_version_list"                      splitBy = "," />
+            <field column="product_reference_type_list"                         name="product_reference_type_list"                         splitBy = "," />
+            <field column="product_reference_name_list"                         name="product_reference_name_list"                         splitBy = "," />
+            <field column="product_reference_path_list"                         name="product_reference_path_list"                         splitBy = "," />
+            <field column="product_reference_description_list"                  name="product_reference_description_list"                  splitBy = "," />
+            <field column="product_reference_status_list"                       name="product_reference_status_list"                       splitBy = "," />
+
+            <field column="product_data_day_version_list"                       name="product_data_day_version_list"                       splitBy = "," />
+            <field column="product_data_day_list"                               name="product_data_day_list"                               splitBy = "," />
+            <field column="product_data_day_string_list"                        name="product_data_day_string_list"                        splitBy = "," />
+
+            <field column="product_contact_version_list"                        name="product_contact_version_list"                        splitBy = "," />
+            <field column="product_contact_role_list"                           name="product_contact_role_list"                           splitBy = "," />
+            <field column="product_contact_first_name_list"                     name="product_contact_first_name_list"                     splitBy = "," />
+            <field column="product_contact_last_name_list"                      name="product_contact_last_name_list"                      splitBy = "," />
+            <field column="product_contact_middle_name_list"                    name="product_contact_middle_name_list"                    splitBy = "," />
+            <field column="product_contact_address_list"                        name="product_contact_address_list"                        splitBy = "," />
+            <field column="product_contact_notify_type_list"                    name="product_contact_notify_type_list"                    splitBy = "," />
+            <field column="product_contact_email_list"                          name="product_contact_email_list"                          splitBy = "," />
+            <field column="product_contact_phone_list"                          name="product_contact_phone_list"                          splitBy = "," />
+            <field column="product_contact_fax_list"                            name="product_contact_fax_list"                            splitBy = "," />
+            <field column="product_contact_provider_long_name_list"             name="product_contact_provider_long_name_list"             splitBy = "," />
+            <field column="product_contact_provider_short_name_list"            name="product_contact_provider_short_name_list"            splitBy = "," />
+            <field column="product_contact_provider_type_list"                  name="product_contact_provider_type_list"                  splitBy = "," />
+            <field column="product_contact_provider_resource_descriptions_list" name="product_contact_provider_resource_descriptions_list" splitBy = "," />
+            <field column="product_contact_provider_resource_names_list"        name="product_contact_provider_resource_names_list"        splitBy = "," />
+            <field column="product_contact_provider_resource_paths_list"        name="product_contact_provider_resource_paths_list"        splitBy = "," />
+            <field column="product_contact_provider_resource_types_list"        name="product_contact_provider_resource_types_list"        splitBy = "," />
+  
+            <field column="product_element_version_list"                        name="product_element_version_list"                        splitBy = "," />
+            <field column="product_element_obligation_flag_list"                name="product_element_obligation_flag_list"                splitBy = "," />
+            <field column="product_element_scope_list"                          name="product_element_scope_list"                          splitBy = "," />
+            <field column="product_element_dd_version_list"                     name="product_element_dd_version_list"                     splitBy = "," />
+            <field column="product_element_dd_type_list"                        name="product_element_dd_type_list"                        splitBy = "," />
+            <field column="product_element_dd_description_list"                 name="product_element_dd_description_list"                 splitBy = "," />
+            <field column="product_element_dd_scope_list"                       name="product_element_dd_scope_list"                       splitBy = "," />
+            <field column="product_element_dd_long_name_list"                   name="product_element_dd_long_name_list"                   splitBy = "," />
+            <field column="product_element_dd_short_name_list"                  name="product_element_dd_short_name_list"                  splitBy = "," />
+            <field column="product_element_dd_max_length_list"                  name="product_element_dd_max_length_list"                          splitBy = "," />
+
+            <field column="product_datetime_version_list"                       name="product_datetime_version_list"                       splitBy = "," />
+            <field column="product_datetime_value_list"                         name="product_datetime_value_list"                         splitBy = "," />
+            <field column="product_datetime_value_string_list"                  name="product_datetime_value_string_list"                  splitBy = "," />
+
+            <field column="product_character_version_list"                      name="product_character_version_list"                      splitBy = "," />
+            <field column="product_character_value_list"                        name="product_character_value_list"                        splitBy = "," />
+
+            <field column="product_integer_version_list"                        name="product_integer_version_list"                        splitBy = "," />
+            <field column="product_integer_value_list"                          name="product_integer_value_list"                          splitBy = "," />
+            <field column="product_integer_units_list"                          name="product_integer_units_list"                          splitBy = "," />
+
+            <field column="product_real_version_list"                           name="product_real_version_list"                           splitBy = "," />
+            <field column="product_real_value_list"                             name="product_real_value_list"                             splitBy = "," />
+            <field column="product_real_units_list"                             name="product_real_units_list"                             splitBy = "," />  
+
+        </entity>
+
+    </document>
+
+</dataConfig>
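
  The RegexTransformer's splitBy="," turns each comma-joined database column
  into a multivalued Solr field. Conceptually (a hypothetical row, outside
  Solr):

    # Illustrative only: what splitBy="," does to one column value.
    row = {'product_granule_id_list': '101,102,103'}
    multivalued = row['product_granule_id_list'].split(',')
    # -> ['101', '102', '103']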

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product/conf/dataimport.properties
----------------------------------------------------------------------
diff --git a/src/main/solr/product/conf/dataimport.properties b/src/main/solr/product/conf/dataimport.properties
new file mode 100644
index 0000000..849fdd7
--- /dev/null
+++ b/src/main/solr/product/conf/dataimport.properties
@@ -0,0 +1,3 @@
+#Thu Nov 05 22:31:01 UTC 2015
+last_index_time=2015-11-05 22\:28\:55
+product.last_index_time=2015-11-05 22\:28\:55


[07/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/content/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/content/Writer.py b/src/main/python/plugins/slcp/content/Writer.py
new file mode 100644
index 0000000..c0e27ab
--- /dev/null
+++ b/src/main/python/plugins/slcp/content/Writer.py
@@ -0,0 +1,76 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+        filterQueries.append('status:1')
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    # Special case: keyword searches against glossary_items
+                    # match only the title field.
+                    if 'table' in parameters and parameters['table'] == 'glossary_items':
+                        queries.append('title_t:('+urllib.quote(value) + ')')
+                    else:
+                        queries.append(urllib.quote(value))
+                elif key == 'year':
+                    start = value + "-01-01T00:00:00.000Z"
+                    end = value + "-12-31T23:59:59.999Z"
+                    filterQueries.append('created_at:['+start+'%20TO%20'+end+']')
+                elif key == 'table':
+                    filterQueries.append('type:' + value)
+                elif key == 'glossary_title':
+                    # Alphabetical range filter on the lower-cased title; the
+                    # trailing 'z' extends the upper bound so titles starting
+                    # with that letter are included.
+                    titleRange = value.lower().split('-')
+                    filterQueries.append('{!frange%20l=' + titleRange[0] + '%20u=' + titleRange[1] + 'z}' + 'title_lc')
+                elif key == 'sort':
+                    sort = urllib.quote(value)
+                elif key == 'topic_id':
+                    filterQueries.append('categories_id:' + value)
+                elif key == 'mission_id':
+                    filterQueries.append('mission_ids_array:' + value)
+                else:
+                    if type(value) is list:
+                        if 'table' in parameters and parameters['table'] == 'news_items':
+                            filterQueries.append(key + ':(' + '+OR+'.join([self._urlEncodeSolrQueryValue(v) for v in value]) + ')')
+                        else:
+                            for v in value:
+                                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(v))
+                    else:
+                        filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        logging.debug('solr query: '+query)
+
+        return query
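
  For orientation, a worked example of what _constructSolrQuery produces
  (sample parameters are hypothetical; fq ordering follows Python 2 dict
  iteration order and is not guaranteed):

    # parameters = {'keyword': 'ocean', 'table': 'news_items', 'year': '2016'}
    # startIndex = 0, entriesPerPage = 10
    #
    # q=ocean&version=2.2&indent=on&wt=json&start=0&rows=10
    # &fq=status:1+AND+type:news_items
    #     +AND+created_at:[2016-01-01T00:00:00.000Z%20TO%202016-12-31T23:59:59.999Z]
    # (fq shown wrapped for readability; the real string is one line)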

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/content/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/content/__init__.py b/src/main/python/plugins/slcp/content/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/content/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/content/plugin.conf b/src/main/python/plugins/slcp/content/plugin.conf
new file mode 100644
index 0000000..3e74010
--- /dev/null
+++ b/src/main/python/plugins/slcp/content/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/content
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,sort,categories_id,status,is_in_resource_list,category_type_id,is_faq,table,featured,year,publication_year,glossary_title,id,topic_id,mission_id
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/content/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/content/template.xml b/src/main/python/plugins/slcp/content/template.xml
new file mode 100755
index 0000000..357ff59
--- /dev/null
+++ b/src/main/python/plugins/slcp/content/template.xml
@@ -0,0 +1,158 @@
+{"items":[
+{% for doc in docs %}
+{
+{% if doc['type'] == 'resources' %}
+    "alt_text": {{ doc['alt_text'] | jsonify }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "credit": {{ doc['credit'] | jsonify }},
+    "detail_content_type": {{ doc['detail_content_type'] | jsonify }},
+    "detail_file_name": {{ doc['detail_file_name'] | jsonify }},
+    "detail_file_size": {{ doc['detail_file_size'] | jsonify }},
+    "detail_image": {{ doc['detail_image'] | jsonify }},
+    "embed_code": {{ doc['embed_code'] | jsonify }},
+    "external_url": {{ doc['external_url'] | jsonify }},
+    "featured": {{ doc['featured'] | jsonify }},
+    "id": {{ doc['id'] }},
+    "is_in_resource_list": {{ doc['is_in_resource_list'] | jsonify }},
+    "is_latest": {{ doc['is_latest'] | jsonify }},
+    "latest_image_file_name": {{ doc['latest_image_file_name'] | jsonify }},
+    "list_image_content_type": {{ doc['list_image_content_type'] | jsonify }},
+    "list_image_file_name": {{ doc['list_image_file_name'] | jsonify }},
+    "list_image_file_size": {{ doc['list_image_file_size'] | jsonify }},
+    "list_image_src": "/system/resources/list_images/{{ doc['id'] }}_{{ doc['list_image_file_name'] }}",
+    "long_description": {{ doc['long_description'] | jsonify }},
+    "pub_date": {{ doc['pub_date'] | jsonify }},
+    "short_description": {{ doc['short_description'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "table": {{ doc['type'] | jsonify }},
+    "title": {{ doc['title'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "updated_by": {{ doc['updated_by'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }}
+{% elif doc['type'] == 'publications' %}
+    "abstract": {{ doc['abstract'] | jsonify }},
+    "author_address": {{ doc['author_address'] | jsonify }},
+    "authors": {{ doc['authors'] | jsonify }},
+    "category": {{ doc['category'] | jsonify }},
+    "citation": {{ doc['citation'] | jsonify }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "external_uid": {{ doc['external_uid'] | jsonify }} ,
+    "file_content_type": {{ doc['file_content_type'] | jsonify }},
+    "file_file_name": {{ doc['file_file_name'] | jsonify }},
+    "file_file_size": {{ doc['file_file_size'] | jsonify }},
+    "graph_category": {{ doc['graph_category'] | jsonify }},
+    "id": {{ doc['id'] }},
+    "is_peer_reviewed": {{ doc['is_peer_reviewed'] | lower }},
+    "keywords": {{ doc['keywords'] | jsonify }},
+    "link": {{ doc['link'] | jsonify }},
+    "periodical": {
+        "created_at": {{ doc['periodical_created_at'] | jsonify }},
+        "id": {{ doc['periodical_id'] }},
+        "title": {{ doc['periodical_title'] | jsonify }},
+        "updated_at": {{ doc['periodical_updated_at'] | jsonify }},
+        "updated_by": {{ doc['periodical_updated_by'] | jsonify }}
+    },
+    "periodical_id": {{ doc['periodical_id'] }},
+    "publication_year": {{ doc['publication_year'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "table": {{ doc['type'] | jsonify }},
+    "title": {{ doc['title'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "updated_by": {{ doc['updated_by'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }}
+{% elif doc['type'] == 'news_items' %}
+    "categories_id": {{ doc['categories_id'] | jsonify }},
+    "categories_title": {{ doc['categories_title'] | jsonify }},
+    "date": "{{ doc['publish_date'] | convertISOTime('%B %-d, %Y') }}",
+    "description": {{ doc['long_description'] | jsonify }},
+    "id": {{ doc['id']}},
+    "main_image_alt": {{ doc['main_image_alt'] | jsonify }},
+    "main_image_caption": {{ doc['main_image_caption'] | jsonify }},
+    "main_image_file_name": {{ doc['main_image_file_name'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "table": {{ doc['type'] | jsonify }},
+    "target": "_self",
+    "thumb": "/system/news_items/list_view_images/{{ doc['id']}}_{{ doc['list_view_image_file_name']}}",
+    "title": {{ doc['title'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }},
+    "mission_ids": {{ doc['mission_ids_array'] | jsonify }}
+{% elif doc['type'] == 'glossary_items' %}
+    "caption": {{ doc['caption'] | jsonify }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "credit": {{ doc['credit'] | jsonify }},
+    "definition": {{ doc['definition'] | jsonify }},
+    "id": {{ doc['id']}},
+    "list_view_image_alt": {{ doc['list_view_image_alt'] | jsonify }},
+    "list_view_image_file_name": {{ doc['list_view_image_file_name'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "table": {{ doc['type'] | jsonify }},
+    "title": {{ doc['title'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }},
+    "video_embed_code": {{ doc['video_embed_code'] | jsonify }}
+{% elif doc['type'] == 'content_pages' %}
+    "body": {{ doc['body'] | jsonify }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "id": {{ doc['id']}},
+    "main_image_alt": {{ doc['main_image_alt'] | jsonify }},
+    "main_image_caption": {{ doc['main_image_caption'] | jsonify }},
+    "main_image_file_name": {{ doc['main_image_file_name'] | jsonify }},
+    "meta_desc": {{ doc['meta_desc'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "subnav_title": {{ doc['subnav_title'] | jsonify }},
+    "table": {{ doc['type'] | jsonify }},
+    "title": {{ doc['title'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "updated_by": {{ doc['updated_by'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }}
+{% elif doc['type'] == 'faq_items' %}
+    "answer": {{ doc['answer'] | jsonify }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "data_release_version": {{ doc['data_release_version'] | jsonify }},
+    "id": {{ doc['id']}},
+    "is_faq": {{ doc['is_faq'] | jsonify }},
+    "keywords": {{ doc['keywords'] | jsonify }},
+    "physical_product": {{ doc['physical_product'] | jsonify }},
+    "post_date": {{ doc['post_date'] | jsonify }},
+    "question": {{ doc['question'] | jsonify }},
+    "question_type": {{ doc['question_type'] | jsonify }},
+    "sent_by": {{ doc['sent_by'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "subject": {{ doc['subject'] | jsonify }},
+    "table": {{ doc['type'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "updated_by": {{ doc['updated_by'] | jsonify }},
+    "url": {{ doc['permalink_path'] | jsonify }}
+{% elif doc['type'] == 'missions' %}
+    "start_date": {{ doc['start_date'] | jsonify }},
+    "end_date": {{ doc['end_date'] | jsonify }},
+    "title": {{ doc['title'] | jsonify }},
+    "id": {{ doc['id']}},
+    "url": {{ doc['permalink_path'] | jsonify }},
+    "short_description": {{ doc['short_description'] | jsonify }},
+    "body": {{ doc['body'] | jsonify }},
+    "image_file_name": {{ doc['image_file_name'] | jsonify }},
+    "list_view_image_file_name": {{ doc['list_view_image_file_name'] | jsonify }},
+    "list_view_image_alt": {{ doc['list_view_image_alt'] | jsonify }},
+    "image_alt": {{ doc['image_alt'] | jsonify }},
+    "status": {{ doc['status'] }},
+    "created_at": {{ doc['created_at'] | jsonify }},
+    "short_title": {{ doc['short_title'] | jsonify }},
+    "updated_at": {{ doc['updated_at'] | jsonify }},
+    "sidebar_body": {{ doc['sidebar_body'] | jsonify }}
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+,"more":
+{% if (startIndex + itemsPerPage) < numFound %}
+true
+{% else %}
+false
+{% endif %}
+,"total":{{ numFound }}
+,"page":{{ (startIndex / itemsPerPage + 1) | int }}
+,"per_page":{{ itemsPerPage }}}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/dat/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/dat/Writer.py b/src/main/python/plugins/slcp/dat/Writer.py
new file mode 100644
index 0000000..1bdfe55
--- /dev/null
+++ b/src/main/python/plugins/slcp/dat/Writer.py
@@ -0,0 +1,53 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+
+        for key, value in parameters.iteritems():
+            if key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'slcpShortName':
+                queries.append('SlcpShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'shortName':
+                queries.append('ShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'nexusShortName':
+                queries.append('GlobalAttrNexusShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'inDAT':
+                filterQueries.append('InDAT:%s' % value)
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        query += '&sort=' + urllib.quote("DATOrder desc,ShortName asc")
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/dat/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/dat/__init__.py b/src/main/python/plugins/slcp/dat/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/dat/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/dat/plugin.conf b/src/main/python/plugins/slcp/dat/plugin.conf
new file mode 100644
index 0000000..2c69d4f
--- /dev/null
+++ b/src/main/python/plugins/slcp/dat/plugin.conf
@@ -0,0 +1,9 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/dataset
+entriesPerPage=10
+maxEntriesPerPage=2000
+parameters=id,slcpShortName,shortName,nexusShortName,inDAT
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/dat/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/dat/template.json b/src/main/python/plugins/slcp/dat/template.json
new file mode 100644
index 0000000..f7e96ae
--- /dev/null
+++ b/src/main/python/plugins/slcp/dat/template.json
@@ -0,0 +1,33 @@
+{
+"Datasets": [
+{% for doc in docs %}
+
+{% for i in range(doc['GlobalAttrTitle']|count)  %}
+{
+"GlobalAttrShortName": "{{ doc['ShortName'] }}",
+"GlobalAttrNexusShortName": "{{ doc['GlobalAttrNexusShortName'][i] }}",
+"GlobalAttrTitle": "{{ doc['GlobalAttrTitle'][i] }}",
+"GlobalAttrDescription": "{{ doc['GlobalAttrDescription'][i] }}",
+"GlobalAttrSource": "{{ doc['GlobalAttrSource'][i] }}",
+"GlobalAttrContact": "{{ doc['GlobalAttrContact'][i] }}",
+"GlobalAttrUnits": "{{ doc['GlobalAttrUnits'][i] }}"
+{% if 'SupportsBasin' in doc %}
+,"SupportsBasin": {{ doc['SupportsBasin'] | jsonify }}
+{% endif %}
+{% if 'Climatology' in doc %}
+,"Climatology": {{ doc['Climatology'] | jsonify }}
+{% endif %}
+{% if 'IsClimatology' in doc %}
+,"IsClimatology": {{ doc['IsClimatology'] | jsonify }}
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
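
  A sketch of rendering this template directly with Jinja2, assuming a
  jsonify filter equivalent to json.dumps (the real filter is registered by
  the edge response classes) and a single hypothetical doc:

    import json
    from jinja2 import Environment

    env = Environment()
    env.filters['jsonify'] = json.dumps  # assumption: mirrors the edge filter

    template = env.from_string(open('template.json').read())
    doc = {
        'ShortName': 'SAMPLE_DATASET',
        'GlobalAttrNexusShortName': ['sample'],
        'GlobalAttrTitle': ['Sample Title'],
        'GlobalAttrDescription': ['Sample description'],
        'GlobalAttrSource': ['Sample source'],
        'GlobalAttrContact': ['someone@example.com'],
        'GlobalAttrUnits': ['m'],
    }
    print(template.render(docs=[doc]))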

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/echo10/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/echo10/Writer.py b/src/main/python/plugins/slcp/echo10/Writer.py
new file mode 100644
index 0000000..ebafc40
--- /dev/null
+++ b/src/main/python/plugins/slcp/echo10/Writer.py
@@ -0,0 +1,39 @@
+import logging
+import os
+import os.path
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        for key, value in parameters.iteritems():
+            if key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'slcpShortName':
+                queries.append('SlcpShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'shortName':
+                queries.append('ShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'nexusShortName':
+                queries.append('GlobalAttrNexusShortName:' + self._urlEncodeSolrQueryValue(value))
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/echo10/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/echo10/__init__.py b/src/main/python/plugins/slcp/echo10/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/echo10/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/echo10/plugin.conf b/src/main/python/plugins/slcp/echo10/plugin.conf
new file mode 100644
index 0000000..b9f3e23
--- /dev/null
+++ b/src/main/python/plugins/slcp/echo10/plugin.conf
@@ -0,0 +1,8 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/dataset
+entriesPerPage=1
+parameters=id,slcpShortName,shortName,nexusShortName
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/echo10/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/echo10/template.xml b/src/main/python/plugins/slcp/echo10/template.xml
new file mode 100644
index 0000000..99de2f8
--- /dev/null
+++ b/src/main/python/plugins/slcp/echo10/template.xml
@@ -0,0 +1,190 @@
+<Collection>
+<ShortName>{{ docs[0]['ShortName'] }}</ShortName>
+<VersionId>{{ docs[0]['VersionId'] }}</VersionId>
+<LastUpdate>{{ docs[0]['LastUpdate'] }}</LastUpdate>
+<LongName>{{ docs[0]['LongName'] }}</LongName>
+<DataSetId>{{ docs[0]['DataSetId'] }}</DataSetId>
+<Description>{{ docs[0]['Description'] }}</Description>
+<ProcessingCenter>{{ docs[0]['ProcessingCenter'] }}</ProcessingCenter>
+<ProcessingLevelId>{{ docs[0]['ProcessingLevelId'] }}</ProcessingLevelId>
+<ArchiveCenter>{{ docs[0]['ArchiveCenter'] }}</ArchiveCenter>
+<DataFormat>{{ docs[0]['DataFormat'] }}</DataFormat>
+<SpatialKeywords>
+{% for keyword in docs[0]['SpatialKeywords-Keyword']  %}
+<Keyword>{{ keyword }}</Keyword>
+{% endfor %}
+</SpatialKeywords>
+<Temporal>
+{% for dt in docs[0]['BeginningEndingDateTime']  %}
+<RangeDateTime>
+{% if ' ' not in dt %}
+<BeginningDateTime>{{ dt }}</BeginningDateTime>
+<EndingDateTime>{{ dt }}</EndingDateTime>
+{% else %}
+{% if '*' not in dt[1:dt.index(' ')] %}
+<BeginningDateTime>{{ dt[1:dt.index(' ')] }}</BeginningDateTime>
+{% else %}
+<BeginningDateTime></BeginningDateTime>
+{% endif %}
+{% if '*' not in dt %}
+<EndingDateTime>{{ dt[dt.rindex(' ')+1:-1] }}</EndingDateTime>
+{% else %}
+<EndingDateTime></EndingDateTime>
+{% endif %}
+{% endif %}
+</RangeDateTime>
+{% endfor %}
+</Temporal>
+{% if docs[0]['CategoryKeyword'] %}
+<ScienceKeywords>
+{% for i in range(docs[0]['CategoryKeyword']|count)  %}
+<ScienceKeyword>
+<CategoryKeyword>{{ docs[0]['CategoryKeyword'][i] }}</CategoryKeyword>
+<TopicKeyword>{{ docs[0]['TopicKeyword'][i] }}</TopicKeyword>
+<TermKeyword>{{ docs[0]['TermKeyword'][i] }}</TermKeyword>
+<VariableLevel1Keyword>
+<Value>{{ docs[0]['VariableLevel1Keyword'][i] }}</Value>
+</VariableLevel1Keyword>
+{% if docs[0]['DetailedVariableKeyword'] %}
+<DetailedVariableKeyword>{{ docs[0]['DetailedVariableKeyword'][i] }}</DetailedVariableKeyword>
+{% endif %}
+</ScienceKeyword>
+{% endfor %}
+</ScienceKeywords>
+{% endif %}
+<Platforms>
+{% for i in range(docs[0]['Platform-ShortName']|count)  %}
+<Platform>
+<ShortName>{{ docs[0]['Platform-ShortName'][i] }}</ShortName>
+<LongName>{{ docs[0]['Platform-LongName'][i] }}</LongName>
+{% if docs[0]['Instrument-ShortName_' + i|string] %}
+<Instruments>
+{% for j in range(docs[0]['Instrument-ShortName_' + i|string]|count)  %}
+<Instrument>
+<ShortName>{{ docs[0]['Instrument-ShortName_' + i|string][j] }}</ShortName>
+<LongName>{{ docs[0]['Instrument-LongName_' + i|string][j] }}</LongName>
+{% if docs[0]['Sensor-ShortName_' + i|string + '_' + j|string] %}
+<Sensors>
+{% for k in range(docs[0]['Sensor-ShortName_' + i|string + '_' + j|string]|count)  %}
+<Sensor>
+<ShortName>{{ docs[0]['Sensor-ShortName_' + i|string + '_' + j|string][k] }}</ShortName>
+<LongName>{{ docs[0]['Sensor-LongName_' + i|string + '_' + j|string][k] }}</LongName>
+</Sensor>
+{% endfor %}
+</Sensors>
+{% endif %}
+</Instrument>
+{% endfor %}
+</Instruments>
+{% endif %}
+</Platform>
+{% endfor %}
+</Platforms>
+<AdditionalAttributes>
+{% if docs[0]['TemporalResolution'] %}
+<AdditionalAttribute>
+<Name>Temporal Resolution</Name>
+<Value>{{docs[0]['TemporalResolution']}}</Value>
+</AdditionalAttribute>
+{% endif %}
+{% if docs[0]['LatitudeResolution'] %}
+<AdditionalAttribute>
+<Name>Spatial Resolution</Name>
+<Value>{{docs[0]['LatitudeResolution']}} degrees (Latitude) x {{docs[0]['LongitudeResolution']}} degrees (Longitude)</Value>
+</AdditionalAttribute>
+{% endif %}
+{% if docs[0]['AcrossTrackResolution'] %}
+<AdditionalAttribute>
+<Name>Spatial Resolution</Name>
+<Value>{{docs[0]['AlongTrackResolution'] / 1000 }} km (Along) x {{docs[0]['AcrossTrackResolution'] / 1000 }} km (Across)</Value>
+</AdditionalAttribute>
+{% endif %}
+{% if docs[0]['DOI'] %}
+<AdditionalAttribute>
+<Name>DOI</Name>
+<Value>{{docs[0]['DOI']}}</Value>
+</AdditionalAttribute>
+{% endif %}
+{% for key in ['GlobalAttrTitle', 'GlobalAttrDescription', 'GlobalAttrSource', 'GlobalAttrContact', 'GlobalAttrUnits'] %}
+{% if docs[0][key] %}
+<AdditionalAttribute>
+<Name>{{ key }}</Name>
+<Value>{{ docs[0][key] }}</Value>
+</AdditionalAttribute>
+{% endif %}
+{% endfor %}
+</AdditionalAttributes>
+<Campaigns>
+{% for i in range(docs[0]['Campaign-ShortName']|count)  %}
+<Campaign>
+<ShortName>{{ docs[0]['Campaign-ShortName'][i] }}</ShortName>
+<LongName>{{ docs[0]['Campaign-LongName'][i] }}</LongName>
+</Campaign>
+{% endfor %}
+</Campaigns>
+{% if docs[0]['OnlineAccessURL-URL'] %}
+<OnlineAccessURLs>
+{% for i in range(docs[0]['OnlineAccessURL-URL']|count)  %}
+<OnlineAccessURL>
+<URL>{{ docs[0]['OnlineAccessURL-URL'][i] }}</URL>
+<URLDescription>{{ docs[0]['OnlineAccessURL-URLDescription'][i] }}</URLDescription>
+</OnlineAccessURL>
+{% endfor %}
+</OnlineAccessURLs>
+{% endif %}
+{% if docs[0]['OnlineResource-URL'] %}
+<OnlineResources>
+{% for i in range(docs[0]['OnlineResource-URL']|count)  %}
+<OnlineResource>
+<URL>{{ docs[0]['OnlineResource-URL'][i] }}</URL>
+{% if docs[0]['OnlineResource-Description'] %}
+<Description>{{ docs[0]['OnlineResource-Description'][i] }}</Description>
+{% endif %}
+<Type>{{ docs[0]['OnlineResource-Type'][i] }}</Type>
+</OnlineResource>
+{% endfor %}
+</OnlineResources>
+{% endif %}
+{% if docs[0]['Spatial-Geometry'] %}
+<Spatial>
+<HorizontalSpatialDomain>
+<Geometry>
+{% for box in docs[0]['Spatial-Box']  %}
+<BoundingRectangle>
+<WestBoundingCoordinate>{{ box.split()[1] }}</WestBoundingCoordinate>
+<NorthBoundingCoordinate>{{ box.split()[2] }}</NorthBoundingCoordinate>
+<EastBoundingCoordinate>{{ box.split()[3] }}</EastBoundingCoordinate>
+<SouthBoundingCoordinate>{{ box.split()[0] }}</SouthBoundingCoordinate>
+</BoundingRectangle>
+{% endfor %}
+{% for polygon in docs[0]['Spatial-Polygon']  %}
+<GPolygon>
+<Boundary>
+{% for i in range(0, polygon.split()|count, 2) %}
+<Point>
+<PointLongitude>{{ polygon.split()[i+1] }}</PointLongitude>
+<PointLatitude>{{ polygon.split()[i] }}</PointLatitude>
+</Point>
+{% endfor %}
+</Boundary>
+</GPolygon>
+{% endfor %}
+</Geometry>
+</HorizontalSpatialDomain>
+</Spatial>
+{% endif %}
+<CollectionAssociations>
+{% for i in range(docs[0]['CollectionAssociation-ShortName']|count)  %}
+<CollectionAssociation>
+<ShortName>{{ docs[0]['CollectionAssociation-ShortName'][i] }}</ShortName>
+<LongName>{{ docs[0]['CollectionAssociation-LongName'][i] }}</LongName>
+<VersionId>{{ docs[0]['CollectionAssociation-VersionId'][i] }}</VersionId>
+<CollectionType>{{ docs[0]['CollectionAssociation-CollectionType'][i] }}</CollectionType>
+<CollectionUse>{{ docs[0]['CollectionAssociation-CollectionUse'][i] }}</CollectionUse>
+{% if 'CollectionAssociation-URL' in docs[0] %}
+<URL>{{ docs[0]['CollectionAssociation-URL'][i] }}</URL>
+{% endif %}
+</CollectionAssociation>
+{% endfor %}
+</CollectionAssociations>
+</Collection>
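
  The ECHO10 template above emits one BoundingRectangle per entry in the
  Spatial-Box field. A minimal sketch of the coordinate mapping, assuming Solr
  stores each box as a space-separated "south west north east" string (the
  order the template indexes into with split()):

    def box_to_bounding_rectangle(box):
        # assumed storage order: south west north east
        south, west, north, east = box.split()
        return {'WestBoundingCoordinate': west,
                'NorthBoundingCoordinate': north,
                'EastBoundingCoordinate': east,
                'SouthBoundingCoordinate': south}

    # box_to_bounding_rectangle('-90 -180 90 180')
    # -> {'WestBoundingCoordinate': '-180', 'NorthBoundingCoordinate': '90',
    #     'EastBoundingCoordinate': '180', 'SouthBoundingCoordinate': '-90'}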

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/facet/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/facet/Writer.py b/src/main/python/plugins/slcp/facet/Writer.py
new file mode 100644
index 0000000..4194bde
--- /dev/null
+++ b/src/main/python/plugins/slcp/facet/Writer.py
@@ -0,0 +1,70 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrfacettemplateresponse import SolrFacetTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.facet = True
+        self.contentType = 'application/json'
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrFacetTemplateResponse(self.facetDefs)
+        response.setTemplate(self.template)
+        
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    queries.append('EndingDateTime-Internal:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    queries.append('BeginningDateTime:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('Spatial-Geometry:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'concept_id':
+                    queries.append('concept-id:' + self._urlEncodeSolrQueryValue(value))
+
+        for key, value in facets.iteritems():
+            tagKey = '{!tag=' + key + '}' + key
+            if type(value) is list:
+                if (len(value) == 1):
+                    filterQueries.append(tagKey + ':' + self._urlEncodeSolrQueryValue(value[0]))
+                else:
+                    filterQueries.append(tagKey + ':(' + '+OR+'.join([ self._urlEncodeSolrQueryValue(x) for x in value ]) + ")")
+            else:    
+                filterQueries.append(tagKey + ':' + self._urlEncodeSolrQueryValue(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'&fq='.join(filterQueries)
+        
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&'
+            query += '&'.join(['facet.field={!ex=' + facet +'}' + facet if facet in facets else 'facet.field=' + facet for facet in self.facetDefs.values()])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query
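
  The query builder above uses Solr's tag/exclude pattern for multi-select
  faceting: each facet filter is tagged with {!tag=field}, and the matching
  facet.field is issued with {!ex=field} so the counts for a selected facet
  still show its other values. A minimal sketch, assuming facetDefs maps
  response labels to Solr field names:

    def build_facet_query(facet_defs, selected):
        # selected: {solr_field: value} filters chosen by the user
        fq = ['{!tag=%s}%s:%s' % (f, f, v) for f, v in selected.items()]
        facet_fields = []
        for field in facet_defs.values():
            if field in selected:
                # exclude this facet's own filter so its full value list survives
                facet_fields.append('facet.field={!ex=%s}%s' % (field, field))
            else:
                facet_fields.append('facet.field=%s' % field)
        return fq, facet_fields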

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/facet/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/facet/__init__.py b/src/main/python/plugins/slcp/facet/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/facet/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/facet/plugin.conf b/src/main/python/plugins/slcp/facet/plugin.conf
new file mode 100644
index 0000000..21627fd
--- /dev/null
+++ b/src/main/python/plugins/slcp/facet/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/dataset
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,concept_id
+facets={"Collection": "Collection", "Processing_Level": "ProcessingLevelBin", "Swath_Spatial_Resolution": "SwathSpatialResolution", "Grid_Spatial_Resolution": "GridSpatialResolution", "Temporal_Resolution": "TemporalResolution", "Parameter": "TermKeyword"}
+
+[service]
+url=http://localhost:8890
+template=template.xml
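
  The facets option is a JSON object mapping the labels used in the response
  to the Solr fields being faceted on. A minimal sketch of loading it,
  assuming the base SolrTemplateResponseWriter does the equivalent when it
  populates self.facetDefs:

    import json
    try:
        import configparser                  # Python 3
    except ImportError:
        import ConfigParser as configparser  # Python 2, as used by this codebase

    config = configparser.ConfigParser()
    config.read('plugin.conf')
    facetDefs = json.loads(config.get('solr', 'facets'))
    # facetDefs['Processing_Level'] == 'ProcessingLevelBin'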

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/facet/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/facet/template.xml b/src/main/python/plugins/slcp/facet/template.xml
new file mode 100644
index 0000000..c202f54
--- /dev/null
+++ b/src/main/python/plugins/slcp/facet/template.xml
@@ -0,0 +1,41 @@
+{
+"facets":[
+{% for key, facet in facetDefs.iteritems() %}
+{
+"field":"{{ key }}",
+"values":[
+{% if facet == 'GridSpatialResolution' or facet == 'SwathSpatialResolution'  %}
+{% for i in range(facets['facet_fields'][facet]|count, 0, -2) %}
+{
+"count":{{ facets['facet_fields'][facet][i-1] }},
+"value":"{{ facets['facet_fields'][facet][i-2] }}",
+"displayedValue":"{{ facets['facet_fields'][facet][i-2] }}{% if facet == 'GridSpatialResolution' %} degree(s){% endif %}{% if facet == 'SwathSpatialResolution' %} km{% endif %}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+{% else %}
+{% for i in range(0, facets['facet_fields'][facet]|count, 2) %}
+{
+"count":{{ facets['facet_fields'][facet][i+1] }},
+"value":"{{ facets['facet_fields'][facet][i] }}",
+{% if facet == 'ProcessingLevelBin' %}
+"displayedValue":"Level-{{ facets['facet_fields'][facet][i] }}{% if facets['facet_fields'][facet][i] == '2' %} (Swath){% elif facets['facet_fields'][facet][i] == '3' %} (Grid){% elif facets['facet_fields'][facet][i] == '4' %} (Blended){% endif %}"
+{% else %}
+"displayedValue":"{{ facets['facet_fields'][facet][i] }}"
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+{% endif %}
+]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
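
  Solr returns each entry in facet_fields as a flat value/count list, which is
  why the template above walks the array two elements at a time, and in
  reverse for the resolution facets so the largest bins come first. A minimal
  sketch of the pairing:

    facet_fields = {'ProcessingLevelBin': ['2', 10, '3', 7, '4', 1]}
    values = facet_fields['ProcessingLevelBin']
    pairs = [(values[i], values[i + 1]) for i in range(0, len(values), 2)]
    # pairs == [('2', 10), ('3', 7), ('4', 1)]
    reversed_pairs = [(values[i - 2], values[i - 1])
                      for i in range(len(values), 0, -2)]
    # reversed_pairs == [('4', 1), ('3', 7), ('2', 10)]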

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/granule/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/granule/Writer.py b/src/main/python/plugins/slcp/granule/Writer.py
new file mode 100644
index 0000000..e462e1c
--- /dev/null
+++ b/src/main/python/plugins/slcp/granule/Writer.py
@@ -0,0 +1,79 @@
+import logging
+import os
+import os.path
+import urllib
+import json
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.opensearch.solrtemplateresponse import SolrTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrTemplateResponse(self._configuration, searchUrl, searchParams)
+        response.setTemplate(self.template)
+        response.variables['serviceUrl'] = self._configuration.get('service', 'url')
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        sortKeys = json.loads(self._configuration.get('solr', 'sortKeys'))
+
+        queries = []
+        filterQueries = []
+        sort = None
+        sortDir = 'asc'
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    queries.append('EndingDateTime:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    queries.append('BeginningDateTime:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('Spatial-Geometry:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'shortName':
+                    queries.append('ShortName:' + self._urlEncodeSolrQueryValue(value))
+                elif key == 'sortKey':
+                    if value in sortKeys.keys():
+                        sort = sortKeys[value]
+                elif key == 'sortDir':
+                    sortDir = value
+
+        for key, value in facets.iteritems():
+            if type(value) is list:
+                if (len(value) == 1):
+                    filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value[0]))
+                else:
+                    filterQueries.append(key + ':(' + '+OR+'.join([ self._urlEncodeSolrQueryValue(x) for x in value ]) + ")")
+            else:    
+                filterQueries.append(key + ':' + self._urlEncodeSolrQueryValue(value))
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+        
+        if self.facet:
+            query += '&rows=0&facet=true&facet.limit=-1&facet.mincount=1&'
+            query += '&'.join(['facet.field=' + facet for facet in self.facetDefs.values()])
+        else:
+            query += '&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+            if sort is not None:
+                query += '&sort=' + urllib.quote(sort + ' ' + sortDir)
+
+        logging.debug('solr query: '+query)
+
+        return query
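
  The sortKey parameter is validated against the sortKeys map from
  plugin.conf, so callers can only sort on whitelisted fields; unknown keys
  are silently ignored and sortDir defaults to ascending. A minimal sketch:

    import json
    import urllib  # Python 2 urllib, as used throughout this codebase

    sortKeys = json.loads('{"Relevance": "score", "Start_Date": "BeginningDateTime"}')
    sort = sortKeys.get('Start_Date')        # None for non-whitelisted keys
    if sort is not None:
        param = '&sort=' + urllib.quote(sort + ' ' + 'desc')
        # param == '&sort=BeginningDateTime%20desc'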

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/granule/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/granule/__init__.py b/src/main/python/plugins/slcp/granule/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/granule/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/granule/plugin.conf b/src/main/python/plugins/slcp/granule/plugin.conf
new file mode 100644
index 0000000..2095306
--- /dev/null
+++ b/src/main/python/plugins/slcp/granule/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/granule
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,sortKey,sortDir,shortName
+facets={}
+sortKeys={"Relevance": "score", "Start_Date": "BeginningDateTime", "Stop_Date": "EndingDateTime", "Last_Updated": "LastUpdate"}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/granule/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/granule/template.xml b/src/main/python/plugins/slcp/granule/template.xml
new file mode 100755
index 0000000..8457aa4
--- /dev/null
+++ b/src/main/python/plugins/slcp/granule/template.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed esipdiscovery:version="1.2" xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/terms/" xmlns:echo="http://www.echo.nasa.gov/esip" xmlns:esipdiscovery="http://commons.esipfed.org/ns/discovery/1.2/" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml" xmlns:os="http://a9.com/-/spec/opensearch/1.1/" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
+<updated>{{ updated }}</updated>
+<id></id>
+<author>
+<name>SLCP</name>
+<email></email>
+</author>
+<title type="text">SLCP granule metadata</title>
+<link href="{{ myself }}" hreflang="en-US" rel="self" type="application/atom+xml" />
+{% if last %}<link href="{{ last }}" hreflang="en-US" rel="last" type="application/atom+xml" />{% endif %}
+{% if prev %}<link href="{{ prev }}" hreflang="en-US" rel="previous" type="application/atom+xml" />{% endif %}
+{% if next %}<link href="{{ next }}" hreflang="en-US" rel="next" type="application/atom+xml" />{% endif %}
+{% if first %}<link href="{{ first }}" hreflang="en-US" rel="first" type="application/atom+xml" />{% endif %}
+<link href="https://wiki.earthdata.nasa.gov/display/echo/Open+Search+API+release+information" hreflang="en-US" rel="describedBy" title="Release Notes" type="text/html"/>
+<os:totalResults>{{ numFound }}</os:totalResults>
+<os:itemsPerPage>{{ itemsPerPage }}</os:itemsPerPage>
+<os:startIndex>{{ startIndex }}</os:startIndex>
+{% for doc in docs %}
+<entry>
+{% if 0 == 1 %}
+<id>https://api.echo.nasa.gov:443/opensearch/granules.atom?uid=G752080-PODAAC</id>
+{% endif %}
+<title type="text">{{ doc['GranuleUR'] }}</title>
+<updated>{{ doc['LastUpdate'] }}</updated>
+{% for i in range(doc['OnlineAccessURL-URL']|count)  %}
+<link href="{{ doc['OnlineAccessURL-URL'][i] }}" hreflang="en-US" rel="enclosure" {% if doc['OnlineAccessURL-URLDescription'] and doc['OnlineAccessURL-URLDescription'][i] != ''  %} title="Data Access {{ doc['OnlineAccessURL-URLDescription'][i] }}" {% endif %} />
+{% endfor %}
+{% if 0 == 1 %}
+<dc:identifier>G752080-PODAAC</dc:identifier>
+{% endif %}
+<dc:date>{{ doc['BeginningDateTime'] }}/{{ doc['EndingDateTime'] }}</dc:date>
+{% if 0 == 1 %}
+<echo:datasetId>GHRSST Level 2P USA NASA MODIS Aqua SST:1</echo:datasetId>
+{% endif %}
+<echo:granuleSizeMB>{{ doc['FileSize'] / 1048576  }}</echo:granuleSizeMB>
+{% if 0 == 1 %}
+<echo:originalFormat>ECHO10</echo:originalFormat>
+<echo:dataCenter>PODAAC</echo:dataCenter>
+<echo:orbitCalSpatialDomain/>
+<echo:coordinateSystem>CARTESIAN</echo:coordinateSystem>
+{% endif %}
+{% if doc['Spatial-Geometry'][0].startswith('ENVELOPE')  %}
+{% set box = doc['Spatial-Geometry'][0][9:-1].split(',') %}
+<georss:box>{{ box[3] }} {{ box[0] }} {{ box[2] }} {{ box[1] }}</georss:box>
+{% endif %}
+</entry>
+{% endfor %}
+</feed>
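
  The Atom template above converts a Solr envelope into a georss:box. Solr's
  ENVELOPE syntax orders the corners ENVELOPE(minX, maxX, maxY, minY), while
  georss:box wants "minLat minLon maxLat maxLon", hence the index shuffle. A
  minimal sketch:

    def envelope_to_georss_box(geom):
        # geom e.g. 'ENVELOPE(-180,180,90,-90)'; strip 'ENVELOPE(' and ')'
        min_x, max_x, max_y, min_y = geom[9:-1].split(',')
        return '%s %s %s %s' % (min_y, min_x, max_y, max_x)

    # envelope_to_georss_box('ENVELOPE(-180,180,90,-90)') -> '-90 -180 90 180'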

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/indicator/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/indicator/Writer.py b/src/main/python/plugins/slcp/indicator/Writer.py
new file mode 100644
index 0000000..2d02f45
--- /dev/null
+++ b/src/main/python/plugins/slcp/indicator/Writer.py
@@ -0,0 +1,64 @@
+import logging
+import urllib
+import json
+
+from edge.writer.proxywriter import ProxyWriter
+
+class Writer(ProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+    def _generateUrl(self, requestHandler):
+        url = self._configuration.get('solr', 'url')
+        parameters = {}
+        parameters['wt'] = 'json'
+        parameters['omitHeader'] = 'true'
+        parameters['q'] = '*:*'
+        try:
+            parameters['fq'] = 'id:"' + requestHandler.get_argument('id') + '"'
+        except:
+            parameters['fl'] = 'id,name,rate,uncertainties,unit,shortenUnit,abbrUnit,updated_at'
+        try:
+            if requestHandler.get_argument('latest').lower() == 'true':
+                parameters['fl'] = 'xLatest,yLatest,unit,abbrUnit,updated_at'
+        except:
+            pass
+        url += '/select?' + urllib.urlencode(parameters)
+        logging.debug("proxy to url : " + url)
+        return url
+
+    def onResponse(self, response):
+        if response.error:
+            self.requestHandler.set_status(404)
+            self.requestHandler.write(str(response.error))
+            self.requestHandler.finish()
+        else:
+            for name, value in response.headers.iteritems():
+                logging.debug('header: '+name+':'+value)
+                self.requestHandler.set_header(name, value)
+            self.requestHandler.set_header('Access-Control-Allow-Origin', '*')
+
+            solrJson = json.loads(response.body)
+            if len(solrJson['response']['docs']) > 1:
+                # Need to order indicators accordingly
+                solrJsonClone = {}
+                solrJsonClone['response'] = {}
+                solrJsonClone['response']['start'] = solrJson['response']['start']
+                solrJsonClone['response']['numFound'] = solrJson['response']['numFound']
+                solrJsonClone['response']['docs'] = []
+
+                indicators = {}
+                for doc in solrJson['response']['docs']:
+                    indicators[doc['id']] = doc
+                for indicator in self._configuration.get('solr', 'ordering').split(','):
+                    if indicator in indicators:
+                        solrJsonClone['response']['docs'].append(indicators[indicator])
+                solrJson = solrJsonClone
+            for doc in solrJson['response']['docs']:
+                if 'uncertainties' in doc:
+                    if doc['id'] in self._configuration.get('solr', 'uncertainties').split(','):
+                        doc['uncertainties'] = int(round(doc['uncertainties']))
+                        doc['rate'] = int(round(doc['rate']))
+
+            self.requestHandler.write(solrJson)
+            self.requestHandler.finish()
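
  Solr returns documents in index order, so when more than one indicator comes
  back the writer rebuilds the list to follow the comma-separated ordering
  option from plugin.conf. A minimal sketch of that reordering:

    ordering = 'Global_Mean_Sea_Level,Ocean_Mass,Steric_Height'
    docs = [{'id': 'Ocean_Mass'}, {'id': 'Global_Mean_Sea_Level'}]
    byId = dict((doc['id'], doc) for doc in docs)
    ordered = [byId[i] for i in ordering.split(',') if i in byId]
    # ordered == [{'id': 'Global_Mean_Sea_Level'}, {'id': 'Ocean_Mass'}]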

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/indicator/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/indicator/__init__.py b/src/main/python/plugins/slcp/indicator/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/indicator/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/indicator/plugin.conf b/src/main/python/plugins/slcp/indicator/plugin.conf
new file mode 100644
index 0000000..5f6b9a6
--- /dev/null
+++ b/src/main/python/plugins/slcp/indicator/plugin.conf
@@ -0,0 +1,4 @@
+[solr]
+url=http://localhost:8983/solr/indicator
+ordering=Global_Mean_Sea_Level,Ocean_Mass,Steric_Height,Greenland_Mass_Variation,Antarctica_Mass_Variation
+uncertainties=Greenland_Mass_Variation,Antarctica_Mass_Variation

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/stats/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/stats/Writer.py b/src/main/python/plugins/slcp/stats/Writer.py
new file mode 100644
index 0000000..f7675af
--- /dev/null
+++ b/src/main/python/plugins/slcp/stats/Writer.py
@@ -0,0 +1,57 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse(parameters=searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+        start = '*'
+        end = '*'
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'ds':
+                    filterQueries.append('datasetShortName:%s' % value)
+                elif key == 'startTime':
+                    start = value
+                elif key == 'endTime':
+                    end = value
+                elif key == 'basinId':
+                    filterQueries.append('basinId_i:%s' % value)
+                elif key == 'basinName':
+                    filterQueries.append('basinName:%s' % value)
+        filterQueries.append('(time:[' + start + '%20TO%20' + end + '])')
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        logging.debug('solr query: '+query)
+
+        return query
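
  startTime and endTime default to '*', so the time filter is open-ended on
  whichever side the caller omits. A minimal sketch, with %20 pre-escaped to
  match the hand-built query strings above:

    def time_filter(start=None, end=None):
        return '(time:[%s%%20TO%%20%s])' % (start or '*', end or '*')

    # time_filter(end='2015-12-31T23:59:59Z')
    # -> '(time:[*%20TO%202015-12-31T23:59:59Z])'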

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/stats/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/stats/__init__.py b/src/main/python/plugins/slcp/stats/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/stats/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/stats/plugin.conf b/src/main/python/plugins/slcp/stats/plugin.conf
new file mode 100644
index 0000000..7ea6616
--- /dev/null
+++ b/src/main/python/plugins/slcp/stats/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/stats
+entriesPerPage=2147483647
+maxEntriesPerPage=2147483647
+defaultSearchParam=keyword
+parameters=keyword,ds,basinId,basinName,startTime,endTime
+facets={}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/stats/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/stats/template.json b/src/main/python/plugins/slcp/stats/template.json
new file mode 100755
index 0000000..3b21951
--- /dev/null
+++ b/src/main/python/plugins/slcp/stats/template.json
@@ -0,0 +1,40 @@
+{
+    "stats": {},
+    "meta": [
+        {
+            "shortName": "{{ parameters['ds'] }}",
+            "basinId": {{ parameters['basinId'] | jsonify }},
+            "time": {
+                "start": {{ parameters['startTime'] }},
+                "stop": {{ parameters['endTime'] }}
+            }
+        }
+    ],
+    "data": [
+      {% for doc in docs %}
+[
+            {
+                "std": {{ doc['std'] }},
+                "cnt": {{ doc['cnt'] }},
+                "minSeasonalLowPass": {{ doc['minSeasonalLowPass'] | jsonify }},
+                "minSeasonal": {{ doc['minSeasonal'] | jsonify }},
+                "maxLowPass": {{ doc['maxLowPass'] | jsonify }},
+                "min": {{ doc['min'] }},
+                "max": {{ doc['max'] }},
+                "meanSeasonal": {{ doc['meanSeasonal'] | jsonify }},
+                "ds": 0,
+                "meanSeasonalLowPass": {{ doc['meanSeasonalLowPass'] | jsonify }},
+                "maxSeasonalLowPass": {{ doc['maxSeasonalLowPass'] | jsonify }},
+                "time": {{ doc['time'] }},
+                "maxSeasonal": {{ doc['maxSeasonal'] | jsonify }},
+                "meanLowPass": {{ doc['meanLowPass'] | jsonify }},
+                "minLowPass": {{ doc['minLowPass'] | jsonify }},
+                "mean": {{ doc['mean'] }}
+            }
+        ]
+      {% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+          ]
+}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/suggest/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/suggest/Writer.py b/src/main/python/plugins/slcp/suggest/Writer.py
new file mode 100644
index 0000000..5fec63d
--- /dev/null
+++ b/src/main/python/plugins/slcp/suggest/Writer.py
@@ -0,0 +1,22 @@
+import logging
+import urllib
+
+from edge.writer.proxywriter import ProxyWriter
+
+class Writer(ProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+    def _generateUrl(self, requestHandler):
+        url = self._configuration.get('solr', 'url')
+        parameters = {}
+        parameters['wt'] = 'json'
+        parameters['suggest'] = 'true'
+        #parameters['suggest.build'] = 'true'
+        try:
+            parameters['suggest.q'] = requestHandler.get_argument('keyword')
+            url += '/suggest?' + urllib.urlencode(parameters)
+        except:
+            raise Exception('Missing keyword parameter.')
+        logging.debug("proxy to url : " + url)
+        return url
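
  This writer is a thin proxy to a Solr suggester. A minimal sketch of the URL
  it builds, assuming the core named in plugin.conf exposes a /suggest request
  handler:

    import urllib  # Python 2 urllib, as used throughout this codebase

    parameters = {'wt': 'json', 'suggest': 'true', 'suggest.q': 'sea surface'}
    url = ('http://localhost:8983/solr/dataset/suggest?'
           + urllib.urlencode(parameters))
    # e.g. .../suggest?wt=json&suggest=true&suggest.q=sea+surface
    # (urlencode does not guarantee parameter order)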

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/suggest/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/suggest/__init__.py b/src/main/python/plugins/slcp/suggest/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/suggest/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/suggest/plugin.conf b/src/main/python/plugins/slcp/suggest/plugin.conf
new file mode 100644
index 0000000..55aa1a3
--- /dev/null
+++ b/src/main/python/plugins/slcp/suggest/plugin.conf
@@ -0,0 +1,2 @@
+[solr]
+url=http://localhost:8983/solr/dataset

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/umm-json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/umm-json/Writer.py b/src/main/python/plugins/slcp/umm-json/Writer.py
new file mode 100644
index 0000000..388e610
--- /dev/null
+++ b/src/main/python/plugins/slcp/umm-json/Writer.py
@@ -0,0 +1,39 @@
+import logging
+import os
+import os.path
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+
+        for key, value in parameters.iteritems():
+            if key == 'id':
+                queries.append('id:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'slcpShortName':
+                queries.append('SlcpShortName:' + self._urlEncodeSolrQueryValue(value))
+            elif key == 'shortName':
+                queries.append('ShortName:' + self._urlEncodeSolrQueryValue(value))
+
+        query = 'q='+'+AND+'.join(queries)+'&version=2.2&indent=on&wt=json'+'&rows='+str(entriesPerPage)
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/umm-json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/umm-json/__init__.py b/src/main/python/plugins/slcp/umm-json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/umm-json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/umm-json/plugin.conf b/src/main/python/plugins/slcp/umm-json/plugin.conf
new file mode 100644
index 0000000..3836fac
--- /dev/null
+++ b/src/main/python/plugins/slcp/umm-json/plugin.conf
@@ -0,0 +1,8 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/dataset
+entriesPerPage=1
+parameters=id,slcpShortName,shortName
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/slcp/umm-json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/slcp/umm-json/template.json b/src/main/python/plugins/slcp/umm-json/template.json
new file mode 100644
index 0000000..1e00e8d
--- /dev/null
+++ b/src/main/python/plugins/slcp/umm-json/template.json
@@ -0,0 +1,274 @@
+{
+"Projects": [
+{% for i in range(docs[0]['Campaign-ShortName']|count)  %}
+{
+"ShortName" : {{ docs[0]['Campaign-ShortName'][i] | jsonify }},
+"LongName" : {{ docs[0]['Campaign-LongName'][i] | jsonify }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+{% if docs[0]['Spatial-Geometry'] %}
+"SpatialExtent" : {
+"SpatialCoverageType" : "HORIZONTAL",
+"HorizontalSpatialDomain" : {
+"Geometry" : {
+"CoordinateSystem" : {{ docs[0]['CoordinateSystem'] | jsonify }},
+"BoundingRectangles" : [
+{% for box in docs[0]['Spatial-Box']  %}
+{
+"WestBoundingCoordinate" : {{ box.split()[1] }},
+"NorthBoundingCoordinate" : {{ box.split()[2] }},
+"EastBoundingCoordinate" : {{ box.split()[3] }},
+"SouthBoundingCoordinate" : {{ box.split()[0] }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
+},
+"GranuleSpatialRepresentation" : {{ docs[0]['CoordinateSystem'] | jsonify }}
+},
+{% endif %}
+"Distributions" : [ {
+"DistributionFormat" : {{ docs[0]['DataFormat'] | jsonify }}
+} ],
+{% if docs[0]['CategoryKeyword'] %}
+"ScienceKeywords" : [
+{% for i in range(docs[0]['CategoryKeyword']|count)  %}
+{
+"Category" : {{ docs[0]['CategoryKeyword'][i] | jsonify }},
+"Topic" : {{ docs[0]['TopicKeyword'][i] | jsonify }},
+"Term" : {{ docs[0]['TermKeyword'][i] | jsonify }},
+"VariableLevel1" : {{ docs[0]['VariableLevel1Keyword'][i] | jsonify }}
+{% if docs[0]['DetailedVariableKeyword'] %}
+,"DetailedVariable" : {{ docs[0]['DetailedVariableKeyword'][i] | jsonify }}
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+{% endif %}
+"TemporalExtents" : [ {
+ "RangeDateTimes" : [
+{% for dt in docs[0]['BeginningEndingDateTime']  %}
+{
+{% if ' ' not in dt %}
+"BeginningDateTime" : "{{ dt }}",
+"EndingDateTime" : "{{ dt }}"
+{% else %}
+"BeginningDateTime" : "{{ dt[1:dt.index(' ')] }}"
+{% if '*' not in dt %}
+,
+"EndingDateTime" : "{{ dt[dt.rindex(' ')+1:-1] }}"
+{% endif %}
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+} ],
+"ProcessingLevel" : {
+"Id" : {{ docs[0]['ProcessingLevelId'] | jsonify }}
+},
+"ShortName" : {{ docs[0]['ShortName'] | jsonify }},
+"EntryTitle" : {{ docs[0]['LongName'] | jsonify }},
+"RelatedUrls" : [
+{% for i in range(docs[0]['OnlineAccessURL-URL']|count)  %}
+{
+"Description" : {{ docs[0]['OnlineAccessURL-URLDescription'][i] | jsonify }},
+"Relation" : [ "GET DATA" ],
+"URLs" : [ {{ docs[0]['OnlineAccessURL-URL'][i] | jsonify }} ]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+{% if docs[0]['OnlineResource-URL'] %}
+,
+{% endif %}
+{% for i in range(docs[0]['OnlineResource-URL']|count)  %}
+{
+{% if docs[0]['OnlineResource-Description'][i] != '' %}
+"Description" : {{ docs[0]['OnlineResource-Description'][i] | jsonify }},
+{% endif %}
+"URLs" : [ {{ docs[0]['OnlineResource-URL'][i] | jsonify }} ]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+"DataDates" : [ {
+"Date" : "{{ docs[0]['LastUpdate'] }}",
+"Type" : "UPDATE"
+} ],
+"Abstract" : {{ docs[0]['Description'] | jsonify }},
+"Version" : {{ docs[0]['VersionId'] | jsonify }},
+"LocationKeywords" : [
+{% for keyword in docs[0]['SpatialKeywords-Keyword']  %}
+{
+"Category": "GEOGRAPHIC REGION",
+ "Type": {{ keyword | jsonify }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+"Platforms" : [
+{% for i in range(docs[0]['Platform-ShortName']|count)  %}
+{
+"Type" : "SPACECRAFT",
+"ShortName" : {{ docs[0]['Platform-ShortName'][i] | jsonify }},
+"LongName" : {{ docs[0]['Platform-LongName'][i] | jsonify }}
+{% if docs[0]['Instrument-ShortName_' + i|string] %}
+,
+"Instruments" : [
+{% for j in range(docs[0]['Instrument-ShortName_' + i|string]|count)  %}
+{
+"ShortName" : {{ docs[0]['Instrument-ShortName_' + i|string][j] | jsonify }},
+"LongName" : {{ docs[0]['Instrument-LongName_' + i|string][j] | jsonify }}
+{% if docs[0]['Sensor-ShortName_' + i|string + '_' + j|string] %}
+,
+"Sensors" : [
+{% for k in range(docs[0]['Sensor-ShortName_' + i|string + '_' + j|string]|count)  %}
+{
+"ShortName" : {{ docs[0]['Sensor-ShortName_' + i|string + '_' + j|string][k] | jsonify }},
+"LongName" : {{ docs[0]['Sensor-LongName_' + i|string + '_' + j|string][k] | jsonify }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% if docs[0]['DOI'] %}
+,
+"DOI" : {"DOI" : {{ docs[0]['DOI'] | jsonify }}}
+{% endif %}
+,
+"AncillaryKeywords" : [
+{% for keyword in docs[0]['Keyword']  %}
+{{ keyword | jsonify }}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+"DataCenters": [
+{% if docs[0]['ProcessingCenter'] %}
+{
+"Roles": ["PROCESSOR"],
+"ShortName": {{ docs[0]['ProcessingCenter'] | jsonify }}
+}
+{% endif %}
+{% if docs[0]['ArchiveCenter'] %}
+{% if docs[0]['ProcessingCenter'] %}
+,
+{% endif %}
+{
+"Roles": ["ARCHIVER"],
+"ShortName": {{ docs[0]['ArchiveCenter'] | jsonify }}
+}
+{% endif %}
+],
+{% if docs[0]['LatitudeResolution'] %}
+"GeographicCoordinateSystemType" : {
+"GeographicCoordinateUnits" : "degrees",
+"LatitudeResolution" : {{ docs[0]['LatitudeResolution'] }},
+"LongitudeResolution" : {{ docs[0]['LongitudeResolution'] }}
+},
+{% endif %}
+{% if docs[0]['TemporalResolution'] %}
+"TemporalKeywords" : [
+{{ docs[0]['TemporalResolution'] | jsonify }}
+],
+{% endif %}
+"MetadataAssociations":[
+{% for i in range(docs[0]['CollectionAssociation-ShortName']|count)  %}
+{
+"EntryId": {{ docs[0]['CollectionAssociation-ShortName'][i] | jsonify }},
+"Description": {{ docs[0]['CollectionAssociation-CollectionUse'][i] | jsonify }},
+"Type": {{ docs[0]['CollectionAssociation-CollectionType'][i] | jsonify }},
+"Version": {{ docs[0]['CollectionAssociation-VersionId'][i] | jsonify }},
+"EntryTitle": {{ docs[0]['CollectionAssociation-LongName'][i] | jsonify }}
+{% if 'CollectionAssociation-URL' in docs[0] %}
+,"URL": {{ docs[0]['CollectionAssociation-URL'][i] | jsonify }}
+{% endif %}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+"PublicationReferences":[
+{% set publicationKeys = {"PublicationReferences-Title": "Title", "PublicationReferences-Publisher": "Publisher", "PublicationReferences-Author": "Author", "PublicationReferences-PublicationDate": "PublicationDate", "PublicationReferences-Series": "Series", "PublicationReferences-Edition": "Edition", "PublicationReferences-Volume": "Volume", "PublicationReferences-Issue": "Issue", "PublicationReferences-ReportNumber": "ReportNumber", "PublicationReferences-PublicationPlace": "PublicationPlace", "PublicationReferences-Pages": "Pages", "PublicationReferences-ISBN": "ISBN", "PublicationReferences-OtherReferenceDetails": "OtherReferenceDetails"} %}
+{% for i in range(docs[0]['PublicationReferences-Title']|count)  %}
+{
+{% for key, value in publicationKeys.iteritems() %}
+"{{ value }}": {{ docs[0][key][i].strip() | jsonify }},
+{% endfor %}
+"DOI": {"DOI" : {{ docs[0]['PublicationReferences-DOI'][i] | jsonify}} }
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+],
+"AdditionalAttributes" : [
+{% set hasAttrs = False %}
+{% if docs[0]['NexusDataBeginningDateTime'] %}
+{% set hasAttrs = True %}
+{"Name": "NexusDataBeginningDateTime", "Description": "Beginning date of data available in Nexus", "DataType": "DATETIME", "Value": "{{ docs[0]['NexusDataBeginningDateTime'] }}"}
+{% endif %}
+{% if docs[0]['NexusDataEndingDateTime'] %}
+{% if hasAttrs %},{% endif %}
+{% set hasAttrs = True %}
+{"Name": "NexusDataEndingDateTime", "Description": "Ending date of data available in Nexus", "DataType": "DATETIME", "Value": "{{ docs[0]['NexusDataEndingDateTime'] }}"}
+{% endif %}
+{% if docs[0]['NexusShortName'] %}
+{% if hasAttrs %},{% endif %}
+{% set hasAttrs = True %}
+{"Name": "NexusShortName", "Description": "Name of dataset in Nexus", "DataType": "STRING", "Value": "{{ docs[0]['NexusShortName'][0] }}"}
+{% endif %}
+{% if docs[0]['LatitudeResolution'] %}
+{% if hasAttrs %},{% endif %}
+{% set hasAttrs = True %}
+{"Name": "Spatial Resolution", "Description": "Spatial resolution of dataset", "DataType": "STRING", "Value": "{{docs[0]['LatitudeResolution']}} degrees (Latitude) x {{docs[0]['LongitudeResolution']}} degrees (Longitude)"}
+{% endif %}
+{% if docs[0]['AcrossTrackResolution'] %}
+{% if hasAttrs %},{% endif %}
+{% set hasAttrs = True %}
+{"Name": "Spatial Resolution", "Description": "Spatial resolution of dataset", "DataType": "STRING", "Value": "{{docs[0]['AlongTrackResolution'] / 1000 }} km (Along) x {{docs[0]['AcrossTrackResolution'] / 1000 }} km (Across)"}
+{% endif %}
+{% for key in ['GlobalAttrTitle', 'GlobalAttrDescription', 'GlobalAttrSource', 'GlobalAttrContact', 'GlobalAttrUnits'] %}
+{% if docs[0][key] %}
+{% if hasAttrs %},{% endif %}
+{% set hasAttrs = True %}
+{"Name": "{{ key }}", "DataType": "STRING", "Value": {{ docs[0][key] | jsonify }}}
+{% endif %}
+{% endfor %}
+]
+}
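
  The RangeDateTimes block in the template above decodes two shapes of the
  BeginningEndingDateTime field. A minimal sketch of the same parsing,
  assuming Solr stores either a single datetime or a bracketed
  '[begin TO end]' range with '*' for an open end:

    def parse_range(dt):
        if ' ' not in dt:
            return dt, dt                        # instantaneous: begin == end
        begin = dt[1:dt.index(' ')]              # strip leading '['
        if '*' in dt:
            return begin, None                   # open-ended range
        return begin, dt[dt.rindex(' ') + 1:-1]  # strip trailing ']'

    # parse_range('[2010-01-01T00:00:00Z TO 2010-12-31T23:59:59Z]')
    # -> ('2010-01-01T00:00:00Z', '2010-12-31T23:59:59Z')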

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs/__init__.py b/src/main/python/plugins/spurs/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs/json/Writer.py b/src/main/python/plugins/spurs/json/Writer.py
new file mode 100644
index 0000000..126089f
--- /dev/null
+++ b/src/main/python/plugins/spurs/json/Writer.py
@@ -0,0 +1,76 @@
+import logging
+import os
+import os.path
+import urllib
+import json
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse(searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        variable = json.loads(self._configuration.get('solr', 'variable'))
+
+        queries = []
+        filterQueries = []
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    filterQueries.append('time:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    filterQueries.append('time:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('point_srpt:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'variable':
+                    if value.lower() in variable:
+                        filterQueries.append("(" + "+OR+".join([x + ":[*%20TO%20*]" for x in variable[value.lower()]]) + ")")
+                elif key == 'minDepth':
+                    filterQueries.append('depth:['+value+'%20TO%20*]')
+                elif key == 'maxDepth':
+                    filterQueries.append('depth:[*%20TO%20'+value+']')
+                elif key == 'platform':
+                    if type(value) is list:
+                        filterQueries.append('platform:(' + '+OR+'.join(value) + ')')
+                    else:
+                        filterQueries.append('platform:'+value)
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        if 'stats' in parameters and parameters['stats'].lower() == 'true':
+            query += '&stats=true&stats.field={!min=true%20max=true}depth'
+
+        if 'facet' in parameters and parameters['facet'].lower() == 'true':
+            query += '&facet=true&facet.field=platform&facet.field=device&facet.limit=-1&facet.mincount=1'
+
+        logging.debug('solr query: '+query)
+
+        return query
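
  The variable parameter above maps each logical quantity (sss, sst, wind) to
  the Solr fields that may carry it, and a record matches if any of those
  fields is populated ([* TO *]). A minimal sketch of the filter it builds:

    import json

    variable = json.loads('{"sss": ["salinity", "salinity1", "salinity_ctd"]}')
    fields = variable['sss']
    fq = '(' + '+OR+'.join(f + ':[*%20TO%20*]' for f in fields) + ')'
    # fq == '(salinity:[*%20TO%20*]+OR+salinity1:[*%20TO%20*]'
    #        '+OR+salinity_ctd:[*%20TO%20*])'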

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs/json/__init__.py b/src/main/python/plugins/spurs/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs/json/plugin.conf b/src/main/python/plugins/spurs/json/plugin.conf
new file mode 100644
index 0000000..76c39c8
--- /dev/null
+++ b/src/main/python/plugins/spurs/json/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/spurs
+entriesPerPage=10
+maxEntriesPerPage=100000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,minDepth,maxDepth,variable,stats,platform,facet
+facets={}
+variable={"sss" : ["salinity", "salinity1", "salinity_ctd"], "sst" : ["temperature", "temperature1", "TEMP", "temperature_ctd"], "wind" : ["RELWSPD"]}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs/json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs/json/template.json b/src/main/python/plugins/spurs/json/template.json
new file mode 100755
index 0000000..c445f12
--- /dev/null
+++ b/src/main/python/plugins/spurs/json/template.json
@@ -0,0 +1,63 @@
+{
+{% if last %}"last": "{{ last }}",{% endif %}
+{% if prev %}"prev": "{{ prev }}",{% endif %}
+{% if next %}"next": "{{ next }}",{% endif %}
+{% if first %}"first": "{{ first }}",{% endif %}
+"results":[
+{% for doc in docs %}
+{
+"id": "{{ doc['id'] }}",
+"time": "{{ doc['time'] }}",
+"point": "Point({{ doc['point_srpt'] }})",
+"sea_water_temperature": {% if 'temperature' in doc %}{{ doc['temperature'] | jsonify }}{% elif 'temperature1' in doc %}{{ doc['temperature1'] | jsonify }}{% elif 'TEMP' in doc %}{{ doc['TEMP'] | jsonify }}{% elif 'temperature_ctd' in doc %}{{ doc['temperature_ctd'] | jsonify }}{% else %}null{% endif %},
+"sea_water_temperature_depth": {% if 'temperature' in doc or 'temperature1' in doc or 'TEMP' in doc or 'temperature_ctd' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"sea_water_temperature_quality": {% if 'temperature' in doc or 'temperature1' in doc or 'TEMP' in doc or 'temperature_ctd' in doc %}0{% else %}null{% endif %},
+"wind_speed": {{ doc['RELWSPD'] | jsonify }},
+"eastward_wind": {% if 'eastward_wind' in doc %}{{ doc['eastward_wind'][0] | jsonify }}{% else %}null{% endif %},
+"northward_wind": {% if 'northward_wind' in doc %}{{ doc['northward_wind'][0] | jsonify }}{% else %}null{% endif %},
+"wind_depth": {% if 'RELWSPD' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"wind_quality": {% if 'RELWSPD' in doc %}0{% else %}null{% endif %},
+"sea_water_salinity": {% if 'salinity' in doc %}{{ doc['salinity'] | jsonify }}{% elif 'salinity1' in doc %}{{ doc['salinity1'] | jsonify }}{% elif 'salinity_ctd' in doc %}{{ doc['salinity_ctd'] | jsonify }}{% else %}null{% endif %},
+"sea_water_salinity_depth": {% if 'salinity' in doc or 'salinity1' in doc or 'salinity_ctd' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"sea_water_salinity_quality": {% if 'salinity' in doc or 'salinity1' in doc or 'salinity_ctd' in doc %}0{% else %}null{% endif %},
+"platform": {{ doc['platform'] | jsonify }},
+"device": {{ doc['device'] | jsonify }},
+"fileurl": {{ doc['fileurl'] | jsonify }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+,"totalResults":{{ numFound }}
+,"startIndex":{{ startIndex  }}
+,"itemsPerPage":{{ itemsPerPage }}
+{% if stats %}
+,
+"stats_fields": {{ stats['stats_fields'] | jsonify }}
+{% endif %}
+{% if facets %}
+,
+"facets":[
+{% for key, facet in facets['facet_fields'].iteritems() %}
+{
+"field": "{{ key }}",
+"values":[
+{% for i in range(0, facet|count, 2) %}
+{
+"count":{{facet[i+1] }},
+"value": "{{ facet[i] }}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}
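
  Each physical quantity can live in several Solr fields depending on the
  instrument, so the template above emits the first populated field and falls
  back to JSON null. A minimal sketch of that coalescing:

    def first_present(doc, fields):
        # return the first field present on the doc, else None (JSON null)
        for f in fields:
            if f in doc:
                return doc[f]
        return None

    # first_present({'TEMP': [12.1]}, ['temperature', 'temperature1', 'TEMP'])
    # -> [12.1]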

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs2/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs2/__init__.py b/src/main/python/plugins/spurs2/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs2/json/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs2/json/Writer.py b/src/main/python/plugins/spurs2/json/Writer.py
new file mode 100644
index 0000000..126089f
--- /dev/null
+++ b/src/main/python/plugins/spurs2/json/Writer.py
@@ -0,0 +1,76 @@
+import logging
+import os
+import os.path
+import urllib
+import json
+
+from edge.writer.solrtemplateresponsewriter import SolrTemplateResponseWriter
+from edge.response.solrjsontemplateresponse import SolrJsonTemplateResponse
+
+class Writer(SolrTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+        
+        self.contentType = 'application/json'
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = SolrJsonTemplateResponse(searchUrl, searchParams)
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, parameters, facets):
+        variable = json.loads(self._configuration.get('solr', 'variable'))
+
+        queries = []
+        filterQueries = []
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+                elif key == 'startTime':
+                    filterQueries.append('time:['+value+'%20TO%20*]')
+                elif key == 'endTime':
+                    filterQueries.append('time:[*%20TO%20'+value+']')
+                elif key == 'bbox':
+                    coordinates = value.split(",")
+                    filterQueries.append('point_srpt:[' + coordinates[1] + ',' + coordinates[0] + '%20TO%20' + coordinates[3] + ',' + coordinates[2] + ']')
+                elif key == 'variable':
+                    if value.lower() in variable:
+                        filterQueries.append("(" + "+OR+".join([x + ":[*%20TO%20*]" for x in variable[value.lower()]]) + ")")
+                elif key == 'minDepth':
+                    filterQueries.append('depth:['+value+'%20TO%20*]')
+                elif key == 'maxDepth':
+                    filterQueries.append('depth:[*%20TO%20'+value+']')
+                elif key == 'platform':
+                    if type(value) is list:
+                        filterQueries.append('platform:(' + '+OR+'.join(value) + ')')
+                    else:
+                        filterQueries.append('platform:'+value)
+
+        if len(queries) == 0:
+            queries.append('*:*')
+
+        query = 'q='+'+AND+'.join(queries)+'&wt=json&start='+str(startIndex)+'&rows='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        if 'stats' in parameters and parameters['stats'].lower() == 'true':
+            query += '&stats=true&stats.field={!min=true%20max=true}depth'
+
+        if 'facet' in parameters and parameters['facet'].lower() == 'true':
+            query += '&facet=true&facet.field=platform&facet.field=device&facet.limit=-1&facet.mincount=1'
+
+        logging.debug('solr query: '+query)
+
+        return query

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs2/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs2/json/__init__.py b/src/main/python/plugins/spurs2/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs2/json/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs2/json/plugin.conf b/src/main/python/plugins/spurs2/json/plugin.conf
new file mode 100644
index 0000000..37313d1
--- /dev/null
+++ b/src/main/python/plugins/spurs2/json/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr/spurs2
+entriesPerPage=10
+maxEntriesPerPage=100000
+defaultSearchParam=keyword
+parameters=keyword,startTime,endTime,bbox,minDepth,maxDepth,variable,stats,platform,facet
+facets={}
+variable={"sss" : ["salinity", "salinity1", "salinity_ctd", "S_41"], "sst" : ["temperature", "temperature1", "TEMP", "temperature_ctd", "T_20"], "wind" : ["RELWSPD"]}
+
+[service]
+url=http://localhost:8890
+template=template.json

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/spurs2/json/template.json
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/spurs2/json/template.json b/src/main/python/plugins/spurs2/json/template.json
new file mode 100755
index 0000000..a629a5d
--- /dev/null
+++ b/src/main/python/plugins/spurs2/json/template.json
@@ -0,0 +1,64 @@
+{
+{% if last %}"last": "{{ last }}",{% endif %}
+{% if prev %}"prev": "{{ prev }}",{% endif %}
+{% if next %}"next": "{{ next }}",{% endif %}
+{% if first %}"first": "{{ first }}",{% endif %}
+"results":[
+{% for doc in docs %}
+{
+"id": "{{ doc['id'] }}",
+"time": "{{ doc['time'] }}",
+"point": "Point({{ doc['point_srpt'] }})",
+"sea_water_temperature": {% if 'temperature' in doc %}{{ doc['temperature'] | jsonify }}{% elif 'temperature1' in doc %}{{ doc['temperature1'] | jsonify }}{% elif 'TEMP' in doc %}{{ doc['TEMP'] | jsonify }}{% elif 'temperature_ctd' in doc %}{{ doc['temperature_ctd'] | jsonify }}{% elif 'T_20' in doc %}{{ doc['T_20'] | jsonify }}{% else %}null{% endif %},
+"sea_water_temperature_depth": {% if 'temperature' in doc or 'temperature1' in doc or 'TEMP' in doc or 'temperature_ctd' in doc or 'T_20' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"sea_water_temperature_quality": {% if 'temperature' in doc or 'temperature1' in doc or 'TEMP' in doc or 'temperature_ctd' in doc or 'T_20' in doc %}0{% else %}null{% endif %},
+"wind_speed": {{ doc['RELWSPD'] | jsonify }},
+"eastward_wind": {{ doc['wind_u'] | jsonify }},
+"northward_wind": {{ doc['wind_v'] | jsonify }},
+"wind_depth": {% if 'RELWSPD' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"wind_quality": {% if 'RELWSPD' in doc %}0{% else %}null{% endif %},
+"sea_water_salinity": {% if 'salinity' in doc %}{{ doc['salinity'] | jsonify }}{% elif 'salinity1' in doc %}{{ doc['salinity1'] | jsonify }}{% elif 'salinity_ctd' in doc %}{{ doc['salinity_ctd'] | jsonify }}{% elif 'S_41' in doc %}{{ doc['S_41'] | jsonify }}{% else %}null{% endif %},
+"sea_water_salinity_depth": {% if 'salinity' in doc or 'salinity1' in doc or 'salinity_ctd' in doc or 'S_41' in doc %}{{ doc['depth'] | jsonify }}{% else %}null{% endif %},
+"sea_water_salinity_quality": {% if 'salinity' in doc or 'salinity1' in doc or 'salinity_ctd' in doc or 'S_41' in doc %}0{% else %}null{% endif %},
+"platform": {{ doc['platform'] | jsonify }},
+"device": {{ doc['device'] | jsonify }},
+"fileurl": {{ doc['fileurl'] | jsonify }}
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+,"totalResults":{{ numFound }}
+,"startIndex":{{ startIndex  }}
+,"itemsPerPage":{{ itemsPerPage }}
+{% if stats %}
+,
+"stats_fields": {{ stats['stats_fields'] | jsonify }}
+{% endif %}
+{% if facets %}
+,
+"facets":[
+{% for key, facet in facets['facet_fields'].iteritems() %}
+{
+"field": "{{ key }}",
+"values":[
+{% for i in range(0, facet|count, 2) %}
+{
+"count":{{facet[i+1] }},
+"value": "{{ facet[i] }}"
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+}
+{% if not loop.last %}
+,
+{% endif %}
+{% endfor %}
+]
+{% endif %}
+}
+

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/tie/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/tie/__init__.py b/src/main/python/plugins/tie/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/tie/collection/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/tie/collection/Writer.py b/src/main/python/plugins/tie/collection/Writer.py
new file mode 100644
index 0000000..bad8827
--- /dev/null
+++ b/src/main/python/plugins/tie/collection/Writer.py
@@ -0,0 +1,54 @@
+import logging
+import urllib
+
+from edge.dateutility import DateUtility
+from edge.writer.proxywriter import ProxyWriter
+
+class Writer(ProxyWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+    def _generateUrl(self, requestHandler):
+        url = self._configuration.get('solr', 'url')
+        parameters = {}
+        parameters['wt'] = 'json'
+        parameters['group'] = 'true'
+        parameters['group.limit'] = -1
+        #parameters['facet.limit'] = 10
+        parameters['fl'] = 'time,productTypePrefix,productType'
+        parameters['group.field'] = 'crid'
+        parameters['omitHeader'] = 'true'
+        parameters['q'] = '*:*'
+        parameters['fq'] = []
+        parameters['sort'] = 'crid desc'
+        try:
+            parameters['fq'].append('collection:"' + requestHandler.get_argument('collection') + '"')
+        except:
+            pass
+        try:
+            parameters['fq'].append('productType:"' + requestHandler.get_argument('productType') + '"')
+        except:
+            pass
+        try:
+            start = requestHandler.get_argument('start')
+            if len(start) == 10:
+                start += 'T00:00:00'
+        except:
+            raise Exception('Missing start parameter.')
+        try:
+            end = requestHandler.get_argument('end')
+            if len(end) == 10:
+                end += 'T23:59:59'
+        except:
+            end = start[0:10] + 'T23:59:59'
+
+        logging.debug('start: ' + start)
+        logging.debug('end: ' + end)
+
+        start = DateUtility.convertISOToUTCTimestamp(start)
+        end = DateUtility.convertISOToUTCTimestamp(end) + 999
+        parameters['fq'].append('time:[' + str(start) + ' TO ' + str(end) + ']')
+
+        url += '/select?' + urllib.urlencode(parameters, True)
+        logging.debug("proxy to url : " + url)
+        return url
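
A worked example of the date handling above, assuming DateUtility.convertISOToUTCTimestamp returns epoch milliseconds (consistent with the +999 adjustment, which makes the range inclusive through the last millisecond of the final second). For start=2017-01-02 with no end supplied:

    start -> '2017-01-02T00:00:00' -> 1483315200000
    end   -> '2017-01-02T23:59:59' -> 1483401599000, +999 -> 1483401599999
    fq    -> time:[1483315200000 TO 1483401599999]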

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/tie/collection/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/tie/collection/__init__.py b/src/main/python/plugins/tie/collection/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/tie/collection/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/tie/collection/plugin.conf b/src/main/python/plugins/tie/collection/plugin.conf
new file mode 100644
index 0000000..d82d682
--- /dev/null
+++ b/src/main/python/plugins/tie/collection/plugin.conf
@@ -0,0 +1,2 @@
+[solr]
+url=http://localhost:8983/solr/layer

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/requestresponder.py
----------------------------------------------------------------------
diff --git a/src/main/python/requestresponder.py b/src/main/python/requestresponder.py
new file mode 100644
index 0000000..5d27c48
--- /dev/null
+++ b/src/main/python/requestresponder.py
@@ -0,0 +1,24 @@
+import logging
+import ConfigParser
+
+class RequestResponder(object):
+    def __init__(self, configFilePath):
+        #logging.debug('config: '+configFilePath)
+        self._configuration = ConfigParser.RawConfigParser()
+        self._configuration.read(configFilePath)
+        self.requestHandler = None
+
+    def get(self, requestHandler):
+        self.requestHandler = requestHandler
+
+    def post(self, requestHandler):
+        self.requestHandler = requestHandler
+
+    def put(self, requestHandler):
+        self.requestHandler = requestHandler
+
+    def delete(self, requestHandler):
+        self.requestHandler = requestHandler
+
+    def options(self, requestHandler):
+        self.requestHandler = requestHandler
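
A minimal sketch of the plugin contract this base class implies, as exercised by the writer classes elsewhere in this import; EchoResponder and its JSON payload are illustrative only:

    class EchoResponder(RequestResponder):
        def __init__(self, configFilePath):
            super(EchoResponder, self).__init__(configFilePath)

        def get(self, requestHandler):
            # Keep a handle on the Tornado request handler, as the base
            # class does, then write a response through it.
            super(EchoResponder, self).get(requestHandler)
            requestHandler.write({'status': 'ok'})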

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/requirements.txt
----------------------------------------------------------------------
diff --git a/src/main/python/requirements.txt b/src/main/python/requirements.txt
new file mode 100644
index 0000000..e8805c9
--- /dev/null
+++ b/src/main/python/requirements.txt
@@ -0,0 +1,3 @@
+tornado==4.2.1
+jinja2==2.8
+python-dateutil==2.4.1



[13/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetatomresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetatomresponse.py b/src/main/python/libraries/edge/opensearch/datasetatomresponse.py
new file mode 100644
index 0000000..dc11a93
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetatomresponse.py
@@ -0,0 +1,85 @@
+import datetime
+import urllib
+
+from edge.opensearch.atomresponsebysolr import AtomResponseBySolr
+from edge.dateutility import DateUtility
+
+class DatasetAtomResponse(AtomResponseBySolr):
+    def __init__(self, portalUrl, host, url, datasets):
+        super(DatasetAtomResponse, self).__init__()
+        self.portalUrl = portalUrl
+        self.host = host
+        self.url = url
+        self.datasets = datasets
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-granule-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        persistentId = doc['Dataset-PersistentId'][0]
+        idTuple = ('datasetId', persistentId)
+        if persistentId == '':
+            idTuple = ('shortName', doc['Dataset-ShortName'][0])
+        item.append({'name': 'title', 'value': doc['Dataset-LongName'][0]})
+        item.append({'name': 'content', 'value': doc['Dataset-Description'][0]})
+        
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.searchBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('full', 'true')])), 'rel': 'enclosure', 'type': 'application/atom+xml', 'title': 'PO.DAAC Metadata' }})
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'iso')])), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'ISO-19115 Metadata' }})
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'gcmd')])), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'GCMD Metadata' }})
+        
+        #Only generate granule search link if dataset has granules
+        if (doc['Dataset-ShortName'][0] in self.datasets):
+            supportedGranuleParams = dict([(key,value) for key,value in self.parameters.iteritems() if key in ['bbox', 'startTime', 'endTime']])
+            if persistentId == '':
+                supportedGranuleParams['shortName'] = doc['Dataset-ShortName'][0]
+            else:
+                supportedGranuleParams['datasetId'] = persistentId
+            item.append({'name': 'link', 'attribute': {'href': self.url + self.searchBasePath + 'granule?' + urllib.urlencode(supportedGranuleParams), 'rel': 'search', 'type': 'application/atom+xml', 'title': 'Granule Search' }})
+        
+        if 'Dataset-ImageUrl' in doc and doc['Dataset-ImageUrl'][0] != '':
+            item.append({'name': 'link', 'attribute': {'href': doc['Dataset-ImageUrl'][0], 'rel': 'enclosure', 'type': 'image/jpg', 'title': 'Thumbnail' }})
+        
+        if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+            url = dict(zip(doc['DatasetLocationPolicy-Type'], doc['DatasetLocationPolicy-BasePath']))
+            if 'LOCAL-OPENDAP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['LOCAL-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            elif 'REMOTE-OPENDAP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['REMOTE-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            if 'LOCAL-FTP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['LOCAL-FTP'], 'rel': 'enclosure', 'type': 'text/plain', 'title': 'FTP URL' }})
+            elif 'REMOTE-FTP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['REMOTE-FTP'], 'rel': 'enclosure', 'type': 'text/plain', 'title': 'FTP URL' }})
+        if doc['DatasetPolicy-ViewOnline'][0] == 'Y' and doc['DatasetPolicy-AccessType-Full'][0] in ['OPEN', 'PREVIEW', 'SIMULATED', 'REMOTE']:
+            portalUrl = self.portalUrl+'/'+doc['Dataset-ShortName'][0]
+            item.append({'name': 'link', 'attribute': {'href': portalUrl, 'rel': 'enclosure', 'type': 'text/html', 'title': 'Dataset Information' }})
+        updated = None
+        if 'DatasetMetaHistory-LastRevisionDateLong' in doc and doc['DatasetMetaHistory-LastRevisionDateLong'][0] != '':
+            updated = DateUtility.convertTimeLongToIso(doc['DatasetMetaHistory-LastRevisionDateLong'][0])
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'updated', 'value': updated})
+        item.append({'name': 'id', 'value': persistentId})
+        item.append({'namespace': 'podaac', 'name': 'datasetId', 'value': doc['Dataset-PersistentId'][0]})
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        
+        if doc['DatasetCoverage-WestLon'][0] != '' and doc['DatasetCoverage-SouthLat'][0] != '' and  doc['DatasetCoverage-EastLon'][0] != '' and  doc['DatasetCoverage-NorthLat'][0] != '':
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([doc['DatasetCoverage-WestLon'][0], doc['DatasetCoverage-SouthLat'][0]]) }, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([doc['DatasetCoverage-EastLon'][0], doc['DatasetCoverage-NorthLat'][0]])}]}})
+        
+        if 'DatasetCoverage-StartTimeLong' in doc and doc['DatasetCoverage-StartTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'start', 'value': DateUtility.convertTimeLongToIso(doc['DatasetCoverage-StartTimeLong'][0])})
+        
+        if 'DatasetCoverage-StopTimeLong' in doc and doc['DatasetCoverage-StopTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['DatasetCoverage-StopTimeLong'][0])})
+        
+        if 'full' in self.parameters and self.parameters['full']:
+            if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+                for i, x in enumerate(doc['DatasetLocationPolicy-Type']):
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(x.title()), 'value': doc['DatasetLocationPolicy-BasePath'][i]})
+                del doc['DatasetLocationPolicy-Type']
+                del doc['DatasetLocationPolicy-BasePath']
+            
+            multiValuedElementsKeys = ('DatasetRegion-', 'DatasetCharacter-', 'DatasetCitation-', 'DatasetContact-Contact-', 'DatasetDatetime-', 
+                                       'DatasetInteger-', 'DatasetParameter-', 'DatasetProject-', 'DatasetReal-', 'DatasetResource-', 
+                                       'DatasetSoftware-', 'DatasetSource-', 'DatasetVersion-', 'Collection-')
+            self._populateItemWithPodaacMetadata(doc, item, multiValuedElementsKeys)
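
The entries appended above follow a small dict convention ({'name': ..., 'value': ..., 'attribute': {...}, 'namespace': ...}) that the Atom response machinery elsewhere in this codebase presumably serializes to XML. A hedged sketch of that serialization, with an assumed namespace table (the podaac URI in particular is a placeholder):

    import xml.etree.ElementTree as ET

    # Assumed namespace URIs; the real response classes keep their own table.
    NSMAP = {'podaac': 'http://podaac.jpl.nasa.gov/opensearch/',
             'georss': 'http://www.georss.org/georss',
             'gml': 'http://www.opengis.net/gml',
             'time': 'http://a9.com/-/opensearch/extensions/time/1.0/'}

    def elementFromItem(item):
        # Build one element; nested dicts/lists in 'value' (as used for
        # the georss/gml envelope) recurse.
        tag = item['name']
        if 'namespace' in item:
            tag = '{%s}%s' % (NSMAP[item['namespace']], item['name'])
        element = ET.Element(tag, item.get('attribute', {}))
        value = item.get('value')
        if isinstance(value, dict):
            element.append(elementFromItem(value))
        elif isinstance(value, list):
            for child in value:
                element.append(elementFromItem(child))
        elif value is not None:
            element.text = value
        return element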

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetgcmdresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetgcmdresponse.py b/src/main/python/libraries/edge/opensearch/datasetgcmdresponse.py
new file mode 100644
index 0000000..002bdc9
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetgcmdresponse.py
@@ -0,0 +1,11 @@
+from edge.opensearch.gcmdresponsebysolr import GcmdResponseBySolr
+
+class DatasetGcmdResponse(GcmdResponseBySolr):
+    def __init__(self, configuration):
+        super(DatasetGcmdResponse, self).__init__(configuration)
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetgranulewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetgranulewriter.py b/src/main/python/libraries/edge/opensearch/datasetgranulewriter.py
new file mode 100644
index 0000000..f9c62a1
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetgranulewriter.py
@@ -0,0 +1,233 @@
+from types import *
+import logging
+import urllib
+import urlparse
+import httplib
+from xml.dom.minidom import Document
+import json
+import xml.sax.saxutils
+import datetime
+import codecs
+
+from edge.opensearch.responsewriter import ResponseWriter
+from edge.dateutility import DateUtility
+from edge.httputility import HttpUtility
+from edge.spatialsearch import SpatialSearch
+import re
+
+class DatasetGranuleWriter(ResponseWriter):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(DatasetGranuleWriter, self).__init__(configFilePath, requiredParams)
+        self.solrGranuleResponse = None
+
+    def get(self, requestHandler):
+        super(DatasetGranuleWriter, self).get(requestHandler)
+        #logging.debug('uri: '+str(requestHandler.request.headers))
+
+        startIndex = 0
+        try:
+            startIndex = requestHandler.get_argument('startIndex')
+        except:
+            pass
+
+        entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            #cap entries per page at 400
+            if (int(entriesPerPage) > 400):
+                entriesPerPage = 400
+        except:
+            pass
+        
+        #pretty = True
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'false':
+                self.pretty = False
+        except:
+            pass
+        
+        parameters = ['startTime', 'endTime', 'keyword', 'granuleName', 'datasetId', 'shortName', 'bbox', 'sortBy']
+        #variables = {}
+        for parameter in parameters:
+            try:
+                value = requestHandler.get_argument(parameter)
+                self.variables[parameter] = value
+            except:
+                pass
+
+        if 'keyword' in self.variables:
+            self.variables['keyword'] = self.variables['keyword'].replace('*', '')
+            self.variables['keyword'] = self.variables['keyword'].lower()
+        """
+        if 'bbox' in variables:
+            points = variables['bbox'].split(',')
+            if len(points) == 4:
+                spatialSearch = SpatialSearch(
+                    self._configuration.get('service', 'database')
+                )
+                spatialResult = spatialSearch.searchGranules(
+                    int(startIndex),
+                    int(entriesPerPage),
+                    float(points[0]),
+                    float(points[1]),
+                    float(points[2]),
+                    float(points[3])
+                )
+                if len(spatialResult[0]) > 0:
+                    variables['granuleIds'] = spatialResult[0]
+                    variables['granuleIdsFound'] = spatialResult[1]
+
+            del variables['bbox']
+        """
+        try:
+            self._getSolrResponse(startIndex, entriesPerPage, self.variables)
+            """
+            solrJson = json.loads(solrResponse)
+            if len(solrJson['response']['docs']) >= 1:
+                dataset = solrJson['response']['docs'][0]['Dataset-ShortName'][0];
+                logging.debug('Getting solr response for dataset ' + dataset)
+                solrDatasetResponse = self._getSingleSolrDatasetResponse({'shortName' : dataset})
+            """
+        except:
+            logging.exception('Failed to get solr response.')
+        """
+        if 'granuleIdsFound' in variables:
+            #solrJson = json.loads(solrResponse)
+            numFound = solrJson['response']['numFound']
+            solrJson['response']['numFound'] = int(variables['granuleIdsFound'])
+            solrJson['response']['start'] = int(startIndex)
+            solrJson['responseHeader']['params']['rows'] = numFound
+            solrResponse = json.dumps(solrJson)
+
+        searchText = ''
+        if 'keyword' in variables:
+            searchText = variables['keyword']
+        try:
+            openSearchResponse = self._generateOpenSearchResponse(
+                solrResponse,
+                solrDatasetResponse,
+                searchText,
+                self._configuration.get('service', 'url')+requestHandler.request.uri,
+                pretty
+            )
+            requestHandler.set_header("Content-Type", "application/xml")
+            requestHandler.write(openSearchResponse)
+        except Exception as exception:
+            logging.exception(exception)
+            requestHandler.set_status(404)
+            requestHandler.write('ERROR - ' + str(exception))
+        """
+
+    def _getSolrResponse(self, startIndex, entriesPerPage, variables):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, variables)
+        url = self._configuration.get('solr', 'granuleUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/select/?'+query, self._onSolrGranuleResponse)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, variables):
+        #set default sort order
+        sort='Granule-StartTimeLong+desc'
+        queries = []
+        for key, value in variables.iteritems():
+            #query = ''
+            if key == 'startTime':
+                startTime = DateUtility.convertISOToUTCTimestamp(value)
+                if startTime is not None:
+                    query = 'Granule-StartTimeLong:'
+                    query += '['+str(startTime)+'%20TO%20*]'
+                    queries.append(query)
+            elif key == 'endTime':
+                stopTime = DateUtility.convertISOToUTCTimestamp(value)
+                if stopTime is not None:
+                    query = 'Granule-StartTimeLong:'
+                    query += '[*%20TO%20'+str(stopTime)+']'
+                    queries.append(query)
+            elif key == 'keyword':
+                newValue = urllib.quote(value)
+
+                query = 'SearchableText-LowerCased:('+newValue+')'
+                queries.append(query)
+            elif key == 'datasetId':
+                query = 'Dataset-PersistentId:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'granuleName':
+                query = 'Granule-Name-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'granuleIds':
+                granuleIds = []
+                for granuleId in value:
+                    granuleIds.append(str(granuleId))
+                query = 'Granule-Id:('+'+OR+'.join(granuleIds)+')'
+                queries.append(query)
+
+                startIndex = 0
+            elif key == 'sortBy':
+                sortByMapping = {'timeAsc': 'Granule-StartTimeLong+asc', 'archiveTimeDesc': 'Granule-ArchiveTimeLong+desc'}
+                if value in sortByMapping.keys():
+                    sort = sortByMapping[value]
+            elif key == 'archiveTime':
+                query = 'Granule-ArchiveTimeLong:['+str(value)+'%20TO%20*]'
+                queries.append(query)
+            #if query != '':
+            #    queries.append('%2B'+query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&fq=Granule-AccessType:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)+AND+Granule-Status:ONLINE&version=2.2&start='+str(startIndex)+'&rows='+str(entriesPerPage)+'&indent=on&wt=json&sort='+sort
+        logging.debug('solr query: '+query)
+        
+        return query
+
+    def _readTemplate(self, path):
+        file = codecs.open(path, encoding='utf-8')
+        data = file.read()
+        file.close()
+
+        return data
+    
+    def _generateOpenSearchResponse(self, solrGranuleResponse, solrDatasetResponse, pretty):
+        pass
+    
+    def _onSolrGranuleResponse(self, response):
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            self.solrGranuleResponse = response.body
+            solrJson = json.loads(response.body)
+            if len(solrJson['response']['docs']) >= 1:
+                dataset = solrJson['response']['docs'][0]['Dataset-ShortName'][0]
+                logging.debug('Getting solr response for dataset ' + dataset)
+                self._getSingleSolrDatasetResponse({'shortName' : dataset}, self._onSolrDatasetResponse)
+            else:
+                try:
+                    openSearchResponse = self._generateOpenSearchResponse(
+                        None,
+                        None,
+                        self.pretty
+                    )
+                    self.requestHandler.set_header("Content-Type", "application/xml")
+                    self.requestHandler.write(openSearchResponse)
+                    self.requestHandler.finish()
+                except BaseException as exception:
+                    self._handleException(str(exception))
+
+    def _onSolrDatasetResponse(self, response):
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            try:
+                openSearchResponse = self._generateOpenSearchResponse(
+                    self.solrGranuleResponse,
+                    response.body,
+                    self.pretty
+                )
+                self.requestHandler.set_header("Content-Type", "application/xml")
+                self.requestHandler.write(openSearchResponse)
+                self.requestHandler.finish()
+            except BaseException as exception:
+                self._handleException(str(exception))
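
For a hypothetical request with self.variables = {'shortName': 'MY_DATASET', 'startTime': '2017-01-01T00:00:00Z'}, startIndex=0 and entriesPerPage=10, and assuming _urlEncodeSolrQueryValue quotes and URL-escapes its argument, _constructSolrQuery produces (modulo dict iteration order, and wrapped here for readability; the writer emits a single line):

    q=Dataset-ShortName-Full:%22MY_DATASET%22+AND+Granule-StartTimeLong:[1483228800000%20TO%20*]
      &fq=Granule-AccessType:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)+AND+Granule-Status:ONLINE
      &version=2.2&start=0&rows=10&indent=on&wt=json&sort=Granule-StartTimeLong+desc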

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetisoresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetisoresponse.py b/src/main/python/libraries/edge/opensearch/datasetisoresponse.py
new file mode 100644
index 0000000..823d24a
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetisoresponse.py
@@ -0,0 +1,11 @@
+from edge.opensearch.isoresponsebysolr import IsoResponseBySolr
+
+class DatasetIsoResponse(IsoResponseBySolr):
+    def __init__(self):
+        super(DatasetIsoResponse, self).__init__()
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetresponsebysolr.py b/src/main/python/libraries/edge/opensearch/datasetresponsebysolr.py
new file mode 100644
index 0000000..53e89ae
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetresponsebysolr.py
@@ -0,0 +1,14 @@
+from edge.opensearch.responsebysolr import ResponseBySolr
+
+class DatasetResponseBySolr(ResponseBySolr):
+    def __init__(self, portalUrl):
+        super(DatasetResponseBySolr, self).__init__()
+        self.portalUrl = portalUrl
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        item.append({'name': 'title', 'value': doc['Dataset-LongName'][0]})
+        item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        item.append({'name': 'link', 'value': self.portalUrl+'/'+doc['Dataset-ShortName'][0]})

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetrssresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetrssresponse.py b/src/main/python/libraries/edge/opensearch/datasetrssresponse.py
new file mode 100644
index 0000000..e9194bc
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetrssresponse.py
@@ -0,0 +1,86 @@
+import datetime
+import urllib
+from edge.opensearch.rssresponsebysolr import RssResponseBySolr
+from edge.dateutility import DateUtility
+
+class DatasetRssResponse(RssResponseBySolr):
+    def __init__(self, portalUrl, url, datasets):
+        super(DatasetRssResponse, self).__init__()
+        self.portalUrl = portalUrl
+        self.url = url
+        self.datasets = datasets
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'namespace': 'atom', 'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-granule-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        persistentId = doc['Dataset-PersistentId'][0]
+        idTuple = ('datasetId', persistentId)
+        if persistentId == '':
+            idTuple = ('shortName', doc['Dataset-ShortName'][0])
+        portalUrl = ""
+        if doc['DatasetPolicy-ViewOnline'][0] == 'Y' and doc['DatasetPolicy-AccessType-Full'][0] in ['OPEN', 'PREVIEW', 'SIMULATED', 'REMOTE']:
+            portalUrl = self.portalUrl+'/'+doc['Dataset-ShortName'][0]
+            item.append({'name': 'enclosure', 'attribute': {'url': portalUrl, 'type': 'text/html', 'length': '0'}})
+        item.append({'name': 'title', 'value': doc['Dataset-LongName'][0]})
+        item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        item.append({'name': 'link', 'value': portalUrl})
+        
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url + self.searchBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('full', 'true'), ('format', 'rss')])), 'type': 'application/rss+xml', 'length': '0'}})
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'iso')])), 'type': 'text/xml', 'length': '0'}})
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'gcmd')])), 'type': 'text/xml', 'length': '0'}})
+        
+        #Only generate granule search link if dataset has granules
+        if (doc['Dataset-ShortName'][0] in self.datasets):
+            supportedGranuleParams = dict([(key,value) for key,value in self.parameters.iteritems() if key in ['bbox', 'startTime', 'endTime', 'format']])
+            if persistentId == '':
+                supportedGranuleParams['shortName'] = doc['Dataset-ShortName'][0]
+            else:
+                supportedGranuleParams['datasetId'] = persistentId
+            item.append({'name': 'enclosure', 'attribute': {'url': self.url + self.searchBasePath + 'granule?' + urllib.urlencode(supportedGranuleParams), 'type': 'application/rss+xml', 'length': '0'}})
+        
+        if 'Dataset-ImageUrl' in doc and doc['Dataset-ImageUrl'][0] != '':
+            item.append({'name': 'enclosure', 'attribute': {'url': doc['Dataset-ImageUrl'][0], 'type': 'image/jpg', 'length': '0'}})
+        
+        if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+            url = dict(zip(doc['DatasetLocationPolicy-Type'], doc['DatasetLocationPolicy-BasePath']))
+            if 'LOCAL-OPENDAP' in url:
+                item.append({'name': 'enclosure', 'attribute': {'url': url['LOCAL-OPENDAP'], 'type': 'text/html', 'length': '0'}})
+            elif 'REMOTE-OPENDAP' in url:
+                item.append({'name': 'enclosure', 'attribute': {'url': url['REMOTE-OPENDAP'], 'type': 'text/html', 'length': '0'}})
+            if 'LOCAL-FTP' in url:
+                item.append({'name': 'enclosure', 'attribute': {'url': url['LOCAL-FTP'], 'type': 'text/plain', 'length': '0'}})
+            elif 'REMOTE-FTP' in url:
+                item.append({'name': 'enclosure', 'attribute': {'url': url['REMOTE-FTP'], 'type': 'text/plain', 'length': '0'}})
+                
+        updated = None
+        if 'DatasetMetaHistory-LastRevisionDateLong' in doc and doc['DatasetMetaHistory-LastRevisionDateLong'][0] != '':
+            updated = DateUtility.convertTimeLongToIso(doc['DatasetMetaHistory-LastRevisionDateLong'][0])
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'pubDate', 'value': updated})
+        item.append({'name': 'guid', 'value': persistentId})
+        item.append({'namespace': 'podaac', 'name': 'datasetId', 'value': doc['Dataset-PersistentId'][0]})
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        
+        if doc['DatasetCoverage-WestLon'][0] != '' and doc['DatasetCoverage-SouthLat'][0] != '' and  doc['DatasetCoverage-EastLon'][0] != '' and  doc['DatasetCoverage-NorthLat'][0] != '':
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([doc['DatasetCoverage-WestLon'][0], doc['DatasetCoverage-SouthLat'][0]]) }, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([doc['DatasetCoverage-EastLon'][0], doc['DatasetCoverage-NorthLat'][0]])}]}})
+        
+        if 'DatasetCoverage-StartTimeLong' in doc and doc['DatasetCoverage-StartTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'start', 'value': DateUtility.convertTimeLongToIso(doc['DatasetCoverage-StartTimeLong'][0])})
+        
+        if 'DatasetCoverage-StopTimeLong' in doc and doc['DatasetCoverage-StopTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['DatasetCoverage-StopTimeLong'][0])})
+                
+        if 'full' in self.parameters and self.parameters['full']:
+            if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+                for i, x in enumerate(doc['DatasetLocationPolicy-Type']):
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(x.title()), 'value': doc['DatasetLocationPolicy-BasePath'][i]})
+                del doc['DatasetLocationPolicy-Type']
+                del doc['DatasetLocationPolicy-BasePath']
+            
+            multiValuedElementsKeys = ('DatasetRegion-', 'DatasetCharacter-', 'DatasetCitation-', 'DatasetContact-Contact-', 'DatasetDatetime-', 
+                                       'DatasetInteger-', 'DatasetParameter-', 'DatasetProject-', 'DatasetReal-', 'DatasetResource-', 
+                                       'DatasetSoftware-', 'DatasetSource-', 'DatasetVersion-', 'Collection-')
+            self._populateItemWithPodaacMetadata(doc, item, multiValuedElementsKeys)
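
The dict(zip(...)) idiom used above pairs Solr's parallel arrays of location-policy types and base paths element by element; a quick illustration with hypothetical values:

    types = ['LOCAL-OPENDAP', 'LOCAL-FTP']
    paths = ['http://opendap.example.gov/dataset', 'ftp://ftp.example.gov/dataset']
    url = dict(zip(types, paths))
    # url['LOCAL-OPENDAP'] -> 'http://opendap.example.gov/dataset'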

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datasetwriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datasetwriter.py b/src/main/python/libraries/edge/opensearch/datasetwriter.py
new file mode 100644
index 0000000..3ec56cb
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datasetwriter.py
@@ -0,0 +1,192 @@
+from types import *
+import json
+import logging
+import urllib
+
+import requestresponder
+from edge.dateutility import DateUtility
+from edge.httputility import HttpUtility
+from edge.opensearch.responsewriter import ResponseWriter
+import re
+
+class DatasetWriter(ResponseWriter):
+    def __init__(self, configFilePath):
+        super(DatasetWriter, self).__init__(configFilePath)
+        self.datasets = []
+
+    def get(self, requestHandler):
+        super(DatasetWriter, self).get(requestHandler)
+        #searchParameters = {}
+        #logging.debug('uri: '+str(requestHandler.request.headers))
+
+        startIndex = 0
+        try:
+            startIndex = requestHandler.get_argument('startIndex')
+        except:
+            pass
+
+        entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            #cap entries per page at 400
+            if (int(entriesPerPage) > 400):
+                entriesPerPage = 400
+            self.searchParameters['itemsPerPage'] = entriesPerPage
+        except:
+            pass
+
+        #pretty = True
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'false':
+                self.pretty = False
+                self.searchParameters['pretty'] = 'false'
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('full').lower() == 'true':
+                self.searchParameters['full'] = 'true'
+        except:
+            pass
+        
+        try:
+            self.searchParameters['format'] = requestHandler.get_argument('format')
+        except:
+            pass
+
+        parameters = ['startTime', 'endTime', 'keyword', 'datasetId', 'shortName', 'instrument', 'satellite', 'fileFormat', 'status', 'processLevel', 'sortBy', 'bbox', 'allowNone']
+        #variables = {}
+        for parameter in parameters:
+            try:
+                value = requestHandler.get_argument(parameter)
+                self.variables[parameter] = value
+                self.searchParameters[parameter] = value
+            except:
+                pass
+
+        if 'keyword' in self.variables:
+            self.variables['keyword'] = self.variables['keyword'].replace('*', '')
+            self.variables['keyword'] = self.variables['keyword'].lower()
+        """
+        else:
+            variables['keyword'] = '""'
+        """
+        #If generating OpenSearch response, need to make additional call to solr
+        #to determine which datasets have granules 
+        try:
+            if 'search' in requestHandler.request.path:
+                callback = self._getSolrHasGranuleResponseCallback(startIndex, entriesPerPage)
+                self._getSolrHasGranuleResponse(callback)
+            else:
+               self._getSolrResponse(startIndex, entriesPerPage, self.variables)
+        except:
+            logging.exception('Failed to get solr response.')
+        """
+        searchText = ''
+        if 'keyword' in variables:
+            searchText = variables['keyword']
+        openSearchResponse = self._generateOpenSearchResponse(
+            solrResponse,
+            searchText,
+            self._configuration.get('service', 'url') + requestHandler.request.path,
+            searchParameters,
+            pretty
+        )
+
+        requestHandler.set_header("Content-Type", "application/xml")
+        #requestHandler.set_header("Content-Type", "application/rss+xml")
+        #requestHandler.write(solrResponse)
+        requestHandler.write(openSearchResponse)
+        """
+
+    def _getSolrResponse(self, startIndex, entriesPerPage, variables):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, variables)
+        url = self._configuration.get('solr', 'datasetUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/select/?'+query, self._onSolrResponse)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, variables):
+        queries = []
+        sort = None
+        filterQuery = None
+        for key, value in variables.iteritems():
+            #query = ''
+            if key == 'startTime':
+                startTime = DateUtility.convertISOToUTCTimestamp(value)
+                if startTime is not None:
+                    query = 'DatasetCoverage-StopTimeLong-Long:'
+                    query += '['+str(startTime)+'%20TO%20*]'
+                    queries.append(query)
+            elif key == 'endTime':
+                stopTime = DateUtility.convertISOToUTCTimestamp(value)
+                if stopTime is not None:
+                    query = 'DatasetCoverage-StartTimeLong-Long:'
+                    query += '[*%20TO%20'+str(stopTime)+']'
+                    queries.append(query)
+            elif key == 'keyword':
+                newValue = urllib.quote(value)
+
+                query = 'SearchableText-LowerCased:('+newValue+')'
+                queries.append(query)
+            elif key == 'datasetId':
+                query = 'Dataset-PersistentId:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'satellite':
+                query = 'DatasetSource-Source-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'instrument':
+                query = 'DatasetSource-Sensor-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'fileFormat':
+                query = 'DatasetPolicy-DataFormat-LowerCased:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'status':
+                query = 'DatasetPolicy-AccessType-LowerCased:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'processLevel':
+                query = 'Dataset-ProcessingLevel-LowerCased:'+value
+                queries.append(query)
+            elif key == 'sortBy':
+                sortByMapping = {'timeDesc': 'DatasetCoverage-StartTimeLong-Long+desc', 'timeAsc': 'DatasetCoverage-StartTimeLong-Long+asc', 
+                                 'popularityDesc': 'Dataset-AllTimePopularity+desc', 'popularityAsc': 'Dataset-AllTimePopularity+asc'}
+                if value in sortByMapping.keys():
+                    sort = sortByMapping[value]
+            elif key == 'bbox':
+                filterQuery = self._constructBoundingBoxQuery(value)
+
+            #if query != '':
+            #    queries.append('%2B'+query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&fq=DatasetPolicy-AccessType-Full:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)+AND+DatasetPolicy-ViewOnline:Y&version=2.2&start='+str(startIndex)+'&rows='+str(entriesPerPage)+'&indent=on&wt=json'
+        if sort is not None:
+            query += '&sort=' + sort
+        if filterQuery is not None:
+            query += '&' + filterQuery
+        logging.debug('solr query: '+query)
+        
+        return query
+    
+    def _getSolrHasGranuleResponse(self, callback):
+        url = self._configuration.get('solr', 'granuleUrl')
+
+        httpUtility = HttpUtility()
+        return httpUtility.getResponse(url+'/select?q=*:*&facet=true&facet.field=Dataset-ShortName-Full&facet.limit=-1&rows=0&indent=on&wt=json&version=2.2', callback)
+    
+    def _getSolrHasGranuleResponseCallback(self, startIndex, entriesPerPage):   
+        def onSolrHasGranuleResponse(response):
+            try:
+                solrJson = json.loads(response.body)
+                logging.debug("Got response for dataset facet")
+                datasetCounter = solrJson['facet_counts']['facet_fields']['Dataset-ShortName-Full']
+                self.datasets = [datasetCounter[i] for i in range(len(datasetCounter)) if i % 2 == 0]
+                self._getSolrResponse(startIndex, entriesPerPage, self.variables)
+            except:
+                logging.exception('Failed to get solr response.')
+        return onSolrHasGranuleResponse
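
Solr returns facet_fields as a flat list alternating term and count, so the even-index comprehension above keeps just the dataset names. For example:

    datasetCounter = ['DATASET_A', 42, 'DATASET_B', 7]
    datasets = [datasetCounter[i] for i in range(len(datasetCounter)) if i % 2 == 0]
    # datasets -> ['DATASET_A', 'DATASET_B']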

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/fgdcresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/fgdcresponse.py b/src/main/python/libraries/edge/opensearch/fgdcresponse.py
new file mode 100644
index 0000000..c8738ce
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/fgdcresponse.py
@@ -0,0 +1,56 @@
+import logging
+
+from jinja2 import Environment, Template
+import re
+import xml.etree.ElementTree
+
+from edge.opensearch.response import Response
+
+class FgdcResponse(Response):
+    def __init__(self):
+        self.namespaces = {}
+        self.env = Environment()
+        self.env.trim_blocks = True
+        self.env.autoescape = True
+        self.variables = {}
+
+    def setTemplate(self, template):
+        self.template = self.env.from_string(template.replace('>\n<', '><'))
+
+    def addNamespace(self, name, uri):
+        self.namespaces[name] = uri
+
+    def removeNamespace(self, name):
+        del self.namespaces[name]
+
+    def generate(self, pretty=False, xmlDeclaration=""):
+        logging.debug('FgdcResponse.generate is called.')
+        fgdcStr = self.template.render(self.variables).encode('utf-8')
+        if fgdcStr != "" and pretty:
+            #xmlDeclaration ="<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n<!DOCTYPE metadata SYSTEM \"http://www.fgdc.gov/metadata/fgdc-std-001-1998.dtd\">\n"
+            tree = xml.etree.ElementTree.fromstring(fgdcStr)
+            self._indent(tree)
+            
+            for namespace in self.namespaces.keys():
+                xml.etree.ElementTree.register_namespace(namespace, self.namespaces[namespace])
+            
+            return xmlDeclaration + xml.etree.ElementTree.tostring(tree, encoding='utf-8')
+        else:
+            return fgdcStr
+
+    # Provided by http://effbot.org/zone/element-lib.htm#prettyprint
+    def _indent(self, elem, level=0):
+        i = "\n" + level * "   "
+        if len(elem):
+            if not elem.text or not elem.text.strip():
+                elem.text = i + "   "
+            if not elem.tail or not elem.tail.strip():
+                elem.tail = i
+            for elem in elem:
+                self._indent(elem, level + 1)
+            if not elem.tail or not elem.tail.strip():
+                elem.tail = i
+        else:
+            if level and (not elem.tail or not elem.tail.strip()):
+                elem.tail = i
+                

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/fgdcresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/fgdcresponsebysolr.py b/src/main/python/libraries/edge/opensearch/fgdcresponsebysolr.py
new file mode 100644
index 0000000..562dc08
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/fgdcresponsebysolr.py
@@ -0,0 +1,141 @@
+import json
+import logging
+
+from edge.opensearch.fgdcresponse import FgdcResponse
+from datetime import datetime
+
+class FgdcResponseBySolr(FgdcResponse):
+    def __init__(self):
+        super(FgdcResponseBySolr, self).__init__()
+
+    def generate(self, solrDatasetResponse, solrGranuleResponse = None, pretty=False):
+        self._populate(solrDatasetResponse, solrGranuleResponse)
+        return super(FgdcResponseBySolr, self).generate(pretty, "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n<!DOCTYPE metadata SYSTEM \"http://www.fgdc.gov/metadata/fgdc-std-001-1998.dtd\">\n")
+
+    def _populate(self, solrDatasetResponse, solrGranuleResponse = None):
+        if solrDatasetResponse is not None:
+            solrJson = json.loads(solrDatasetResponse)
+
+            logging.debug('dataset count: '+str(len(solrJson['response']['docs'])))
+
+            if len(solrJson['response']['docs']) == 1:
+                # ok now populate variables!
+                doc = solrJson['response']['docs'][0]
+                
+                self.variables['doc'] = doc
+                
+                # Round spatial to 3 decimal places
+                doc['DatasetCoverage-WestLon'][0] = '%.3f' % round(float(doc['DatasetCoverage-WestLon'][0]), 3)
+                doc['DatasetCoverage-EastLon'][0] = '%.3f' % round(float(doc['DatasetCoverage-EastLon'][0]), 3)
+                doc['DatasetCoverage-NorthLat'][0] = '%.3f' % round(float(doc['DatasetCoverage-NorthLat'][0]), 3)
+                doc['DatasetCoverage-SouthLat'][0] = '%.3f' % round(float(doc['DatasetCoverage-SouthLat'][0]), 3)
+                
+                # Based on the value of Dataset-ProcessingLevel, we query Solr differently.
+                # For 2 or 2P, we look for these 2 attributes:
+                #
+                #   ACROSS_TRACK_RESOLUTION           NUMBER
+                #   ALONG_TRACK_RESOLUTION            NUMBER
+                #
+                # Because the units of 2 and 2P products are in meters, we have to convert to decimal degrees.
+                #
+                # The formula is:
+                #
+                #    1 degree = 111.16 km or 111160.0 meters 
+                # 
+                # Calculate latitude and longitude resolution for 2 and 2P products
+                if (doc['Dataset-ProcessingLevel'][0] == '2' or doc['Dataset-ProcessingLevel'][0] == '2P'):
+                    self.variables['Dataset_LatitudeResolution'] = '%.17f' % round(float(doc['Dataset-AlongTrackResolution'][0]) / 111160.0, 17)
+                    self.variables['Dataset_LongitudeResolution'] = '%.17f' % round(float(doc['Dataset-AcrossTrackResolution'][0]) / 111160.0, 17)
+                # For a Dataset-ProcessingLevel value of 3 or 4, we look for different attributes:
+                #
+                # LATITUDE_RESOLUTION
+                # LONGITUDE_RESOLUTION
+                elif (doc['Dataset-ProcessingLevel'][0] == '3' or doc['Dataset-ProcessingLevel'][0] == '4'):
+                    self.variables['Dataset_LatitudeResolution'] = doc['Dataset-LatitudeResolution'][0]
+                    self.variables['Dataset_LongitudeResolution'] = doc['Dataset-LongitudeResolution'][0]
+
+                # Format dates
+                try:
+                    self.variables['DatasetCitation_ReleaseDateTime'] = self._convertTimeLongToString(doc['DatasetCitation-ReleaseDateLong'][0])
+                    self.variables['DatasetCitation_ReleaseDate'] = datetime.utcfromtimestamp(float(doc['DatasetCitation-ReleaseDateLong'][0]) / 1000).strftime('%Y%m%d')
+                    self.variables['DatasetCitation_ReleaseTime'] = datetime.utcfromtimestamp(float(doc['DatasetCitation-ReleaseDateLong'][0]) / 1000).strftime('%H%M%S')+'Z'
+                    self.variables['DatasetCoverage_StartTime'] = self._convertTimeLongToString(doc['DatasetCoverage-StartTimeLong'][0])
+                except:
+                    pass
+                
+                # Create list of unique dataset sensor
+                self.variables['UniqueDatasetSensor'] = {}
+                for i, x in enumerate(doc['DatasetSource-Sensor-ShortName']):
+                    self.variables['UniqueDatasetSensor'][x] = i
+                self.variables['UniqueDatasetSensor'] = self.variables['UniqueDatasetSensor'].values()
+                
+                # Create list of unique dataset source
+                self.variables['UniqueDatasetSource'] = {}
+                for i, x in enumerate(doc['DatasetSource-Source-ShortName']):
+                    self.variables['UniqueDatasetSource'][x] = i
+                self.variables['UniqueDatasetSource'] = self.variables['UniqueDatasetSource'].values()
+                
+                # Create dictionary for dataset_resource
+                self.variables['DatasetResource'] = dict(zip(doc['DatasetResource-Type'], doc['DatasetResource-Path']))
+                
+                # Get index of dataset Technical Contact
+                self.variables['TechnicalContactIndex'] = -1
+                for i, x in enumerate(doc['DatasetContact-Contact-Role']):
+                    if (x.upper() == 'TECHNICAL CONTACT'):
+                        logging.debug('tech contact is ' + str(i))
+                        self.variables['TechnicalContactIndex'] = i
+                        break
+                
+                if 'Dataset-Provider-ProviderResource-Path' not in doc:
+                    doc['Dataset-Provider-ProviderResource-Path'] = ['']
+            else:
+                raise Exception('No dataset found')
+                
+        else:
+            raise Exception('No dataset found')
+        
+        if solrGranuleResponse is not None:
+            solrGranuleJson = json.loads(solrGranuleResponse)
+            
+            logging.debug('granule count: '+str(len(solrGranuleJson['response']['docs'])))
+            if (len(solrGranuleJson['response']['docs']) == 0):
+                raise Exception('No granules found')
+            
+            for doc in solrGranuleJson['response']['docs']:
+                self._populateItem(solrGranuleResponse, doc, None)
+                
+                doc['Granule-StartTimeLong'][0] = self._convertTimeLongToString(doc['Granule-StartTimeLong'][0])
+                doc['Granule-StopTimeLong'][0] = self._convertTimeLongToString(doc['Granule-StopTimeLong'][0])
+                
+                # Create dictionary for bounding box extent
+                '''
+                if ('GranuleReal-Value' in doc and 'GranuleReal-DatasetElement-Element-ShortName' in doc):
+                    # Round real value to 3 decimal places
+                    doc['GranuleReal-Value'] = ['%.3f' % round(float(value), 3) for value in doc['GranuleReal-Value']]
+                    doc['GranuleBoundingBox'] = dict(zip(doc['GranuleReal-DatasetElement-Element-ShortName'], doc['GranuleReal-Value']))
+                '''
+                if 'GranuleSpatial-NorthLat' in doc and 'GranuleSpatial-EastLon' in doc and 'GranuleSpatial-SouthLat' in doc and 'GranuleSpatial-WestLon' in doc:
+                    doc['GranuleBoundingBox'] = dict([('southernmostLatitude', '%.3f' % round(float(doc['GranuleSpatial-SouthLat'][0]), 3)), 
+                                                      ('northernmostLatitude', '%.3f' % round(float(doc['GranuleSpatial-NorthLat'][0]), 3)),
+                                                      ('westernmostLongitude', '%.3f' % round(float(doc['GranuleSpatial-WestLon'][0]), 3)),
+                                                      ('easternmostLongitude', '%.3f' % round(float(doc['GranuleSpatial-EastLon'][0]), 3))])
+                else:
+                    # Encountered a granule with no bounding box, so raise an exception
+                    raise Exception('granule ' + doc['Granule-Name'][0] + ' has no bounding box')
+            self.variables['granules'] = solrGranuleJson['response']['docs']
+        else:
+            raise Exception('No granules found')
+                
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
+    
+    def _convertTimeLongToString(self, time):
+        isoTime = ''
+        try:
+            isoTime = datetime.utcfromtimestamp(float(time) / 1000).strftime('%Y%m%dT%H%M%SZ')
+        except ValueError:
+            pass
+        return isoTime
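
A worked instance of the level 2/2P conversion above, using the 111160.0 meters-per-degree figure from the comment block; the 1000 m along-track resolution is hypothetical:

    alongTrackResolutionMeters = 1000.0   # hypothetical L2 value
    latitudeResolutionDegrees = alongTrackResolutionMeters / 111160.0
    # about 0.0089960 decimal degrees, formatted with '%.17f' in the code above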

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/gcmdresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/gcmdresponsebysolr.py b/src/main/python/libraries/edge/opensearch/gcmdresponsebysolr.py
new file mode 100644
index 0000000..588fc4a
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/gcmdresponsebysolr.py
@@ -0,0 +1,123 @@
+import json
+import logging
+
+from edge.opensearch.isoresponse import IsoResponse
+from datetime import date, datetime
+
+class GcmdResponseBySolr(IsoResponse):
+    def __init__(self, configuration):
+        super(GcmdResponseBySolr, self).__init__()
+        self._configuration = configuration
+
+    def generate(self, solrResponse, pretty=False, allowNone=False):
+        self._populate(solrResponse, allowNone)
+        return super(GcmdResponseBySolr, self).generate(pretty)
+
+    def _populate(self, solrResponse, allowNone):
+        if solrResponse is not None:
+            solrJson = json.loads(solrResponse)
+
+            logging.debug('dataset count: '+str(len(solrJson['response']['docs'])))
+
+            if len(solrJson['response']['docs']) == 1:
+                # ok now populate variables!
+                doc = solrJson['response']['docs'][0]
+
+                #self.variables['Dataset_ShortName'] = doc['Dataset-ShortName'][0]
+                #self.variables['Dataset_ShortName'] = u'unko'
+                
+                #Filter response from solr, if value contains none, N/A, null set to empty string
+                if not allowNone:
+                    for key, value in doc.iteritems():
+                        if key not in ['DatasetPolicy-AccessConstraint', 'DatasetPolicy-UseConstraint'] and isinstance(value[0], basestring) and len(value[0].strip()) <= 4 and value[0].strip().lower() in ['none', 'na', 'n/a', 'null']:
+                            doc[key][0] = ""
+                
+                self.variables['doc'] = doc
+                
+                # Entry_ID
+                self.variables['Entry_ID'] = doc['Dataset-PersistentId'][0] if doc['Dataset-PersistentId'][0] != "" else doc['Dataset-ShortName'][0]
+                
+                # Entry_Title
+                self.variables['Entry_Title'] = doc['Dataset-LongName'][0]
+                
+                # Dataset_Citation
+                datasetCitationCol = ['Dataset_Creator', 'Dataset_Title', 'Dataset_Series_Name', 'Dataset_Release_Date', 'Dataset_Release_Place', 'Dataset_Publisher', 'Version', 'Other_Citation_Details', 'Online_Resource']
+                if 'DatasetCitation-Creator' in doc:
+                    for i, x in enumerate(doc['DatasetCitation-ReleaseDateLong']):
+                        try:
+                            doc['DatasetCitation-ReleaseDateLong'][i] = datetime.utcfromtimestamp(float(x) / 1000).strftime('%Y-%m-%d')
+                        except:
+                            pass
+                    self.variables['Dataset_Citation'] = [dict(zip(datasetCitationCol,x)) for x in zip(doc['DatasetCitation-Creator'], doc['DatasetCitation-Title'], doc['DatasetCitation-SeriesName'], doc['DatasetCitation-ReleaseDateLong'], doc['DatasetCitation-ReleasePlace'], doc['DatasetCitation-Publisher'], doc['DatasetCitation-Version'], doc['DatasetCitation-CitationDetail'], doc['DatasetCitation-OnlineResource'])]
+                
+                # Personnel
+                datasetPersonnelCol = ['Role', 'First_Name', 'Middle_Name', 'Last_Name', 'Email', 'Phone', 'Fax', 'Provider_Short_Name']
+                if 'DatasetContact-Contact-Role' in doc:
+                    self.variables['Personnel'] = [dict(zip(datasetPersonnelCol, x)) for x in zip(doc['DatasetContact-Contact-Role'], doc['DatasetContact-Contact-FirstName'], doc['DatasetContact-Contact-MiddleName'], doc['DatasetContact-Contact-LastName'], doc['DatasetContact-Contact-Email'], doc['DatasetContact-Contact-Phone'], doc['DatasetContact-Contact-Fax'], doc['DatasetContact-Contact-Provider-ShortName'])]
+                
+                # Locate dataset provider contact (guard against documents with no contact records)
+                self.variables['Provider_Personnel'] = None
+                if 'Personnel' in self.variables:
+                    self.variables['Provider_Personnel'] = next((item for item in self.variables['Personnel'] if item["Provider_Short_Name"] == doc['Dataset-Provider-ShortName'][0]), None)
+                
+                # Parameter
+                datasetParameterCol = ['Category', 'Topic', 'Term', 'Variable_Level_1', 'Detailed_Variable']
+                if 'DatasetParameter-Category' in doc:
+                    # Replace all none, None values with empty string
+                    doc['DatasetParameter-VariableDetail'] = [self._filterString(variableDetail) for variableDetail in doc['DatasetParameter-VariableDetail']]
+                    self.variables['Parameters'] = [dict(zip(datasetParameterCol, x)) for x in zip(doc['DatasetParameter-Category'], doc['DatasetParameter-Topic'], doc['DatasetParameter-Term'], doc['DatasetParameter-Variable'], doc['DatasetParameter-VariableDetail'])]
+                
+                # Format dates
+                try:
+                    self.variables['Start_Date'] = datetime.utcfromtimestamp(float(doc['DatasetCoverage-StartTimeLong'][0]) / 1000).strftime('%Y-%m-%d')
+                    self.variables['Stop_Date'] = datetime.utcfromtimestamp(float(doc['DatasetCoverage-StopTimeLong'][0]) / 1000).strftime('%Y-%m-%d')
+                except:
+                    # Coverage times are optional; skip formatting if missing or malformed
+                    pass
+                
+                # Project
+                projectCol = ['Short_Name', 'Long_Name']
+                if 'DatasetProject-Project-ShortName' in doc:
+                    self.variables['Project'] = [dict(zip(projectCol, x)) for x in zip(doc['DatasetProject-Project-ShortName'], doc['DatasetProject-Project-LongName'])]
+                
+                # Create list of unique dataset sensor
+                self.variables['UniqueDatasetSensor'] = {}
+                if 'DatasetSource-Sensor-ShortName' in doc:
+                    for i, x in enumerate(doc['DatasetSource-Sensor-ShortName']):
+                        self.variables['UniqueDatasetSensor'][x] = i
+                    self.variables['UniqueDatasetSensor'] = self.variables['UniqueDatasetSensor'].values()
+                
+                # Create list of unique dataset source
+                self.variables['UniqueDatasetSource'] = {}
+                if 'DatasetSource-Source-ShortName' in doc:
+                    for i, x in enumerate(doc['DatasetSource-Source-ShortName']):
+                        self.variables['UniqueDatasetSource'][x] = i
+                    self.variables['UniqueDatasetSource'] = self.variables['UniqueDatasetSource'].values()
+                
+                # Last_DIF_Revision_Date
+                self.variables['Last_DIF_Revision_Date'] = datetime.utcfromtimestamp(float(doc['DatasetMetaHistory-LastRevisionDateLong'][0]) / 1000).strftime('%Y-%m-%d')
+                
+                # DIF_Revision_History
+                self.variables['DIF_Revision_History'] = doc['DatasetMetaHistory-RevisionHistory'][0]
+                
+                # DIF_Creation_Date
+                self.variables['DIF_Creation_Date'] = datetime.utcnow().strftime('%Y-%m-%d')
+                
+                # Set configurable DIF Author contact information
+                self.variables['author'] = dict(self._configuration.items('author'))
+
+                # Set configurable PO.DAAC and NODC contact information
+                self.variables['podaac'] = dict(self._configuration.items('podaac'))
+                self.variables['nodc'] = dict(self._configuration.items('nodc'))
+                
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
+    
+    def _filterString(self, value):
+        # Treat the literal string 'none' as an empty value
+        if value.lower() == 'none':
+            return ''
+        else:
+            return value
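
The *Long fields above (e.g. DatasetCitation-ReleaseDateLong) hold epoch
milliseconds as strings, which the class converts to '%Y-%m-%d' dates. A
minimal standalone sketch of that conversion, with a hypothetical sample value:

    from datetime import datetime

    def millis_to_date(millis):
        # Solr *TimeLong fields store epoch milliseconds; divide by 1000
        # to get seconds for utcfromtimestamp().
        return datetime.utcfromtimestamp(float(millis) / 1000).strftime('%Y-%m-%d')

    print(millis_to_date('1325376000000'))  # -> 2012-01-01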

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granuleatomresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granuleatomresponse.py b/src/main/python/libraries/edge/opensearch/granuleatomresponse.py
new file mode 100644
index 0000000..9b38347
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granuleatomresponse.py
@@ -0,0 +1,110 @@
+import datetime
+import urllib
+
+from edge.opensearch.atomresponsebysolr import AtomResponseBySolr
+from edge.dateutility import DateUtility
+
+class GranuleAtomResponse(AtomResponseBySolr):
+    def __init__(self, linkToGranule, host, url):
+        super(GranuleAtomResponse, self).__init__()
+
+        self.linkToGranule = linkToGranule.split(',')
+        self.host = host
+        self.url = url
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-dataset-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        item.append({'name': 'title', 'value': doc['Granule-Name'][0]})
+        #item.append({'name': 'content', 'value': doc['Granule-Name'][0]})
+        
+        updated = None
+        startTime = None
+        if 'Granule-StartTimeLong' in doc and doc['Granule-StartTimeLong'][0] != '':
+            updated = DateUtility.convertTimeLongToIso(doc['Granule-StartTimeLong'][0])
+            startTime = updated
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'updated', 'value': updated})
+        item.append({'name': 'id', 'value': doc['Dataset-PersistentId'][0] + ':' + doc['Granule-Name'][0]})
+        
+        parameters = {'datasetId': doc['Dataset-PersistentId'][0], 'granuleName': doc['Granule-Name'][0]}
+        parameters['full'] = 'true'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath + 'granule?' + urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'application/atom+xml', 'title': 'PO.DAAC Metadata' }})
+        del parameters['full']
+        parameters['format'] = 'iso'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'ISO-19115 Metadata' }})
+        parameters['format'] = 'fgdc'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'FGDC Metadata' }})
+        
+        #item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        #item.append({'name': 'link', 'value': self.portalUrl+'/'+doc['Dataset-ShortName'][0]})
+        #link = self._getLinkToGranule(doc)
+        #if link['href'] is not None:
+        #    item.append({'name': 'link', 'attribute': link})
+        if 'GranuleReference-Type' in doc:
+            if 'Granule-DataFormat' in doc:
+                type = 'application/x-' + doc['Granule-DataFormat'][0].lower()
+            else:
+                type = 'text/plain'
+            #Look for ONLINE reference only
+            granuleRefDict = dict([(doc['GranuleReference-Type'][i], doc['GranuleReference-Path'][i]) for i,x in enumerate(doc['GranuleReference-Status']) if x=="ONLINE"])
+            if 'LOCAL-OPENDAP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['LOCAL-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            elif 'REMOTE-OPENDAP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['REMOTE-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            if 'LOCAL-FTP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['LOCAL-FTP'], 'rel': 'enclosure', 'type': type, 'title': 'FTP URL' }})
+            elif 'REMOTE-FTP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['REMOTE-FTP'], 'rel': 'enclosure', 'type': type, 'title': 'FTP URL' }})
+
+        item.append({'namespace': 'podaac', 'name': 'datasetId', 'value': doc['Dataset-PersistentId'][0]})
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        
+        if 'GranuleSpatial-NorthLat' in doc and 'GranuleSpatial-EastLon' in doc and 'GranuleSpatial-SouthLat' in doc and 'GranuleSpatial-WestLon' in doc:
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([doc['GranuleSpatial-WestLon'][0], doc['GranuleSpatial-SouthLat'][0]])}, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([doc['GranuleSpatial-EastLon'][0], doc['GranuleSpatial-NorthLat'][0]])}]}})
+        
+        if startTime is not None:
+            item.append({'namespace': 'time', 'name': 'start', 'value': startTime})
+
+        if 'Granule-StopTimeLong' in doc and doc['Granule-StopTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['Granule-StopTimeLong'][0])})
+
+        if 'full' in self.parameters and self.parameters['full']:
+            multiValuedElementsKeys = ('GranuleArchive-', 'GranuleReference-')
+            self._populateItemWithPodaacMetadata(doc, item, multiValuedElementsKeys)
+
+    '''
+    def _getLinkToGranule(self, doc):
+        attr = {}
+        link = None
+
+        if 'GranuleReference-Type' in doc and len(self.linkToGranule) > 0:
+            granuleRefDict = dict(zip(doc['GranuleReference-Type'], zip(doc['GranuleReference-Path'], doc['GranuleReference-Status'])))
+
+            for type in self.linkToGranule:
+                # check if reference type exists
+                if type in granuleRefDict:
+                    # check if reference is online
+                    if granuleRefDict[type][1] == 'ONLINE':
+                        link = granuleRefDict[type][0]
+                        break
+            if link is not None:
+                attr['rel'] = 'http://esipfed.org/ns/discovery/1.1/data#'
+                attr['title'] = 'Granule File'
+                
+                if 'GranuleArchive-Name' in doc and 'GranuleArchive-Type' in doc and 'GranuleArchive-FileSize' in doc:
+                    granuleArchiveDict  = dict(zip(doc['GranuleArchive-Type'], zip(doc['GranuleArchive-Name'], doc['GranuleArchive-FileSize']))) 
+                    if link.endswith(granuleArchiveDict['DATA'][0]):
+                        attr['size'] = granuleArchiveDict['DATA'][1]
+                
+                if 'Granule-DataFormat' in doc:
+                    attr['type'] = 'application/x-' + doc['Granule-DataFormat'][0].lower()
+        else:
+            #No link to granule download provided so create link back to opensearch to retrieve granule metadata
+            link = "http://" + self.host + "/granule/opensearch.atom?granule=" + doc['Granule-Name'][0]
+        attr['href'] = link
+        return attr
+    '''
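
The dict comprehension in _populateItem above keeps only references whose
status is ONLINE, keyed by reference type. A small sketch of the same idea,
using hypothetical parallel Solr fields:

    # Hypothetical parallel-array fields for one granule document
    doc = {
        'GranuleReference-Type':   ['LOCAL-OPENDAP', 'LOCAL-FTP', 'REMOTE-FTP'],
        'GranuleReference-Path':   ['http://opendap/x', 'ftp://local/x', 'ftp://remote/x'],
        'GranuleReference-Status': ['ONLINE', 'OFFLINE', 'ONLINE'],
    }

    # Keep ONLINE references only, keyed by type
    refs = dict([(doc['GranuleReference-Type'][i], doc['GranuleReference-Path'][i])
                 for i, x in enumerate(doc['GranuleReference-Status']) if x == 'ONLINE'])

    print(refs)  # LOCAL-OPENDAP and REMOTE-FTP entries survive; LOCAL-FTP is dropped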

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granuledatacastingresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granuledatacastingresponse.py b/src/main/python/libraries/edge/opensearch/granuledatacastingresponse.py
new file mode 100644
index 0000000..24b5dc0
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granuledatacastingresponse.py
@@ -0,0 +1,41 @@
+import logging
+
+from edge.dateutility import DateUtility
+from edge.opensearch.datacastingresponsebysolr import DatacastingResponseBySolr
+
+class GranuleDatacastingResponse(DatacastingResponseBySolr):
+    def __init__(self, portalUrl, linkToGranule, archivedWithin):
+        super(GranuleDatacastingResponse, self).__init__(portalUrl, archivedWithin)
+
+        self.linkToGranule = linkToGranule.split(',')
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        doc['Granule-StartTimeLong'][0] = DateUtility.convertTimeLongToRFC822(doc['Granule-StartTimeLong'][0])
+        doc['Granule-StopTimeLong'][0] = DateUtility.convertTimeLongToRFC822(doc['Granule-StopTimeLong'][0])
+        doc['Granule-ArchiveTimeLong'][0] = DateUtility.convertTimeLongToRFC822(doc['Granule-ArchiveTimeLong'][0])
+        
+        doc['GranuleLink'] = self._getLinkToGranule(doc)
+        
+        doc['GranuleFileSize'] = dict(zip(doc['GranuleArchive-Type'], doc['GranuleArchive-FileSize']))
+        
+        if 'GranuleReference-Type' in doc:
+            doc['GranuleReference'] = dict([(doc['GranuleReference-Type'][i], doc['GranuleReference-Path'][i]) for i,x in enumerate(doc['GranuleReference-Status']) if x=="ONLINE"])
+
+    def _getLinkToGranule(self, doc):
+        link = None
+
+        if 'GranuleReference-Type' in doc and len(self.linkToGranule) > 0:
+            granuleRefDict = dict(zip(doc['GranuleReference-Type'], zip(doc['GranuleReference-Path'], doc['GranuleReference-Status'])))
+
+            for type in self.linkToGranule:
+                # check if reference type exists
+                if type in granuleRefDict:
+                    # check if reference is online
+                    if granuleRefDict[type][1] == 'ONLINE':
+                        link = granuleRefDict[type][0]
+                        break
+
+        return link
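
_getLinkToGranule walks the configured, comma-separated linkToGranule
preference list and returns the first reference that both exists and is
ONLINE. A standalone sketch with hypothetical values:

    def pick_link(preferred_types, ref_types, ref_paths, ref_status):
        # Mirrors _getLinkToGranule: first preferred type that is ONLINE wins.
        refs = dict(zip(ref_types, zip(ref_paths, ref_status)))
        for t in preferred_types:
            if t in refs and refs[t][1] == 'ONLINE':
                return refs[t][0]
        return None

    print(pick_link(['LOCAL-FTP', 'REMOTE-FTP'],
                    ['REMOTE-FTP', 'LOCAL-FTP'],
                    ['ftp://remote/g', 'ftp://local/g'],
                    ['ONLINE', 'OFFLINE']))  # -> ftp://remote/g (LOCAL-FTP is OFFLINE)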

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granulefgdcresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granulefgdcresponse.py b/src/main/python/libraries/edge/opensearch/granulefgdcresponse.py
new file mode 100644
index 0000000..0582f60
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granulefgdcresponse.py
@@ -0,0 +1,13 @@
+import datetime
+
+from edge.opensearch.fgdcresponsebysolr import FgdcResponseBySolr
+
+class GranuleFgdcResponse(FgdcResponseBySolr):
+    def __init__(self):
+        super(GranuleFgdcResponse, self).__init__()
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granuleisoresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granuleisoresponse.py b/src/main/python/libraries/edge/opensearch/granuleisoresponse.py
new file mode 100644
index 0000000..7b9b0a7
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granuleisoresponse.py
@@ -0,0 +1,33 @@
+import datetime
+
+from edge.opensearch.isoresponsebysolr import IsoResponseBySolr
+
+class GranuleIsoResponse(IsoResponseBySolr):
+    def __init__(self, linkToGranule):
+        super(GranuleIsoResponse, self).__init__()
+
+        self.linkToGranule = linkToGranule.split(',')
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        link = self._getLinkToGranule(doc)
+        if link is not None:
+            doc['link'] = link
+
+    def _getLinkToGranule(self, doc):
+        link = None
+
+        if 'GranuleReference-Type' in doc and len(self.linkToGranule) > 0:
+            granuleRefDict = dict(zip(doc['GranuleReference-Type'], zip(doc['GranuleReference-Path'], doc['GranuleReference-Status'])))
+
+            for type in self.linkToGranule:
+                # check if reference type exists
+                if type in granuleRefDict:
+                    # check if reference is online
+                    if granuleRefDict[type][1] == 'ONLINE':
+                        link = granuleRefDict[type][0]
+                        break
+
+        return link

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granuleresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granuleresponsebysolr.py b/src/main/python/libraries/edge/opensearch/granuleresponsebysolr.py
new file mode 100644
index 0000000..1b1ca80
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granuleresponsebysolr.py
@@ -0,0 +1,37 @@
+from edge.opensearch.responsebysolr import ResponseBySolr
+
+class GranuleResponseBySolr(ResponseBySolr):
+    def __init__(self, linkToGranule):
+        super(GranuleResponseBySolr, self).__init__()
+
+        self.linkToGranule = linkToGranule
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        item.append({'name': 'title', 'value': doc['Granule-Name'][0]})
+        item.append({'name': 'description', 'value': doc['Granule-Name'][0]})
+        #item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        #item.append({'name': 'link', 'value': self.portalUrl+'/'+doc['Dataset-ShortName'][0]})
+        link = self._getLinkToGranule(doc)
+        if link is not None:
+            item.append({'name': 'link', 'value': link})
+
+    def _getLinkToGranule(self, doc):
+        link = None
+
+        if 'GranuleReference-Type' in doc:
+            types = doc['GranuleReference-Type']
+
+            typeIndex = -1
+            for index, type in enumerate(types):
+                if type == self.linkToGranule:
+                    typeIndex = index
+                    break
+
+            if typeIndex >= 0:
+                if ('GranuleReference-Path' in doc) and (len(doc['GranuleReference-Path']) > typeIndex):
+                    link = doc['GranuleReference-Path'][typeIndex]
+
+        return link

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granulerssresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granulerssresponse.py b/src/main/python/libraries/edge/opensearch/granulerssresponse.py
new file mode 100644
index 0000000..a514cca
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granulerssresponse.py
@@ -0,0 +1,96 @@
+import datetime
+import urllib
+from edge.opensearch.rssresponsebysolr import RssResponseBySolr
+from edge.dateutility import DateUtility
+
+class GranuleRssResponse(RssResponseBySolr):
+    def __init__(self, linkToGranule, host, url):
+        super(GranuleRssResponse, self).__init__()
+
+        self.linkToGranule = linkToGranule.split(',')
+        self.host = host
+        self.url = url
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'namespace':'atom', 'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-dataset-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        item.append({'name': 'title', 'value': doc['Granule-Name'][0]})
+        item.append({'name': 'description', 'value': doc['Granule-Name'][0]})
+        #item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        #item.append({'name': 'link', 'value': self.portalUrl+'/'+doc['Dataset-ShortName'][0]})
+        
+        updated = None
+        startTime = None
+        if 'Granule-StartTimeLong' in doc and doc['Granule-StartTimeLong'][0] != '':
+            updated = DateUtility.convertTimeLongToIso(doc['Granule-StartTimeLong'][0])
+            startTime = updated
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'pubDate', 'value': updated})
+        item.append({'name': 'guid', 'value': doc['Dataset-PersistentId'][0] + ':' + doc['Granule-Name'][0]})
+        
+        link = self._getLinkToGranule(doc)
+        if link is not None:
+            item.append({'name': 'link', 'value': link})
+        
+        parameters = {'datasetId': doc['Dataset-PersistentId'][0], 'granuleName': doc['Granule-Name'][0]}
+        parameters['full'] = 'true'
+        parameters['format'] = 'rss'
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url+self.searchBasePath + 'granule?' + urllib.urlencode(parameters), 'type': 'application/rss+xml', 'length': '0'}})
+        del parameters['full']
+        parameters['format'] = 'iso'
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'type': 'text/xml', 'length': '0'}})
+        parameters['format'] = 'fgdc'
+        item.append({'name': 'enclosure', 'attribute': {'url': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'type': 'text/xml', 'length': '0'}})
+        
+        if 'GranuleReference-Type' in doc:
+            if 'Granule-DataFormat' in doc:
+                type = 'application/x-' + doc['Granule-DataFormat'][0].lower()
+            else:
+                type = 'text/plain'
+            #Look for ONLINE reference only
+            granuleRefDict = dict([(doc['GranuleReference-Type'][i], doc['GranuleReference-Path'][i]) for i,x in enumerate(doc['GranuleReference-Status']) if x=="ONLINE"])
+            if 'LOCAL-OPENDAP' in granuleRefDict:
+                item.append({'name': 'enclosure', 'attribute': {'url': granuleRefDict['LOCAL-OPENDAP'], 'type': 'text/html', 'length': '0'}})
+            elif 'REMOTE-OPENDAP' in granuleRefDict:
+                item.append({'name': 'enclosure', 'attribute': {'url': granuleRefDict['REMOTE-OPENDAP'], 'type': 'text/html', 'length': '0'}})
+            if 'LOCAL-FTP' in granuleRefDict:
+                item.append({'name': 'enclosure', 'attribute': {'url': granuleRefDict['LOCAL-FTP'], 'type': type, 'length': '0'}})
+            elif 'REMOTE-FTP' in granuleRefDict:
+                item.append({'name': 'enclosure', 'attribute': {'url': granuleRefDict['REMOTE-FTP'], 'type': type, 'length': '0'}})
+
+        item.append({'namespace': 'podaac', 'name': 'datasetId', 'value': doc['Dataset-PersistentId'][0]})
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        
+        if 'GranuleSpatial-NorthLat' in doc and 'GranuleSpatial-EastLon' in doc and 'GranuleSpatial-SouthLat' in doc and 'GranuleSpatial-WestLon' in doc:
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([doc['GranuleSpatial-WestLon'][0], doc['GranuleSpatial-SouthLat'][0]])}, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([doc['GranuleSpatial-EastLon'][0], doc['GranuleSpatial-NorthLat'][0]])}]}})
+        
+        if 'Granule-StartTimeLong' in doc and doc['Granule-StartTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'start', 'value': DateUtility.convertTimeLongToIso(doc['Granule-StartTimeLong'][0])})
+
+        if 'Granule-StopTimeLong' in doc and doc['Granule-StopTimeLong'][0] != '':
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['Granule-StopTimeLong'][0])})
+
+        if 'full' in self.parameters and self.parameters['full']:
+            multiValuedElementsKeys = ('GranuleArchive-', 'GranuleReference-')
+            self._populateItemWithPodaacMetadata(doc, item, multiValuedElementsKeys)
+
+    def _getLinkToGranule(self, doc):
+        link = None
+
+        if 'GranuleReference-Type' in doc and len(self.linkToGranule) > 0:
+            granuleRefDict = dict(zip(doc['GranuleReference-Type'], zip(doc['GranuleReference-Path'], doc['GranuleReference-Status'])))
+
+            for type in self.linkToGranule:
+                # check if reference type exists
+                if type in granuleRefDict:
+                    # check if reference is online
+                    if granuleRefDict[type][1] == 'ONLINE':
+                        link = granuleRefDict[type][0]
+                        break
+        else:
+            #No link to granule download provided so create link back to opensearch to retrieve granule metadata
+            link = "http://" + self.host + "/granule/opensearch.rss?granule=" + doc['Granule-Name'][0]
+
+        return link
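
The enclosure links above are assembled by url-encoding the search parameters
and appending them to the configured service paths. A sketch of that pattern
(Python 2 urllib, with hypothetical url and path values):

    import urllib

    url, searchBasePath = 'http://podaac.example.gov', '/ws/search/'
    parameters = {'datasetId': 'PODAAC-EXAMPLE', 'granuleName': 'g1.nc',
                  'full': 'true', 'format': 'rss'}

    # Parameter order in the query string may vary with dict iteration order
    enclosure = url + searchBasePath + 'granule?' + urllib.urlencode(parameters)
    print(enclosure)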

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/granulewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/granulewriter.py b/src/main/python/libraries/edge/opensearch/granulewriter.py
new file mode 100644
index 0000000..ddbb194
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/granulewriter.py
@@ -0,0 +1,251 @@
+from types import *
+import logging
+import urllib
+import json
+
+from edge.opensearch.responsewriter import ResponseWriter
+from edge.dateutility import DateUtility
+from edge.httputility import HttpUtility
+from edge.spatialsearch import SpatialSearch
+import re
+
+class GranuleWriter(ResponseWriter):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(GranuleWriter, self).__init__(configFilePath, requiredParams)
+        self.startIndex = 0
+        self.entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+
+    def get(self, requestHandler):
+        super(GranuleWriter, self).get(requestHandler)
+        #searchParameters = {}
+        #logging.debug('uri: '+str(requestHandler.request.headers))
+        
+        #startIndex = 0
+        try:
+            self.startIndex = requestHandler.get_argument('startIndex')
+        except:
+            # Argument is optional; keep the default when absent
+            pass
+
+        #entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            self.entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            #cap entries per page at 400
+            if (int(self.entriesPerPage) > 400):
+                self.entriesPerPage = 400
+            self.searchParameters['itemsPerPage'] = self.entriesPerPage
+        except:
+            pass
+
+        #pretty = True
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'false':
+                self.pretty = False
+                self.searchParameters['pretty'] = 'false'
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('full').lower() == 'true':
+                self.searchParameters['full'] = 'true'
+        except:
+            pass
+        
+        try:
+            self.searchParameters['format'] = requestHandler.get_argument('format')
+        except:
+            pass
+
+        parameters = ['startTime', 'endTime', 'keyword', 'granuleName', 'datasetId', 'shortName', 'bbox', 'sortBy']
+        #variables = {}
+        for parameter in parameters:
+            try:
+                value = requestHandler.get_argument(parameter)
+                self.variables[parameter] = value
+                self.searchParameters[parameter] = value
+            except:
+                pass
+
+        if 'keyword' in self.variables:
+            self.variables['keyword'] = self.variables['keyword'].replace('*', '')
+            self.variables['keyword'] = self.variables['keyword'].lower()
+
+        #Fetch dataset metadata from Solr
+        datasetVariables = {}
+        if 'datasetId' in self.variables:
+            datasetVariables['datasetId'] = self.variables['datasetId']
+        if 'shortName' in self.variables:
+            datasetVariables['shortName'] = self.variables['shortName']
+        self._getSingleSolrDatasetResponse(datasetVariables, self._onSolrDetermineProcessLevelResponse)
+
+    def _getSolrResponse(self, startIndex, entriesPerPage, variables):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, variables)
+        url = self._configuration.get('solr', 'granuleUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/select/?'+query, self._onSolrResponse)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, variables):
+        #set default sort order
+        sort='Granule-StartTimeLong+desc'
+        filterQuery = None
+        queries = []
+        for key, value in variables.iteritems():
+            #query = ''
+            if key == 'startTime':
+                startTime = DateUtility.convertISOToUTCTimestamp(value)
+                if startTime is not None:
+                    query = 'Granule-StopTimeLong:'
+                    query += '['+str(startTime)+'%20TO%20*]'
+                    queries.append(query)
+            elif key == 'endTime':
+                stopTime = DateUtility.convertISOToUTCTimestamp(value)
+                if stopTime is not None:
+                    query = 'Granule-StartTimeLong:'
+                    query += '[*%20TO%20'+str(stopTime)+']'
+                    queries.append(query)
+            elif key == 'keyword':
+                newValue = urllib.quote(value)
+
+                query = 'SearchableText-LowerCased:('+newValue+')'
+                queries.append(query)
+            elif key == 'datasetId':
+                query = 'Dataset-PersistentId:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'granuleName':
+                query = 'Granule-Name-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'granuleIds':
+                granuleIds = []
+                for granuleId in value:
+                    granuleIds.append(str(granuleId))
+                query = 'Granule-Id:('+'+OR+'.join(granuleIds)+')'
+                queries.append(query)
+
+                startIndex = 0
+            elif key == 'sortBy':
+                sortByMapping = {'timeAsc': 'Granule-StartTimeLong+asc'}
+                if value in sortByMapping.keys():
+                    sort = sortByMapping[value]
+            elif key == 'bbox':
+                filterQuery = self._constructBoundingBoxQuery(value)
+            #if query != '':
+            #    queries.append('%2B'+query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&fq=Granule-AccessType:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)+AND+Granule-Status:ONLINE&version=2.2&start='+str(startIndex)+'&rows='+str(entriesPerPage)+'&indent=on&wt=json&sort='+sort
+        if filterQuery is not None:
+            query += '&' + filterQuery
+        logging.debug('solr query: '+query)
+        
+        return query
+    
+    def _onSolrDetermineProcessLevelResponse(self, response):
+        try:
+            #Determine dataset processing level
+            processingLevel = None
+            solrJson = json.loads(response.body)
+            if len(solrJson['response']['docs']) >= 1:
+                if 'bbox' in self.variables:
+                    processingLevel = solrJson['response']['docs'][0]['Dataset-ProcessingLevel-Full'][0]
+                
+                    if processingLevel is not None and processingLevel.find('2') != -1:
+                        if self._configuration.get('service', 'bbox') == 'l2':
+                            #Call Matt's L2 Search Service
+                            #raise Exception(self._configuration.get('service', 'l2')+'?'+requestHandler.request.query)
+                            httpUtility = HttpUtility()
+                            url = self._configuration.get('service', 'l2') + '?'
+                            if 'format' not in self.requestHandler.request.arguments:
+                                url += 'format=atom&'
+                            url += self.requestHandler.request.query
+                            logging.debug("Calling L2 Service: " + url)
+                            result = httpUtility.getResponse(url, self._onL2Response)
+                        else:
+                            points = self.variables['bbox'].split(',')
+                            if len(points) == 4:
+                                spatialSearch = SpatialSearch(
+                                    self._configuration.get('service', 'database')
+                                )
+                                spatialResult = spatialSearch.searchGranules(
+                                    int(self.startIndex),
+                                    int(self.entriesPerPage),
+                                    float(points[0]),
+                                    float(points[1]),
+                                    float(points[2]),
+                                    float(points[3])
+                                )
+                                logging.debug("Granule spatial search returned")
+                                #if len(spatialResult[0]) > 0:
+                                self.variables['granuleIds'] = spatialResult[0]
+                                self.variables['granuleIdsFound'] = spatialResult[1]
+                
+                            del self.variables['bbox']
+                            solrJson = {'responseHeader': {'params': {}}, 'response': {}}
+                            solrJson['response']['numFound'] = int(self.variables['granuleIdsFound'])
+                            solrJson['response']['start'] = int(self.startIndex)
+                            solrJson['responseHeader']['params']['rows'] = int(self.entriesPerPage)
+                            solrJson['response']['docs'] = []
+                            for name in self.variables['granuleIds']:
+                                solrJson['response']['docs'].append({'Granule-Name': [name]})
+                            solrResponse = json.dumps(solrJson)
+                            
+                            searchText = ''
+                            if 'keyword' in self.variables:
+                                searchText = self.variables['keyword']
+                            openSearchResponse = self._generateOpenSearchResponse(
+                                solrResponse,
+                                searchText,
+                                self._configuration.get('service', 'url')+self.requestHandler.request.path,
+                                self.searchParameters,
+                                self.pretty
+                            )
+                            
+                            self.requestHandler.set_header("Content-Type", "application/xml")
+                            #requestHandler.set_header("Content-Type", "application/rss+xml")
+                            #requestHandler.write(solrResponse)
+                            self.requestHandler.write(openSearchResponse)
+                            self.requestHandler.finish()
+                    else:
+                        #Dataset is not an L2 dataset so handle search via Solr
+                        try:
+                            self._getSolrResponse(self.startIndex, self.entriesPerPage, self.variables)
+                        except:
+                            logging.exception('Failed to get solr response.')
+                else:
+                    #Not a bounding box search so handle search via Solr
+                    try:
+                        self._getSolrResponse(self.startIndex, self.entriesPerPage, self.variables)
+                    except:
+                        logging.exception('Failed to get solr response.')
+            else:
+                #Dataset metadata cannot be retrieved so return empty search result
+                solrJson = {'responseHeader': {'params': {}}, 'response': {}}
+                solrJson['response']['numFound'] = 0
+                solrJson['response']['start'] = int(self.startIndex)
+                solrJson['responseHeader']['params']['rows'] = int(self.entriesPerPage)
+                solrJson['response']['docs'] = []
+                solrResponse = json.dumps(solrJson)
+                
+                self._writeResponse(solrResponse)
+        except BaseException as exception:
+            logging.exception('Failed to determine dataset processing level for bbox search ' + str(exception))
+            self._handleException(str(exception))
+
+    def _onL2Response(self, response):
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            try:
+                logging.debug('header: Content-Type '+response.headers['Content-Type'])
+                self.requestHandler.set_header('Content-Type', response.headers['Content-Type'])
+                logging.debug('header: Content-Length '+response.headers['Content-Length'])
+                self.requestHandler.set_header('Content-Length', response.headers['Content-Length'])
+            except:
+                pass
+            self.requestHandler.write(response.body)
+            self.requestHandler.finish()
+    
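
_constructSolrQuery joins one clause per recognized parameter with '+AND+',
then appends a fixed access-type/status filter plus paging and sort options.
A sketch of the string it would assemble for a hypothetical datasetId plus
startTime search:

    startIndex, entriesPerPage = 0, 10
    sort = 'Granule-StartTimeLong+desc'
    queries = [
        'Granule-StopTimeLong:[1325376000000%20TO%20*]',  # startTime clause
        'Dataset-PersistentId:PODAAC-EXAMPLE',            # datasetId clause
    ]

    query = ('q=' + '+AND+'.join(queries)
             + '&fq=Granule-AccessType:(OPEN+OR+PREVIEW+OR+SIMULATED+OR+REMOTE)'
             + '+AND+Granule-Status:ONLINE&version=2.2'
             + '&start=' + str(startIndex) + '&rows=' + str(entriesPerPage)
             + '&indent=on&wt=json&sort=' + sort)
    print(query)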



http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/database/src/create_product_view.sql
----------------------------------------------------------------------
diff --git a/src/main/database/src/create_product_view.sql b/src/main/database/src/create_product_view.sql
new file mode 100644
index 0000000..471e9c3
--- /dev/null
+++ b/src/main/database/src/create_product_view.sql
@@ -0,0 +1,635 @@
+--*********************************************************************************************
+--**  Product Model 
+--**
+--**  The product model comprises the following data models:
+--**
+--**     Product and Granule Model
+--**        - product_granule_view
+--**            - granule_imagery
+--**            - granule
+--**        - product_operation_view
+--**            - product
+--**            - product_operation
+--**
+--**     Product Archive Model
+--**        - product_meta_history_view
+--**           - product
+--**           - product_meta_history
+--**        - product_archive_view
+--**            - product_archive
+--**            - archive_view
+--**                - product_archive
+--**                - product_archive_reference
+--**        - product_reference_view
+--**           - product
+--**           - product_reference
+--**        - product_data_day_view
+--**           - product
+--**           - product_data_day
+--**
+--**     Product Contact Model
+--**        - product_contact_view
+--**            - product_contact
+--**            - contact_provider_view (see create_imagery_provider.sql)
+--**                 - contact 
+--**                 - provider
+--**                 - provider_resource_view
+--**                     - provider
+--**                     - provider_resource
+--**
+--**     Product Elements Model
+--**        - product_element_view
+--**            - product_element
+--**            - product_element_dd_view
+--**                - product_element
+--**                - element_dd
+--**        - product_datetime_view
+--**            - product
+--**            - product_datetime
+--**        - product_character_view
+--**            - product
+--**            - product_character
+--**        - product_integer_view
+--**            - product
+--**            - product_integer
+--**        - product_real_view
+--**            - product
+--**            - product_real
+--*********************************************************************************************
+
+
+--*********************************************************************************************
+-- Product and Granule Model
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_granule_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_granule_view CASCADE;
+CREATE VIEW product_granule_view AS
+SELECT
+
+   -- granule_imagery
+   granule_imagery.product_id as product_id,
+
+   -- granule
+   string_agg(granule.id::int8::text,         ',' order by granule.id) as product_granule_id_list,
+   string_agg(granule.version::int8::text,    ',' order by granule.id) as product_granule_version_list,
+   string_agg(granule.dataset_id::int8::text, ',' order by granule.id) as product_granule_dataset_id_list,
+   string_agg(granule.metadata_endpoint,      ',' order by granule.id) as product_granule_metadata_endpoint_list,
+   string_agg(granule.remote_granule_ur,      ',' order by granule.id) as product_granule_remote_granule_ur_list
+
+FROM granule_imagery
+LEFT JOIN granule ON granule.id = granule_imagery.granule_id
+GROUP BY granule_imagery.product_id;
+SELECT COUNT(*) AS product_granule_view_count FROM product_granule_view;
+SELECT * FROM product_granule_view LIMIT 5;
+
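+-- Illustrative usage (hypothetical product_id): the aggregated *_list columns
+-- are comma-delimited, so consumers split on ',' to recover per-granule values.
+--
+--   SELECT product_granule_id_list, product_granule_remote_granule_ur_list
+--   FROM product_granule_view
+--   WHERE product_id = 42;
+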
+---------------------------------------------------------------------------
+-- product_operation_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_operation_view CASCADE;
+CREATE VIEW product_operation_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_operation
+   string_agg(product_operation.version::int8::text,    ',' order by product_operation.id) as product_operation_version_list,
+   string_agg(product_operation.agent,                  ',' order by product_operation.id) as product_operation_agent_list,
+   string_agg(product_operation.operation,              ',' order by product_operation.id) as product_operation_list,
+   string_agg(product_operation.command,                ',' order by product_operation.id) as product_operation_command_list,
+   string_agg(product_operation.arguments,              ',' order by product_operation.id) as product_operation_arguments_list,
+   string_agg(product_operation.start_time::int8::text, ',' order by product_operation.id) as product_operation_start_time_list,
+   string_agg(product_operation.stop_time::int8::text,  ',' order by product_operation.id) as product_operation_stop_time_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp + ((product_operation.start_time/1000)::text)::interval)::timestamp::text,
+                                                        ',' order by product_operation.id) as product_operation_start_time_string_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp + ((product_operation.stop_time/1000)::text)::interval)::timestamp::text,
+                                                        ',' order by product_operation.id) as product_operation_stop_time_string_list
+FROM product
+LEFT JOIN product_operation ON product_operation.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_operation_view_count FROM product_operation_view;
+SELECT * FROM product_operation_view LIMIT 5;
+
+--*********************************************************************************************
+-- Product Archive Model 
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_meta_history_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_meta_history_view CASCADE;
+CREATE VIEW product_meta_history_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_meta_history
+   product_meta_history.version                     as product_meta_history_version,
+   product_meta_history.version_id                  as product_meta_history_version_id,
+   product_meta_history.revision_history            as product_meta_history_revision_history,
+   product_meta_history.last_revision_date          as product_meta_history_last_revision_date,
+   product_meta_history.creation_date               as product_meta_history_creation_date,
+   ('1970-01-01 00:00:00 GMT'::timestamp + ((product_meta_history.last_revision_date/1000)::text)::interval)
+                                                    as product_meta_history_last_revision_date_string,
+   ('1970-01-01 00:00:00 GMT'::timestamp + ((product_meta_history.creation_date/1000)::text)::interval)
+                                                    as product_meta_history_creation_date_string
+FROM product 
+LEFT JOIN product_meta_history ON product_meta_history.product_id = product.id
+GROUP BY product.id,
+         product_meta_history.version,
+         product_meta_history.version_id,
+         product_meta_history.revision_history,
+         product_meta_history.last_revision_date,
+         product_meta_history.creation_date;
+   
+SELECT COUNT(*) AS product_meta_history_view_count FROM product_meta_history_view;
+SELECT * FROM product_meta_history_view LIMIT 5;
+
+--------------------------------------------------
+-- product_archive_view
+--------------------------------------------------
+DROP VIEW IF EXISTS archive_view CASCADE;
+CREATE VIEW archive_view AS
+SELECT
+
+   -- product_archive
+   product_archive.id,
+   product_archive.product_id      as product_id,
+   product_archive.version         as version,     
+   product_archive.name            as name,     
+   product_archive.type            as type,     
+   product_archive.file_size       as file_size,     
+   product_archive.checksum        as checksum,     
+   product_archive.compress_flag   as compress_flag,     
+   product_archive.status          as status,     
+
+   -- product_archive_reference
+   string_agg(product_archive_reference.description, ';' order by product_archive_reference.id) as reference_descriptions,
+   string_agg(product_archive_reference.name,        ';' order by product_archive_reference.id) as reference_names,
+   string_agg(product_archive_reference.type,        ';' order by product_archive_reference.id) as reference_types,
+   string_agg(product_archive_reference.status,      ';' order by product_archive_reference.id) as reference_status
+
+FROM product_archive LEFT JOIN product_archive_reference ON product_archive_reference.product_archive_id = product_archive.id
+GROUP BY product_archive.id;
+SELECT COUNT(*) AS archive_view_count FROM archive_view;
+SELECT * FROM archive_view LIMIT 5;
+
+DROP VIEW IF EXISTS product_archive_view CASCADE;
+CREATE VIEW product_archive_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- archive_view
+   string_agg(archive_view.name,                         ',' order by archive_view.id) as product_archive_name_list,
+   string_agg(archive_view.type,                         ',' order by archive_view.id) as product_archive_type_list,
+   string_agg(archive_view.version::int8::text,          ',' order by archive_view.id) as product_archive_version_list,
+   string_agg(archive_view.file_size::int8::text,        ',' order by archive_view.id) as product_archive_file_size_list,
+   string_agg(archive_view.checksum,                     ',' order by archive_view.id) as product_archive_checksum_list,
+   string_agg(archive_view.compress_flag::boolean::text, ',' order by archive_view.id) as product_archive_compress_flag_list, 
+   string_agg(archive_view.status,                       ',' order by archive_view.id) as product_archive_status_list, 
+   string_agg(archive_view.reference_descriptions,       ',' order by archive_view.id) as product_archive_reference_description_list, 
+   string_agg(archive_view.reference_names,              ',' order by archive_view.id) as product_archive_reference_name_list, 
+   string_agg(archive_view.reference_types,              ',' order by archive_view.id) as product_archive_reference_type_list, 
+   string_agg(archive_view.reference_status,             ',' order by archive_view.id) as product_archive_reference_status_list
+FROM product
+LEFT JOIN archive_view ON archive_view.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_archive_view_count FROM product_archive_view;
+SELECT * FROM product_archive_view LIMIT 5;
+
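+-- Note the two delimiter levels above: product_archive_reference values are
+-- joined with ';' per archive (archive_view), then archives are joined with
+-- ',' per product, so a product with two archives could yield, e.g.,
+-- product_archive_reference_name_list = 'refA;refB,refC' (hypothetical names).
+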
+---------------------------------------------------------------------------
+-- product_reference_view
+---------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_reference_view;
+CREATE VIEW product_reference_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_reference
+   string_agg(product_reference.version::int8::text, ',' order by product_reference.id) as product_reference_version_list,
+   string_agg(product_reference.type,                ',' order by product_reference.id) as product_reference_type_list,
+   string_agg(product_reference.name,                ',' order by product_reference.id) as product_reference_name_list,
+   string_agg(product_reference.path,                ',' order by product_reference.id) as product_reference_path_list,
+   string_agg(product_reference.description,         ',' order by product_reference.id) as product_reference_description_list,
+   string_agg(product_reference.status,              ',' order by product_reference.id) as product_reference_status_list
+
+FROM product
+LEFT JOIN product_reference ON product_reference.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_reference_view_count FROM product_reference_view;
+SELECT * FROM product_reference_view LIMIT 5;
+
+--------------------------------------------------
+-- product_data_day_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_data_day_view CASCADE;
+CREATE VIEW product_data_day_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_data_day
+   string_agg(product_data_day.version::int8::text, ',' order by product_data_day.id) as product_data_day_version_list,
+   string_agg(product_data_day.data_day::int8::text, ',' order by product_data_day.id) as product_data_day_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp + ((product_data_day.data_day/1000)::text)::interval)::timestamp::text,
+               ',' order by product_data_day.id) as product_data_day_string_list
+FROM product 
+LEFT JOIN product_data_day ON product_data_day.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_data_day_view_count FROM product_data_day_view;
+SELECT * FROM product_data_day_view LIMIT 5;
+
+--*********************************************************************************************
+-- Product Contact Model
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_contact_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_contact_view CASCADE;
+CREATE VIEW product_contact_view AS
+SELECT
+
+   -- product_contact
+   product_contact.product_id as product_id,
+
+   -- contact_provider_view 
+   string_agg(contact_provider_view.contact_version::int8::text,                   
+              ',' order by contact_provider_view.contact_id) as product_contact_version_list,
+   string_agg(contact_provider_view.contact_role,                   
+              ',' order by contact_provider_view.contact_id) as product_contact_role_list,
+   string_agg(contact_provider_view.contact_first_name,             
+              ',' order by contact_provider_view.contact_id) as product_contact_first_name_list,
+   string_agg(contact_provider_view.contact_last_name,              
+              ',' order by contact_provider_view.contact_id) as product_contact_last_name_list,
+   string_agg(contact_provider_view.contact_middle_name,            
+              ',' order by contact_provider_view.contact_id) as product_contact_middle_name_list,
+   string_agg(contact_provider_view.contact_address,                
+              ',' order by contact_provider_view.contact_id) as product_contact_address_list,
+   string_agg(contact_provider_view.contact_notify_type,            
+              ',' order by contact_provider_view.contact_id) as product_contact_notify_type_list,
+   string_agg(contact_provider_view.contact_email,                  
+              ',' order by contact_provider_view.contact_id) as product_contact_email_list,
+   string_agg(contact_provider_view.contact_phone,                  
+              ',' order by contact_provider_view.contact_id) as product_contact_phone_list,
+   string_agg(contact_provider_view.contact_fax,                    
+              ',' order by contact_provider_view.contact_id) as product_contact_fax_list,
+   string_agg(contact_provider_view.provider_long_name,             
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_long_name_list,
+   string_agg(contact_provider_view.provider_short_name,            
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_short_name_list,
+   string_agg(contact_provider_view.provider_type,                  
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_type_list,
+   string_agg(contact_provider_view.provider_resource_description_list, 
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_resource_descriptions_list,
+   string_agg(contact_provider_view.provider_resource_name_list,        
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_resource_names_list,
+   string_agg(contact_provider_view.provider_resource_path_list,        
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_resource_paths_list,
+   string_agg(contact_provider_view.provider_resource_type_list,        
+              ',' order by contact_provider_view.contact_id) as product_contact_provider_resource_types_list
+
+FROM product_contact
+LEFT JOIN contact_provider_view ON contact_provider_view.contact_id = product_contact.contact_id
+GROUP BY product_contact.product_id;
+SELECT COUNT(*) AS product_contact_view_count FROM product_contact_view;
+SELECT * FROM product_contact_view LIMIT 5;
+
+--*********************************************************************************************
+-- Product Elements Model
+--*********************************************************************************************
+
+--------------------------------------------------
+-- product_element_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_element_dd_view CASCADE;
+CREATE VIEW product_element_dd_view AS
+SELECT
+
+   -- product_element
+   product_element.id,
+   product_element.product_id,
+   product_element.version                as product_element_version,
+   product_element.obligation_flag        as product_element_obligation_flag,
+   product_element.scope                  as product_element_scope,
+
+   -- element_dd
+   string_agg(element_dd.version::int8::text,    ';' order by element_dd.id) as product_element_dd_versions,
+   string_agg(element_dd.type,                   ';' order by element_dd.id) as product_element_dd_types,
+   string_agg(element_dd.description,            ';' order by element_dd.id) as product_element_dd_descriptions,
+   string_agg(element_dd.scope,                  ';' order by element_dd.id) as product_element_dd_scopes,
+   string_agg(element_dd.long_name,              ';' order by element_dd.id) as product_element_dd_long_names,
+   string_agg(element_dd.short_name,             ';' order by element_dd.id) as product_element_dd_short_names,
+   string_agg(element_dd.max_length::int8::text, ';' order by element_dd.id) as product_element_dd_max_lengths
+
+FROM product_element
+LEFT JOIN element_dd ON product_element.element_id = element_dd.id
+GROUP BY product_element.id;
+SELECT COUNT(*) AS product_element_dd_view_count FROM product_element_dd_view;
+SELECT * FROM product_element_dd_view LIMIT 5;
+
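+--------------------------------------------------
+-- product_element_view
+--------------------------------------------------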
+DROP VIEW IF EXISTS product_element_view CASCADE;
+CREATE VIEW product_element_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_element_dd_view
+   string_agg(product_element_dd_view.product_element_version::int8::text,
+              ',' order by product_element_dd_view.id) as product_element_version_list,
+   string_agg(product_element_dd_view.product_element_obligation_flag::boolean::text,
+              ',' order by product_element_dd_view.id) as product_element_obligation_flag_list,
+   string_agg(product_element_dd_view.product_element_scope,
+              ',' order by product_element_dd_view.id) as product_element_scope_list,
+   string_agg(product_element_dd_view.product_element_dd_versions,                   
+              ',' order by product_element_dd_view.id) as product_element_dd_version_list,
+   string_agg(product_element_dd_view.product_element_dd_types,                   
+              ',' order by product_element_dd_view.id) as product_element_dd_type_list,
+   string_agg(product_element_dd_view.product_element_dd_descriptions,            
+              ',' order by product_element_dd_view.id) as product_element_dd_description_list,
+   string_agg(product_element_dd_view.product_element_dd_scopes,                  
+              ',' order by product_element_dd_view.id) as product_element_dd_scope_list,
+   string_agg(product_element_dd_view.product_element_dd_long_names,              
+              ',' order by product_element_dd_view.id) as product_element_dd_long_name_list,
+   string_agg(product_element_dd_view.product_element_dd_short_names,             
+              ',' order by product_element_dd_view.id) as product_element_dd_short_name_list,
+   string_agg(product_element_dd_view.product_element_dd_max_lengths, 
+              ',' order by product_element_dd_view.id) as product_element_dd_max_length_list
+
+FROM product
+LEFT JOIN product_element_dd_view ON product_element_dd_view.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_element_view_count FROM product_element_view;
+SELECT * FROM product_element_view LIMIT 5;
+
+--------------------------------------------------
+-- product_datetime_view
+--------------------------------------------------
+
+DROP VIEW IF EXISTS product_datetime_view CASCADE;
+CREATE VIEW product_datetime_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_datetime
+   string_agg(product_datetime.version::int8::text,
+              ',' order by product_datetime.id) as product_datetime_version_list,
+   string_agg(product_datetime.value_long::int8::text,
+              ',' order by product_datetime.id) as product_datetime_value_list,
+   string_agg(('1970-01-01 00:00:00 GMT'::timestamp +
+              ((product_datetime.value_long/1000)::text)::interval)::timestamp::text,
+              ',' order by product_datetime.id) as product_datetime_value_string_list
+
+FROM product
+LEFT JOIN product_datetime ON product_datetime.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_datetime_view_count FROM product_datetime_view;
+SELECT * FROM product_datetime_view LIMIT 5;
+
+--------------------------------------------------
+-- product_character_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_character_view CASCADE;
+CREATE VIEW product_character_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_character
+   string_agg(product_character.version::int8::text,
+              ',' order by product_character.id) as product_character_version_list,
+   string_agg(product_character.value,
+              ',' order by product_character.id) as product_character_value_list
+
+FROM product
+LEFT JOIN product_character ON product_character.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_character_view_count FROM product_character_view;
+SELECT * FROM product_character_view LIMIT 5;
+
+--------------------------------------------------
+-- product_integer_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_integer_view CASCADE;
+CREATE VIEW product_integer_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_integer
+   string_agg(product_integer.version::int8::text,
+              ',' order by product_integer.id) as product_integer_version_list,
+   string_agg(product_integer.units,
+              ',' order by product_integer.id) as product_integer_units_list,
+   string_agg(product_integer.value::int::text,
+              ',' order by product_integer.id) as product_integer_value_list
+
+FROM product
+LEFT JOIN product_integer ON product_integer.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_integer_view_count FROM product_integer_view;
+SELECT * FROM product_integer_view LIMIT 5;
+
+--------------------------------------------------
+-- product_real_view
+--------------------------------------------------
+DROP VIEW IF EXISTS product_real_view CASCADE;
+CREATE VIEW product_real_view AS
+SELECT
+
+   -- product
+   product.id as product_id,
+
+   -- product_real
+   string_agg(product_real.version::int8::text,
+              ',' order by product_real.id) as product_real_version_list,
+   string_agg(product_real.units,
+              ',' order by product_real.id) as product_real_units_list,
+   string_agg(product_real.value::numeric::text,
+              ',' order by product_real.id) as product_real_value_list
+
+FROM product
+LEFT JOIN product_real ON product_real.product_id = product.id
+GROUP BY product.id;
+SELECT COUNT(*) AS product_real_view_count FROM product_real_view;
+SELECT * FROM product_real_view LIMIT 5;
+
+
+--*********************************************************************************************
+-- Product
+--*********************************************************************************************
+
+-----------------------------------------------------------------------------------------------
+-- product_view
+-----------------------------------------------------------------------------------------------
+DROP VIEW IF EXISTS product_view CASCADE;
+CREATE VIEW product_view AS 
+SELECT 
+
+   -- product
+   product.id,
+   product.id as product_id, 
+   product.pt_id        as product_pt_id,
+   product.partial_id   as product_partial_id,
+   product.version      as product_version,
+   product.revision     as product_revision,
+   product.name         as product_name,
+   product.rel_path     as product_rel_path,
+   product.root_path    as product_root_path,
+   product.status       as product_status,
+   product.start_time   as product_start_time,
+   product.stop_time    as product_stop_time,
+   product.create_time  as product_create_time,
+   product.archive_time as product_archive_time,
+   '1970-01-01 00:00:00 GMT'::timestamp + ((product.start_time/1000)::text)::interval   AS product_start_time_string,
+   '1970-01-01 00:00:00 GMT'::timestamp + ((product.stop_time/1000)::text)::interval    AS product_stop_time_string,
+   '1970-01-01 00:00:00 GMT'::timestamp + ((product.create_time/1000)::text)::interval  AS product_create_time_string,
+   '1970-01-01 00:00:00 GMT'::timestamp + ((product.archive_time/1000)::text)::interval AS product_archive_time_string,
+
+   -- product_granule_view
+   product_granule_id_list,
+   product_granule_version_list,
+   product_granule_dataset_id_list,
+   product_granule_metadata_endpoint_list,
+   product_granule_remote_granule_ur_list,
+
+   -- product_operation_view
+   product_operation_version_list,
+   product_operation_agent_list,
+   product_operation_list,
+   product_operation_command_list,
+   product_operation_arguments_list,
+   product_operation_start_time_list,
+   product_operation_stop_time_list,
+   product_operation_start_time_string_list,
+   product_operation_stop_time_string_list,
+
+   -- product_meta_history_view
+   product_meta_history_version,
+   product_meta_history_version_id,
+   product_meta_history_revision_history,
+   product_meta_history_last_revision_date,
+   product_meta_history_creation_date,
+   product_meta_history_last_revision_date_string,
+   product_meta_history_creation_date_string,
+
+   -- product_archive_view
+   product_archive_name_list,
+   product_archive_type_list,
+   product_archive_version_list,
+   product_archive_file_size_list,
+   product_archive_checksum_list,
+   product_archive_compress_flag_list,
+   product_archive_status_list,
+   product_archive_reference_description_list,
+   product_archive_reference_name_list,
+   product_archive_reference_type_list,
+   product_archive_reference_status_list,
+
+   -- product_reference_view
+   product_reference_version_list,
+   product_reference_type_list,
+   product_reference_name_list,
+   product_reference_path_list,
+   product_reference_description_list,
+   product_reference_status_list,
+
+   -- product_data_day
+   product_data_day_version_list,
+   product_data_day_list,
+   product_data_day_string_list,
+
+   -- product_contact_view
+   product_contact_role_list,
+   product_contact_version_list,
+   product_contact_first_name_list,
+   product_contact_last_name_list,
+   product_contact_middle_name_list,
+   product_contact_address_list,
+   product_contact_notify_type_list,
+   product_contact_email_list,
+   product_contact_phone_list,
+   product_contact_fax_list,
+   product_contact_provider_long_name_list,
+   product_contact_provider_short_name_list,
+   product_contact_provider_type_list,
+   product_contact_provider_resource_descriptions_list,
+   product_contact_provider_resource_names_list,
+   product_contact_provider_resource_paths_list,
+   product_contact_provider_resource_types_list,
+
+   -- product_element_view
+   product_element_version_list,
+   product_element_obligation_flag_list,
+   product_element_scope_list,
+   product_element_dd_version_list,
+   product_element_dd_type_list,
+   product_element_dd_description_list,
+   product_element_dd_scope_list,
+   product_element_dd_long_name_list,
+   product_element_dd_short_name_list,
+   product_element_dd_max_length_list,
+
+   -- product_datetime_view
+   product_datetime_version_list,
+   product_datetime_value_list,
+   product_datetime_value_string_list,
+
+   -- product_character_view
+   product_character_version_list,
+   product_character_value_list,
+
+   -- product_integer_view
+   product_integer_version_list,
+   product_integer_value_list,
+   product_integer_units_list,
+
+   -- product_real_view
+   product_real_version_list,
+   product_real_value_list,
+   product_real_units_list
+
+FROM
+   product,
+   product_granule_view,
+   product_operation_view,
+   product_meta_history_view,
+   product_archive_view,
+   product_reference_view,
+   product_data_day_view,
+   product_contact_view,
+   product_element_view,
+   product_datetime_view,
+   product_character_view,
+   product_integer_view,
+   product_real_view
+WHERE
+   product.id = product_granule_view.product_id AND
+   product.id = product_operation_view.product_id AND
+   product.id = product_meta_history_view.product_id AND
+   product.id = product_archive_view.product_id AND
+   product.id = product_reference_view.product_id AND
+   product.id = product_data_day_view.product_id AND
+   product.id = product_contact_view.product_id AND
+   product.id = product_element_view.product_id AND
+   product.id = product_datetime_view.product_id AND
+   product.id = product_character_view.product_id AND
+   product.id = product_integer_view.product_id AND
+   product.id = product_real_view.product_id;
+
+SELECT COUNT(*) AS product_view_count FROM product_view;
+SELECT * FROM product_view LIMIT 5;
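
The view flattens every product-related table into one row per product, with
multi-valued attributes aggregated into comma-separated lists. A minimal
sketch of reading it from Python, assuming a PostgreSQL database (string_agg
is PostgreSQL syntax) and the psycopg2 driver, neither of which ships with
this commit; the DSN is a placeholder:

    import psycopg2

    conn = psycopg2.connect('dbname=edge user=edge host=localhost')
    cur = conn.cursor()
    # One row per product; list columns hold the comma-joined string_agg output.
    cur.execute('SELECT product_id, product_name, product_reference_path_list '
                'FROM product_view LIMIT 5')
    for product_id, name, ref_paths in cur.fetchall():
        # Split an aggregated list back into individual reference paths.
        paths = ref_paths.split(',') if ref_paths else []
        print product_id, name, len(paths)
    cur.close()
    conn.close()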

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/config.conf
----------------------------------------------------------------------
diff --git a/src/main/python/config.conf b/src/main/python/config.conf
new file mode 100644
index 0000000..194b517
--- /dev/null
+++ b/src/main/python/config.conf
@@ -0,0 +1,3 @@
+[server]
+port=8890
+host=localhost
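
A minimal sketch of reading this file with the Python 2 standard library,
matching the get/getint accessors the response writers below call on their
configuration object (the file path is a placeholder):

    import ConfigParser

    config = ConfigParser.ConfigParser()
    config.read('config.conf')
    host = config.get('server', 'host')     # 'localhost'
    port = config.getint('server', 'port')  # 8890
    print 'serving on %s:%d' % (host, port)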

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/edge-env.bash
----------------------------------------------------------------------
diff --git a/src/main/python/edge-env.bash b/src/main/python/edge-env.bash
new file mode 100644
index 0000000..8ddcd34
--- /dev/null
+++ b/src/main/python/edge-env.bash
@@ -0,0 +1,7 @@
+#!/bin/bash
+
+if [ -n "$PYTHONPATH" ]; then
+    export PYTHONPATH=${PYTHONPATH}:${PWD}/libraries
+else
+    export PYTHONPATH=${PWD}/libraries
+fi

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/edge-env.csh
----------------------------------------------------------------------
diff --git a/src/main/python/edge-env.csh b/src/main/python/edge-env.csh
new file mode 100644
index 0000000..1c42c52
--- /dev/null
+++ b/src/main/python/edge-env.csh
@@ -0,0 +1,7 @@
+#!/bin/csh
+
+if $?PYTHONPATH then
+    setenv PYTHONPATH ${PYTHONPATH}:${PWD}/libraries
+else
+    setenv PYTHONPATH ${PWD}/libraries
+endif
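
Both environment scripts must be sourced into the current shell (for example
"source edge-env.bash", or "source edge-env.csh" under csh) rather than
executed; a PYTHONPATH exported from a child process would not survive the
script's exit.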

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/__init__.py b/src/main/python/libraries/edge/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/dateutility.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/dateutility.py b/src/main/python/libraries/edge/dateutility.py
new file mode 100644
index 0000000..a0e519d
--- /dev/null
+++ b/src/main/python/libraries/edge/dateutility.py
@@ -0,0 +1,57 @@
+from datetime import datetime, timedelta
+import dateutil.parser
+import calendar
+
+class DateUtility(object):
+    """Utility class for date and time conversion."""
+
+    RFC_822_GMT_FORMAT = "%a, %d %b %Y %H:%M:%S GMT"
+    
+    @staticmethod
+    def convertTimeLongToIso(time):
+        isoTime = ''
+        try:
+            isoTime = datetime.utcfromtimestamp(float(time) / 1000).isoformat() + 'Z'
+        except ValueError:
+            pass
+        return isoTime
+    
+    @staticmethod
+    def convertISOToUTCTimestamp(isoTime):
+        try:
+            #parse ISO date to datetime object
+            dt = dateutil.parser.parse(isoTime)
+            
+            #return timestamp in milliseconds
+            return calendar.timegm(dt.utctimetuple()) * 1000
+        except:
+            return None
+    
+    @staticmethod
+    def pastDateRFC822(hoursAgo):
+        return (datetime.utcnow() - timedelta(hours=hoursAgo)).strftime(DateUtility.RFC_822_GMT_FORMAT)
+    
+    @staticmethod
+    def convertTimeLongToRFC822(time):
+        return DateUtility.convertTimeLong(time, DateUtility.RFC_822_GMT_FORMAT)
+    
+    @staticmethod
+    def convertTimeLong(time, format):
+        strTime = ''
+        try:
+            strTime = datetime.utcfromtimestamp(float(time) / 1000).strftime(format)
+        except ValueError:
+            pass
+        return strTime
+
+    @staticmethod
+    def convertISOTime(isoTime, format):
+        try:
+            #parse ISO date to datetime object
+            dt = dateutil.parser.parse(isoTime)
+            
+            #return timestamp in specified format
+            return dt.strftime(format)
+        except:
+            return None
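
The conversions are static methods with no side effects; a few illustrative
calls, assuming the libraries directory is on PYTHONPATH per the edge-env
scripts above:

    from edge.dateutility import DateUtility

    # Epoch milliseconds to ISO 8601.
    DateUtility.convertTimeLongToIso(0)               # '1970-01-01T00:00:00Z'
    # ISO 8601 back to epoch milliseconds.
    DateUtility.convertISOToUTCTimestamp('1970-01-01T00:00:00Z')  # 0
    # Unparseable input is swallowed rather than raised.
    DateUtility.convertTimeLongToIso('not a number')  # ''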

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/__init__.py b/src/main/python/libraries/edge/elasticsearch/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/datasetwriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/datasetwriter.py b/src/main/python/libraries/edge/elasticsearch/datasetwriter.py
new file mode 100644
index 0000000..34dd063
--- /dev/null
+++ b/src/main/python/libraries/edge/elasticsearch/datasetwriter.py
@@ -0,0 +1,192 @@
+import json
+import logging
+import urllib
+
+from edge.dateutility import DateUtility
+from edge.httputility import HttpUtility
+from edge.opensearch.responsewriter import ResponseWriter
+
+class DatasetWriter(ResponseWriter):
+    def __init__(self, configFilePath):
+        super(DatasetWriter, self).__init__(configFilePath)
+        self.datasets = []
+
+    def get(self, requestHandler):
+        super(DatasetWriter, self).get(requestHandler)
+        #searchParameters = {}
+        #logging.debug('uri: '+str(requestHandler.request.headers))
+
+        startIndex = 0
+        try:
+            startIndex = requestHandler.get_argument('startIndex')
+        except:
+            pass
+        self.searchParameters['startIndex'] = startIndex
+
+        entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            #cap entries per page at 400
+            if (int(entriesPerPage) > 400):
+                entriesPerPage = 400
+        except:
+            pass
+        self.searchParameters['itemsPerPage'] = entriesPerPage
+
+        #pretty = True
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'false':
+                self.pretty = False
+                self.searchParameters['pretty'] = 'false'
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('full').lower() == 'true':
+                self.searchParameters['full'] = 'true'
+        except:
+            pass
+        
+        try:
+            self.searchParameters['format'] = requestHandler.get_argument('format')
+        except:
+            pass
+
+        parameters = ['startTime', 'endTime', 'keyword', 'identifier', 'shortName', 'instrument', 'platform', 'fileFormat', 'status', 'processLevel', 'sortBy', 'bbox', 'allowNone']
+        #variables = {}
+        for parameter in parameters:
+            try:
+                value = requestHandler.get_argument(parameter)
+                self.variables[parameter] = value
+                self.searchParameters[parameter] = value
+            except:
+                pass
+
+        if 'keyword' in self.variables:
+            self.variables['keyword'] = self.variables['keyword'].replace('*', '')
+            self.variables['keyword'] = self.variables['keyword'].lower()
+        """
+        else:
+            variables['keyword'] = '""'
+        """
+        #If generating OpenSearch response, need to make additional call to solr
+        #to determine which datasets have granules
+        try:
+            if 'search' in requestHandler.request.path:
+                callback = self._getHasGranuleResponseCallback(startIndex, entriesPerPage)
+                self._getHasGranuleResponse(callback)
+            else:
+                self._getResponse(startIndex, entriesPerPage, self.variables)
+        except:
+            logging.exception('Failed to get solr response.')
+        """
+        searchText = ''
+        if 'keyword' in variables:
+            searchText = variables['keyword']
+        openSearchResponse = self._generateOpenSearchResponse(
+            solrResponse,
+            searchText,
+            self._configuration.get('service', 'url') + requestHandler.request.path,
+            searchParameters,
+            pretty
+        )
+
+        requestHandler.set_header("Content-Type", "application/xml")
+        #requestHandler.set_header("Content-Type", "application/rss+xml")
+        #requestHandler.write(solrResponse)
+        requestHandler.write(openSearchResponse)
+        """
+
+    def _getResponse(self, startIndex, entriesPerPage, variables):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, variables)
+        url = self._configuration.get('solr', 'datasetUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/_search/?'+query, self._onSolrResponse)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, variables):
+        queries = []
+        sort = None
+        filterQuery = None
+        for key, value in variables.iteritems():
+            #query = ''
+            if key == 'startTime':
+                startTime = DateUtility.convertISOToUTCTimestamp(value)
+                if startTime is not None:
+                    query = 'stop_time:'
+                    query += '['+str(startTime)+'%20TO%20*]'
+                    queries.append(query)
+            elif key == 'endTime':
+                stopTime = DateUtility.convertISOToUTCTimestamp(value)
+                if stopTime is not None:
+                    query = 'start_time:'
+                    query += '[*%20TO%20'+str(stopTime)+']'
+                    queries.append(query)
+            elif key == 'keyword':
+                newValue = urllib.quote(value)
+
+                query = newValue
+                queries.append(query)
+            elif key == 'identifier':
+                query = 'identifier:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'platform':
+                query = 'platform:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'instrument':
+                query = 'instrument:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'fileFormat':
+                query = 'DatasetPolicy-DataFormat-LowerCased:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'status':
+                query = 'DatasetPolicy-AccessType-LowerCased:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'processLevel':
+                query = 'Dataset-ProcessingLevel-LowerCased:'+value
+                queries.append(query)
+            elif key == 'sortBy':
+                sortByMapping = {'timeDesc': 'start_time:desc', 'timeAsc': 'start_time:asc'}
+                if value in sortByMapping.keys():
+                    sort = sortByMapping[value]
+            elif key == 'bbox':
+                filterQuery = self._constructBoundingBoxQuery(value)
+
+            #if query != '':
+            #    queries.append('%2B'+query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&from='+str(startIndex)+'&size='+str(entriesPerPage)
+        if sort is not None:
+            query += '&sort=' + sort
+        if filterQuery is not None:
+            query += '&' + filterQuery
+        logging.debug('solr query: '+query)
+        
+        return query
+    
+    def _getHasGranuleResponse(self, callback):
+        url = self._configuration.get('solr', 'granuleUrl')
+
+        httpUtility = HttpUtility()
+        return httpUtility.getResponse(url+'/_search', callback, '{"query" : {"match_all" : {}}, "size" : 0, "facets" : { "identifier" : { "terms" : {"field" : "identifier"}}}}')
+    
+    def _getHasGranuleResponseCallback(self, startIndex, entriesPerPage):
+        def onSolrHasGranuleResponse(response):
+            try:
+                solrJson = json.loads(response.body)
+                logging.debug("Got response for dataset facet")
+                facets = solrJson['facets']['identifier']['terms']
+                self.datasets = [facet['term'] for facet in facets]
+                self._getResponse(startIndex, entriesPerPage, self.variables)
+            except:
+                logging.exception('Failed to get solr response.')
+        return onSolrHasGranuleResponse
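
For a concrete sense of the URI-style query string this writer appends to the
Elasticsearch /_search endpoint, a hypothetical call (argument values are
illustrative only; dict iteration order may swap the clauses):

    variables = {'startTime': '2012-01-01T00:00:00Z', 'processLevel': '2'}
    writer._constructSolrQuery(0, 10, variables)
    # 'q=stop_time:[1325376000000%20TO%20*]+AND+'
    # 'Dataset-ProcessingLevel-LowerCased:2&from=0&size=10'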

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/granulewriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/granulewriter.py b/src/main/python/libraries/edge/elasticsearch/granulewriter.py
new file mode 100644
index 0000000..d999fe1
--- /dev/null
+++ b/src/main/python/libraries/edge/elasticsearch/granulewriter.py
@@ -0,0 +1,142 @@
+import json
+import logging
+import urllib
+
+from edge.opensearch.responsewriter import ResponseWriter
+from edge.dateutility import DateUtility
+from edge.httputility import HttpUtility
+
+class GranuleWriter(ResponseWriter):
+    def __init__(self, configFilePath, requiredParams = None):
+        super(GranuleWriter, self).__init__(configFilePath, requiredParams)
+        self.startIndex = 0
+        self.entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+
+    def get(self, requestHandler):
+        super(GranuleWriter, self).get(requestHandler)
+        #searchParameters = {}
+        #logging.debug('uri: '+str(requestHandler.request.headers))
+        
+        #startIndex = 0
+        try:
+            self.startIndex = requestHandler.get_argument('startIndex')
+        except:
+            pass
+        self.searchParameters['startIndex'] = self.startIndex
+
+        #entriesPerPage = self._configuration.getint('solr', 'entriesPerPage')
+        try:
+            self.entriesPerPage = requestHandler.get_argument('itemsPerPage')
+            #cap entries per page at 400
+            if (int(self.entriesPerPage) > 400):
+                self.entriesPerPage = 400
+        except:
+            pass
+        self.searchParameters['itemsPerPage'] = self.entriesPerPage
+
+        #pretty = True
+        try:
+            if requestHandler.get_argument('pretty').lower() == 'false':
+                self.pretty = False
+                self.searchParameters['pretty'] = 'false'
+        except:
+            pass
+
+        try:
+            if requestHandler.get_argument('full').lower() == 'true':
+                self.searchParameters['full'] = 'true'
+        except:
+            pass
+        
+        try:
+            self.searchParameters['format'] = requestHandler.get_argument('format')
+        except:
+            pass
+
+        parameters = ['startTime', 'endTime', 'keyword', 'name', 'identifier', 'shortName', 'bbox', 'sortBy']
+        #variables = {}
+        for parameter in parameters:
+            try:
+                value = requestHandler.get_argument(parameter)
+                self.variables[parameter] = value
+                self.searchParameters[parameter] = value
+            except:
+                pass
+
+        if 'keyword' in self.variables:
+            self.variables['keyword'] = self.variables['keyword'].replace('*', '')
+            self.variables['keyword'] = self.variables['keyword'].lower()
+
+        try:
+            self._getSolrResponse(self.startIndex, self.entriesPerPage, self.variables)
+        except:
+            logging.exception('Failed to get solr response.')
+
+    def _getSolrResponse(self, startIndex, entriesPerPage, variables):
+        query = self._constructSolrQuery(startIndex, entriesPerPage, variables)
+        url = self._configuration.get('solr', 'granuleUrl')
+
+        httpUtility = HttpUtility()
+        httpUtility.getResponse(url+'/_search', self._onSolrResponse, query)
+
+    def _constructSolrQuery(self, startIndex, entriesPerPage, variables):
+        #set default sort order
+        sort='desc'
+        filterQuery = None
+        queries = []
+        for key, value in variables.iteritems():
+            #query = ''
+            if key == 'startTime':
+                startTime = DateUtility.convertISOToUTCTimestamp(value)
+                if startTime is not None:
+                    query = 'stop_time:'
+                    query += '['+str(startTime)+' TO *]'
+                    queries.append(query)
+            elif key == 'endTime':
+                stopTime = DateUtility.convertISOToUTCTimestamp(value)
+                if stopTime is not None:
+                    query = 'start_time:'
+                    query += '[* TO '+str(stopTime)+']'
+                    queries.append(query)
+            elif key == 'keyword':
+                newValue = urllib.quote(value)
+
+                query = 'SearchableText-LowerCased:('+newValue+')'
+                queries.append(query)
+            elif key == 'identifier':
+                query = 'identifier:"'+value+'"'
+                queries.append(query)
+            elif key == 'shortName':
+                query = 'Dataset-ShortName-Full:'+self._urlEncodeSolrQueryValue(value)
+                queries.append(query)
+            elif key == 'name':
+                query = 'name:"'+value+'"'
+                queries.append(query)
+            elif key == 'granuleIds':
+                granuleIds = []
+                for granuleId in value:
+                    granuleIds.append(str(granuleId))
+                query = 'Granule-Id:('+'+OR+'.join(granuleIds)+')'
+                queries.append(query)
+
+                startIndex = 0
+            elif key == 'sortBy':
+                sortByMapping = {'timeAsc': 'asc'}
+                if value in sortByMapping.keys():
+                    sort = sortByMapping[value]
+            elif key == 'bbox':
+                filterQuery = self._constructBoundingBoxQuery(value)
+            #if query != '':
+            #    queries.append('%2B'+query)
+
+        if len(queries) == 0:
+            queries.append('*')
+
+        # Solr-style query string, assembled only for the debug log below;
+        # the Elasticsearch body returned afterwards does not use it, so the
+        # bbox filterQuery is never applied to the request.
+        query = 'q='+'+AND+'.join(queries)+'&from='+str(startIndex)+'&size='+str(entriesPerPage)
+        if filterQuery is not None:
+            query += '&' + filterQuery
+        logging.debug('solr query: '+query)
+
+        # Elasticsearch request body: the query string filtered to online
+        # granules, with paging and start_time ordering.
+        return json.dumps({'query' : {'filtered' : { 'query' : {'query_string' : {'query' : ' AND '.join(queries)}}, 'filter' : {'term' : {'status' : 'online'}}}}, 'from' : startIndex, 'size' : entriesPerPage, 'sort' : [{'start_time' : {'order' : sort}}]})

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/opensearch/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/opensearch/__init__.py b/src/main/python/libraries/edge/elasticsearch/opensearch/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/opensearch/atomresponsebyelasticsearch.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/opensearch/atomresponsebyelasticsearch.py b/src/main/python/libraries/edge/elasticsearch/opensearch/atomresponsebyelasticsearch.py
new file mode 100644
index 0000000..a79c9c6
--- /dev/null
+++ b/src/main/python/libraries/edge/elasticsearch/opensearch/atomresponsebyelasticsearch.py
@@ -0,0 +1,87 @@
+import json
+import urllib
+
+from edge.opensearch.atomresponse import AtomResponse
+from collections import defaultdict
+
+class AtomResponseByElasticsearch(AtomResponse):
+    def __init__(self):
+        super(AtomResponseByElasticsearch, self).__init__()
+        self.addNamespace("gibs", "http://gibs.jpl.nasa.gov/opensearch/")
+
+    def generate(self, response, pretty=False):
+        self._populate(response)
+        return super(AtomResponseByElasticsearch, self).generate(pretty)
+
+    def _populate(self, response):
+        self._populateChannel(response)
+
+        if response is None:
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': 1}
+            )
+            self.parameters['startIndex'] = 0
+            url = self.link + '?' + urllib.urlencode(self.parameters)
+            self.variables.append({'name': 'link', 'attribute': {'href': url, 'rel': 'self', 'type': 'application/atom+xml'}})
+            self.variables.append({'name': 'link', 'attribute': {'href': url, 'rel': 'first', 'type': 'application/atom+xml'}})
+            item = [
+                {'name': 'title', 'value': 'Error'},
+                {'name': 'content', 'value': 'error'}
+            ]
+            self.items.append(item)
+        else:
+            #logging.debug(response)
+            jsonResponse = json.loads(response)
+            numFound = int(jsonResponse['hits']['total'])
+            start = int(self.parameters['startIndex'])
+            rows = int(self.parameters['itemsPerPage'])
+
+            self.parameters['startIndex'] = start
+            self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'self', 'type': 'application/atom+xml'}})
+            self.parameters['startIndex'] = 0
+            self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'first', 'type': 'application/atom+xml'}})
+            if start > 0:
+                if (start - rows > 0):
+                    self.parameters['startIndex'] = start - rows
+                self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'previous', 'type': 'application/atom+xml'}})
+            if start + rows < numFound:
+                self.parameters['startIndex'] = start + rows
+                self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'next', 'type': 'application/atom+xml'}})
+            
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': numFound}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': start}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': rows}
+            )
+
+            for doc in jsonResponse['hits']['hits']:
+                item = []
+                self._populateItem(response, doc, item)
+                self.items.append(item)
+
+    def _populateChannel(self, response):
+        pass
+
+    def _populateItem(self, response, doc, item):
+        pass
+    
+    def _populateItemWithAllMetadata(self, doc, item):
+        for docKey in doc.keys():
+            if isinstance(doc[docKey], list):
+                for child in doc[docKey]:
+                    childItem = []
+                    for childKey in child.keys():
+                        childItem.append({'namespace': 'gibs', 'name': childKey, 'value': child[childKey]})
+                    item.append({'namespace': 'gibs', 'name': docKey, 'value': childItem})
+            else:
+                item.append({'namespace': 'gibs', 'name': docKey, 'value': doc[docKey]})
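
For illustration, _populateItemWithAllMetadata flattens an Elasticsearch
_source document into gibs-namespaced elements; a hypothetical doc such as

    doc = {'identifier': 'ABC',
           'references': [{'path': '/data/a.nc', 'type': 'LOCAL'}]}

produces entry items that AtomResponse serializes roughly as (element order
follows dict iteration order):

    <gibs:identifier>ABC</gibs:identifier>
    <gibs:references>
        <gibs:path>/data/a.nc</gibs:path>
        <gibs:type>LOCAL</gibs:type>
    </gibs:references>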

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/opensearch/datasetatomresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/opensearch/datasetatomresponse.py b/src/main/python/libraries/edge/elasticsearch/opensearch/datasetatomresponse.py
new file mode 100644
index 0000000..a8d10d8
--- /dev/null
+++ b/src/main/python/libraries/edge/elasticsearch/opensearch/datasetatomresponse.py
@@ -0,0 +1,79 @@
+import logging
+import datetime
+import urllib
+
+from edge.elasticsearch.opensearch.atomresponsebyelasticsearch import AtomResponseByElasticsearch
+from edge.dateutility import DateUtility
+
+class DatasetAtomResponse(AtomResponseByElasticsearch):
+    def __init__(self, portalUrl, host, url, datasets):
+        super(DatasetAtomResponse, self).__init__()
+        self.portalUrl = portalUrl
+        self.host = host
+        self.url = url
+        self.datasets = datasets
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-granule-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        persistentId = doc['_source']['identifier']
+        idTuple = ('identifier', persistentId)
+        """
+        if persistentId == '':
+            idTuple = ('shortName', doc['Dataset-ShortName'][0])
+        """
+        item.append({'name': 'title', 'value': doc['_source']['title']})
+        item.append({'name': 'content', 'value': doc['_source']['description']})
+        
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.searchBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('full', 'true')])), 'rel': 'enclosure', 'type': 'application/atom+xml', 'title': 'GIBS Metadata' }})
+        """
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'iso')])), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'ISO-19115 Metadata' }})
+        item.append({'name': 'link', 'attribute': {'href': self.url + self.metadataBasePath + 'dataset?' + urllib.urlencode(dict([idTuple, ('format', 'gcmd')])), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'GCMD Metadata' }})
+        """
+        #Only generate granule search link if dataset has granules
+        if (doc['_source']['identifier'].lower() in self.datasets):
+            supportedGranuleParams = dict([(key,value) for key,value in self.parameters.iteritems() if key in ['bbox', 'startTime', 'endTime']])
+            supportedGranuleParams['identifier'] = persistentId
+            item.append({'name': 'link', 'attribute': {'href': self.url + self.searchBasePath + 'granule?' + urllib.urlencode(supportedGranuleParams), 'rel': 'search', 'type': 'application/atom+xml', 'title': 'Product Search' }})
+        """
+        if 'Dataset-ImageUrl' in doc and doc['Dataset-ImageUrl'][0] != '':
+            item.append({'name': 'link', 'attribute': {'href': doc['Dataset-ImageUrl'][0], 'rel': 'enclosure', 'type': 'image/jpg', 'title': 'Thumbnail' }})
+        
+        if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+            url = dict(zip(doc['DatasetLocationPolicy-Type'], doc['DatasetLocationPolicy-BasePath']))
+            if 'LOCAL-OPENDAP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['LOCAL-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            elif 'REMOTE-OPENDAP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['REMOTE-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            if 'LOCAL-FTP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['LOCAL-FTP'], 'rel': 'enclosure', 'type': 'text/plain', 'title': 'FTP URL' }})
+            elif 'REMOTE-FTP' in url:
+                item.append({'name': 'link', 'attribute': {'href': url['REMOTE-FTP'], 'rel': 'enclosure', 'type': 'text/plain', 'title': 'FTP URL' }})
+        if doc['DatasetPolicy-ViewOnline'][0] == 'Y' and doc['DatasetPolicy-AccessType-Full'][0] in ['OPEN', 'PREVIEW', 'SIMULATED', 'REMOTE']:
+            portalUrl = self.portalUrl+'/'+doc['Dataset-ShortName'][0]
+            item.append({'name': 'link', 'attribute': {'href': portalUrl, 'rel': 'enclosure', 'type': 'text/html', 'title': 'Dataset Information' }})
+        updated = None
+        if 'DatasetMetaHistory-LastRevisionDateLong' in doc and doc['DatasetMetaHistory-LastRevisionDateLong'][0] != '':
+            updated = DateUtility.convertTimeLongToIso(doc['DatasetMetaHistory-LastRevisionDateLong'][0])
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'updated', 'value': updated})
+        """
+        item.append({'name': 'id', 'value': doc['_source']['identifier']})
+        """
+        item.append({'namespace': 'podaac', 'name': 'datasetId', 'value': doc['Dataset-PersistentId'][0]})
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        """
+        if doc['_source']['west_longitude'] is not None and doc['_source']['south_latitude'] is not None and doc['_source']['east_longitude'] is not None and doc['_source']['north_latitude'] is not None:
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([str(doc['_source']['west_longitude']), str(doc['_source']['south_latitude'])]) }, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([str(doc['_source']['east_longitude']), str(doc['_source']['north_latitude'])])}]}})
+        
+        if 'start_time' in doc['_source'] and doc['_source']['start_time'] is not None:
+            item.append({'namespace': 'time', 'name': 'start', 'value': DateUtility.convertTimeLongToIso(doc['_source']['start_time'])})
+        
+        if 'stop_time' in doc['_source'] and doc['_source']['stop_time'] is not None:
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['_source']['stop_time'])})
+        
+        if 'full' in self.parameters and self.parameters['full']:
+            self._populateItemWithAllMetadata(doc['_source'], item)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/elasticsearch/opensearch/granuleatomresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/elasticsearch/opensearch/granuleatomresponse.py b/src/main/python/libraries/edge/elasticsearch/opensearch/granuleatomresponse.py
new file mode 100644
index 0000000..a4d8cb7
--- /dev/null
+++ b/src/main/python/libraries/edge/elasticsearch/opensearch/granuleatomresponse.py
@@ -0,0 +1,78 @@
+import datetime
+import urllib
+
+from edge.elasticsearch.opensearch.atomresponsebyelasticsearch import AtomResponseByElasticsearch
+from edge.dateutility import DateUtility
+
+class GranuleAtomResponse(AtomResponseByElasticsearch):
+    def __init__(self, linkToGranule, host, url):
+        super(GranuleAtomResponse, self).__init__()
+
+        self.linkToGranule = linkToGranule.split(',')
+        self.host = host
+        self.url = url
+
+    def _populateChannel(self, solrResponse):
+        self.variables.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath+'podaac-dataset-osd.xml', 'rel': 'search', 'type': 'application/opensearchdescription+xml' }})
+
+    def _populateItem(self, solrResponse, doc, item):
+        item.append({'name': 'title', 'value': doc['_source']['name']})
+        #item.append({'name': 'content', 'value': doc['Granule-Name'][0]})
+        
+        updated = None
+        startTime = None
+        if 'start_time' in doc['_source'] and doc['_source']['start_time'] is not None:
+            updated = DateUtility.convertTimeLongToIso(doc['_source']['start_time'])
+            startTime = updated
+        else:
+            updated = datetime.datetime.utcnow().isoformat()+'Z'
+        
+        item.append({'name': 'updated', 'value': updated})
+        item.append({'name': 'id', 'value': doc['_source']['identifier'] + ':' + doc['_source']['name']})
+        
+        parameters = {'identifier': doc['_source']['identifier'], 'name': doc['_source']['name']}
+        parameters['full'] = 'true'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.searchBasePath + 'granule?' + urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'application/atom+xml', 'title': 'GIBS Metadata' }})
+        del parameters['full']
+        '''
+        parameters['format'] = 'iso'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'ISO-19115 Metadata' }})
+        parameters['format'] = 'fgdc'
+        item.append({'name': 'link', 'attribute': {'href': self.url+self.metadataBasePath + 'granule?' +  urllib.urlencode(parameters), 'rel': 'enclosure', 'type': 'text/xml', 'title': 'FGDC Metadata' }})
+        
+        #item.append({'name': 'description', 'value': doc['Dataset-Description'][0]})
+        #item.append({'name': 'link', 'value': self.portalUrl+'/'+doc['Dataset-ShortName'][0]})
+        #link = self._getLinkToGranule(doc)
+        #if link['href'] is not None:
+        #    item.append({'name': 'link', 'attribute': link})
+        if 'GranuleReference-Type' in doc:
+            if 'Granule-DataFormat' in doc:
+                type = 'application/x-' + doc['Granule-DataFormat'][0].lower()
+            else:
+                type = 'text/plain'
+            #Look for ONLINE reference only
+            granuleRefDict = dict([(doc['GranuleReference-Type'][i], doc['GranuleReference-Path'][i]) for i,x in enumerate(doc['GranuleReference-Status']) if x=="ONLINE"])
+            if 'LOCAL-OPENDAP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['LOCAL-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            elif 'REMOTE-OPENDAP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['REMOTE-OPENDAP'], 'rel': 'enclosure', 'type': 'text/html', 'title': 'OPeNDAP URL' }})
+            if 'LOCAL-FTP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['LOCAL-FTP'], 'rel': 'enclosure', 'type': type, 'title': 'FTP URL' }})
+            elif 'REMOTE-FTP' in granuleRefDict:
+                item.append({'name': 'link', 'attribute': {'href': granuleRefDict['REMOTE-FTP'], 'rel': 'enclosure', 'type': type, 'title': 'FTP URL' }})
+        '''
+        item.append({'namespace': 'gibs', 'name': 'identifier', 'value': doc['_source']['identifier']})
+        '''
+        item.append({'namespace': 'podaac', 'name': 'shortName', 'value': doc['Dataset-ShortName'][0]})
+        
+        if 'GranuleSpatial-NorthLat' in doc and 'GranuleSpatial-EastLon' in doc and 'GranuleSpatial-SouthLat' in doc and 'GranuleSpatial-WestLon' in doc:
+            item.append({'namespace': 'georss', 'name': 'where', 'value': {'namespace': 'gml', 'name': 'Envelope', 'value': [{'namespace': 'gml', 'name': 'lowerCorner', 'value': ' '.join([doc['GranuleSpatial-WestLon'][0], doc['GranuleSpatial-SouthLat'][0]])}, {'namespace': 'gml', 'name': 'upperCorner', 'value': ' '.join([doc['GranuleSpatial-EastLon'][0], doc['GranuleSpatial-NorthLat'][0]])}]}})
+        '''
+        if startTime is not None:
+            item.append({'namespace': 'time', 'name': 'start', 'value': startTime})
+
+        if 'stop_time' in doc['_source'] and doc['_source']['stop_time'] is not None:
+            item.append({'namespace': 'time', 'name': 'end', 'value': DateUtility.convertTimeLongToIso(doc['_source']['stop_time'])})
+
+        if 'full' in self.parameters and self.parameters['full']:
+            self._populateItemWithAllMetadata(doc['_source'], item)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/httputility.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/httputility.py b/src/main/python/libraries/edge/httputility.py
new file mode 100644
index 0000000..d3fd650
--- /dev/null
+++ b/src/main/python/libraries/edge/httputility.py
@@ -0,0 +1,13 @@
+import tornado.httpclient
+
+class HttpUtility(object):
+    def getResponse(self, url, callback, body=None, headers=None):
+        # Issue an asynchronous GET (or POST, when a body is supplied) and
+        # hand the tornado HTTPResponse to the supplied callback once the
+        # running IOLoop completes the fetch.
+        requestHeaders = {'Connection': 'close'}
+        if headers is not None:
+            requestHeaders.update(headers)
+        if body is not None:
+            request = tornado.httpclient.HTTPRequest(url, method='POST', headers=requestHeaders, request_timeout=30, body=body)
+        else:
+            request = tornado.httpclient.HTTPRequest(url, method='GET', headers=requestHeaders, request_timeout=30)
+        httpClient = tornado.httpclient.AsyncHTTPClient()
+        httpClient.fetch(request, callback=callback)
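
A minimal sketch of driving HttpUtility outside the server, where Tornado's
IOLoop must be started and stopped by hand (the URL is a placeholder):

    import tornado.ioloop
    from edge.httputility import HttpUtility

    def on_response(response):
        # response is a tornado.httpclient.HTTPResponse
        print response.code, len(response.body or '')
        tornado.ioloop.IOLoop.instance().stop()

    HttpUtility().getResponse('http://localhost:8890/', on_response)
    tornado.ioloop.IOLoop.instance().start()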

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/__init__.py b/src/main/python/libraries/edge/opensearch/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/atomresponse.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/atomresponse.py b/src/main/python/libraries/edge/opensearch/atomresponse.py
new file mode 100644
index 0000000..ddf8bdb
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/atomresponse.py
@@ -0,0 +1,145 @@
+import logging
+
+from xml.dom.minidom import Document
+import xml.sax.saxutils
+
+from edge.opensearch.response import Response
+
+class AtomResponse(Response):
+    def __init__(self):
+        super(AtomResponse, self).__init__()
+        self.namespaces = {
+            '': 'http://www.w3.org/2005/Atom',
+            'opensearch': 'http://a9.com/-/spec/opensearch/1.1/',
+            'podaac': 'http://podaac.jpl.nasa.gov/opensearch/',
+            'georss': 'http://www.georss.org/georss',
+            'gml': 'http://www.opengis.net/gml',
+            'time': 'http://a9.com/-/opensearch/extensions/time/1.0/'
+        }
+
+        self.title = None
+        self.link = None
+        self.update = None
+        self.authors = []
+        self.variables = []
+        self.items = []
+        self.id = None
+        self.updated = None
+        self.parameters = {}
+
+    def addNamespace(self, name, uri):
+        self.namespaces[name] = uri
+
+    def removeNamespace(self, name):
+        del self.namespaces[name]
+
+    def generate(self, pretty=False):
+        logging.debug('AtomResponse.generate is called.')
+
+        document = Document()
+        feed = document.createElement('feed')
+        for namespace in self.namespaces.keys():
+            namespaceAttr = 'xmlns'
+            if namespace != '':
+                namespaceAttr += ':'+namespace
+            feed.setAttribute(namespaceAttr, self.namespaces[namespace])
+        document.appendChild(feed)
+
+        title = document.createElement('title')
+        feed.appendChild(title)
+        title.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.title)))
+        '''
+        link = document.createElement('link')
+        feed.appendChild(link)
+        link.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.link)))
+        '''
+
+        updated = document.createElement('updated')
+        feed.appendChild(updated)
+        updated.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.updated)))
+
+        id = document.createElement('id')
+        feed.appendChild(id)
+        id.appendChild(document.createTextNode(xml.sax.saxutils.escape(self.id)))
+
+        author = document.createElement('author')
+        feed.appendChild(author)
+        for authorName in self.authors:
+            authorElement = document.createElement('name')
+            author.appendChild(authorElement)
+            authorElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(authorName)))
+
+        for variable in self.variables:
+            '''
+            elementName = variable['name']
+            if 'namespace' in variable:
+                elementName = variable['namespace']+':'+elementName
+
+            variableElement = document.createElement(elementName)
+            feed.appendChild(variableElement)
+            variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(variable['value']))))
+            '''
+            self._createNode(document, variable, feed)
+
+        for item in self.items:
+            itemElement = document.createElement('entry')
+            feed.appendChild(itemElement)
+
+            for itemEntry in item:
+                self._createNode(document, itemEntry, itemElement)
+                '''
+                elementName = itemEntry['name']
+                if 'namespace' in itemEntry:
+                    elementName = itemEntry['namespace']+':'+elementName
+
+                variableElement = document.createElement(elementName)
+                itemElement.appendChild(variableElement)
+
+                if 'value' in itemEntry:
+                    value = itemEntry['value']
+                    if isinstance(value, list):
+                        if len(value) > 1:
+                            for valueEntry in value:
+                                valueName = 'value'
+                                if 'namespace' in itemEntry:
+                                    valueName = itemEntry['namespace']+':'+valueName
+                                valueElement = document.createElement(valueName)
+                                variableElement.appendChild(valueElement)
+                                valueElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(valueEntry))))
+                        else:
+                            variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value[0]))))
+                    elif isinstance(value, dict):
+                        for key in value.keys():
+                            valueName = key
+                            if 'namespace' in itemEntry:
+                                valueName = itemEntry['namespace']+':'+valueName
+                            valueElement = document.createElement(valueName)
+                            variableElement.appendChild(valueElement)
+                            valueElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value[key]))))
+                    else:
+                        variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value))))
+                else:
+                    if 'attribute' in itemEntry:
+                        for attr in itemEntry['attribute'].keys():
+                            variableElement.setAttribute(attr, itemEntry['attribute'][attr])
+                '''
+        return document.toprettyxml() if pretty else document.toxml('utf-8')
+
+    def _createNode(self, document, itemEntry, itemElement):
+        elementName = itemEntry['name']
+        if 'namespace' in itemEntry:
+            elementName = itemEntry['namespace']+':'+elementName
+        variableElement = document.createElement(elementName)
+        itemElement.appendChild(variableElement)
+        if 'value' in itemEntry:
+            value = itemEntry['value']
+            if isinstance(value, list):
+                for valueEntry in value:
+                    self._createNode(document, valueEntry, variableElement)
+            elif isinstance(value, dict):
+                self._createNode(document, value, variableElement)
+            else:
+                variableElement.appendChild(document.createTextNode(xml.sax.saxutils.escape(str(value))))
+        if 'attribute' in itemEntry:
+            for attr in itemEntry['attribute'].keys():
+                variableElement.setAttribute(attr, itemEntry['attribute'][attr])
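
A minimal sketch of assembling a feed by hand (all values are placeholders):

    from edge.opensearch.atomresponse import AtomResponse

    response = AtomResponse()
    response.title = 'Example Feed'
    response.id = 'tag:example.org,2012:feed'
    response.updated = '2012-01-01T00:00:00Z'
    response.authors.append('EDGE')
    response.variables.append(
        {'namespace': 'opensearch', 'name': 'totalResults', 'value': 1})
    response.items.append([
        {'name': 'title', 'value': 'Example entry'},
        {'name': 'link',
         'attribute': {'href': 'http://example.org/1', 'rel': 'enclosure'}}])
    print response.generate(pretty=True)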

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/atomresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/atomresponsebysolr.py b/src/main/python/libraries/edge/opensearch/atomresponsebysolr.py
new file mode 100644
index 0000000..c63fd5f
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/atomresponsebysolr.py
@@ -0,0 +1,134 @@
+import json
+import urllib
+
+from edge.opensearch.atomresponse import AtomResponse
+from collections import defaultdict
+
+class AtomResponseBySolr(AtomResponse):
+    def __init__(self):
+        super(AtomResponseBySolr, self).__init__()
+
+    def generate(self, solrResponse, pretty=False):
+        self._populate(solrResponse)
+        return super(AtomResponseBySolr, self).generate(pretty)
+
+    def _populate(self, solrResponse):
+        #response.title = 'OCSI Dataset Search: '+searchText
+        #response.description = 'Search result for "'+searchText+'"'
+        #response.link = searchUrl
+        self._populateChannel(solrResponse)
+
+        if solrResponse is None:
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': 1}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': 1}
+            )
+            self.parameters['startIndex'] = 0
+            url = self.link + '?' + urllib.urlencode(self.parameters)
+            self.variables.append({'name': 'link', 'attribute': {'href': url, 'rel': 'self', 'type': 'application/atom+xml'}})
+            self.variables.append({'name': 'link', 'attribute': {'href': url, 'rel': 'first', 'type': 'application/atom+xml'}})
+            item = [
+                {'name': 'title', 'value': 'Error'},
+                {'name': 'content', 'value': 'error'}
+            ]
+            self.items.append(item)
+        else:
+            #logging.debug(solrResponse)
+            solrJson = json.loads(solrResponse)
+            numFound = int(solrJson['response']['numFound'])
+            start = int(solrJson['response']['start'])
+            rows = int(solrJson['responseHeader']['params']['rows'])
+
+            self.parameters['startIndex'] = start
+            self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'self', 'type': 'application/atom+xml'}})
+            self.parameters['startIndex'] = 0
+            self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'first', 'type': 'application/atom+xml'}})
+            if start > 0:
+                if start - rows > 0:
+                    self.parameters['startIndex'] = start - rows
+                self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'previous', 'type': 'application/atom+xml'}})
+            if start + rows < numFound:
+                self.parameters['startIndex'] = start + rows
+                self.variables.append({'name': 'link', 'attribute': {'href': self.link + '?' + urllib.urlencode(self.parameters), 'rel': 'next', 'type': 'application/atom+xml'}})
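+            # e.g. start=20, rows=10, numFound=45 yields link startIndex values
+            # of self=20, first=0, previous=10 and next=30.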
+            
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'totalResults', 'value': solrJson['response']['numFound']}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'startIndex', 'value': solrJson['response']['start']}
+            )
+            self.variables.append(
+                {'namespace': 'opensearch', 'name': 'itemsPerPage', 'value': solrJson['responseHeader']['params']['rows']}
+            )
+
+            for doc in solrJson['response']['docs']:
+                """
+                item = [
+                    {'name': 'title', 'value': doc['Dataset-LongName'][0]},
+                    {'name': 'description', 'value': doc['Dataset-Description'][0]},
+                    {'name': 'link', 'value': self._configuration.get('portal', 'datasetUrl')+'/'+doc['Dataset-ShortName'][0]}
+                ]
+                """
+                item = []
+                '''
+                #Handle dataset_location_policy values differently
+                if 'DatasetLocationPolicy-Type' in doc and 'DatasetLocationPolicy-BasePath' in doc:
+                    for i, x in enumerate(doc['DatasetLocationPolicy-Type']):
+                        item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(x.title()), 'value': doc['DatasetLocationPolicy-BasePath'][i]})
+                    del doc['DatasetLocationPolicy-Type']
+                    del doc['DatasetLocationPolicy-BasePath']
+                
+                multiValuedElementsKeys = ('DatasetRegion-', 'DatasetCharacter-', 'DatasetCitation-', 'DatasetContact-Contact-', 'DatasetDatetime-', 
+                                           'DatasetInteger-', 'DatasetParameter-', 'DatasetProject-', 'DatasetReal-', 'DatasetResource-', 
+                                           'DatasetSoftware-', 'DatasetSource-', 'DatasetVersion-', 'Collection-',
+                                           'GranuleArchive-', 'GranuleReference-', 'GranuleReal-')
+                multiValuedElements = defaultdict(list)
+                for docKey in doc.keys():
+                    if docKey.startswith(multiValuedElementsKeys):
+                        multiValuedElements[docKey.split('-', 1)[0]].append(docKey)
+                    else:
+                        item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(docKey), 'value': doc[docKey]})
+                for multiValuedKey in multiValuedElements:
+                    for i, x in enumerate(doc[multiValuedElements[multiValuedKey][0]]):
+                        values = {}
+                        for key in multiValuedElements[multiValuedKey]:
+                            values[self._camelCaseStripHyphen(key.split('-', 1)[1])] = doc[key][i]
+                        item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(multiValuedKey), 'value': values})
+                '''
+                self._populateItem(solrResponse, doc, item)
+                self.items.append(item)
+
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
+    
+    def _populateItemWithPodaacMetadata(self, doc, item, multiValuedElementsKeys):
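+        # Solr stores repeating metadata groups as parallel multi-valued fields
+        # (e.g. DatasetResource-Name, DatasetResource-Path); regroup them by
+        # shared prefix so the i-th values form one nested podaac element,
+        # while plain fields are emitted directly.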
+        ignoreElementsEndingWith = ('-Full', '-Long')
+        multiValuedElements = defaultdict(list)
+        for docKey in doc.keys():
+            if docKey.startswith(multiValuedElementsKeys):
+                multiValuedElements[docKey.split('-', 1)[0]].append(docKey)
+            elif not docKey.endswith(ignoreElementsEndingWith):
+                if len(doc[docKey]) > 1:
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(docKey), 'value': [{'namespace': 'podaac', 'name': 'value', 'value': x} for x in doc[docKey]]})
+                else:
+                    item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(docKey), 'value': doc[docKey][0]})
+        for multiValuedKey in multiValuedElements:
+            for i, x in enumerate(doc[multiValuedElements[multiValuedKey][0]]):
+                values = []
+                for key in multiValuedElements[multiValuedKey]:
+                    if not key.endswith(ignoreElementsEndingWith):
+                        values.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(key.split('-', 1)[1]), 'value': doc[key][i]})
+                item.append({'namespace': 'podaac', 'name': self._camelCaseStripHyphen(multiValuedKey), 'value': values})
+
+    def _camelCaseStripHyphen(self, key):
+        # Special case: strip the duplicated 'Element' and 'Contact' segments
+        # from the element tag before camel-casing.
+        key = key.replace('-Element-', '', 1).replace('Contact-', '', 1)
+        return key[0].lower() + key[1:].replace('-', '')
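
  A quick illustration of the tag conversion above (stand-alone copy of
  _camelCaseStripHyphen for demonstration only):

    def camel_case_strip_hyphen(key):
        key = key.replace('-Element-', '', 1).replace('Contact-', '', 1)
        return key[0].lower() + key[1:].replace('-', '')

    print(camel_case_strip_hyphen('Dataset-ShortName'))     # datasetShortName
    print(camel_case_strip_hyphen('Contact-Email'))         # email
    print(camel_case_strip_hyphen('DatasetRegion-Region'))  # datasetRegionRegion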

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/libraries/edge/opensearch/datacastingresponsebysolr.py
----------------------------------------------------------------------
diff --git a/src/main/python/libraries/edge/opensearch/datacastingresponsebysolr.py b/src/main/python/libraries/edge/opensearch/datacastingresponsebysolr.py
new file mode 100644
index 0000000..b560d7e
--- /dev/null
+++ b/src/main/python/libraries/edge/opensearch/datacastingresponsebysolr.py
@@ -0,0 +1,71 @@
+import json
+import logging
+
+from edge.opensearch.fgdcresponse import FgdcResponse
+from edge.dateutility import DateUtility
+
+class DatacastingResponseBySolr(FgdcResponse):
+    def __init__(self, portalUrl, archivedWithin):
+        super(DatacastingResponseBySolr, self).__init__()
+        
+        self.addNamespace("datacasting", "http://datacasting.jpl.nasa.gov/datacasting")
+        self.addNamespace("georss", "http://www.georss.org/georss")
+        self.addNamespace("gml", "http://www.opengis.net/gml")
+        
+        self.portalUrl = portalUrl
+        self.archivedWithin = archivedWithin
+
+    def generate(self, solrDatasetResponse, solrGranuleResponse = None, pretty=False):
+        self._populate(solrDatasetResponse, solrGranuleResponse)
+        return super(DatacastingResponseBySolr, self).generate(pretty)
+
+    def _populate(self, solrDatasetResponse, solrGranuleResponse = None):
+        if solrDatasetResponse is not None:
+            solrJson = json.loads(solrDatasetResponse)
+
+            logging.debug('dataset count: '+str(len(solrJson['response']['docs'])))
+
+            if len(solrJson['response']['docs']) == 1:
+                # ok now populate variables!
+                doc = solrJson['response']['docs'][0]
+                
+                self.variables['doc'] = doc
+                
+                # Format dates
+                try:
+                    self.variables['DatasetCitation_ReleaseYear'] = DateUtility.convertTimeLong(doc['DatasetCitation-ReleaseDateLong'][0], '%Y')
+                except Exception:
+                    # Release date is optional; omit the year if missing or malformed.
+                    pass
+                
+                # Link to dataset portal page
+                self.variables['DatasetPortalPage'] = self.portalUrl+'/'+doc['Dataset-ShortName'][0]
+                
+                # Set default pub date to x hours ago because we cast all granules archived within the last x hours
+                self.variables['PubDate'] = DateUtility.pastDateRFC822(self.archivedWithin)
+            else:
+                raise Exception('No dataset found')
+                
+        if solrGranuleResponse is not None:
+            solrGranuleJson = json.loads(solrGranuleResponse)
+            
+            logging.debug('granule count: '+str(len(solrGranuleJson['response']['docs'])))
+            
+            pubDate = 0
+            for doc in solrGranuleJson['response']['docs']:
+                if doc['Granule-ArchiveTimeLong'][0] > pubDate:
+                    pubDate = doc['Granule-ArchiveTimeLong'][0]
+                self._populateItem(solrGranuleResponse, doc, None)
+            
+            if pubDate != 0:
+                # Set pub date to latest granule archive date
+                self.variables['PubDate'] = DateUtility.convertTimeLongToRFC822(pubDate)
+                
+            self.variables['granules'] = solrGranuleJson['response']['docs']
+        else:
+            raise Exception('No granules found')
+                
+    def _populateChannel(self, solrResponse):
+        pass
+
+    def _populateItem(self, solrResponse, doc, item):
+        pass
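
  DateUtility here comes from edge.dateutility; assuming its *TimeLong helpers
  take epoch milliseconds, the RFC 822 conversion used for PubDate behaves
  roughly like this sketch (an assumption, not the actual helper):

    from email.utils import formatdate

    def convert_time_long_to_rfc822(millis):
        # Hypothetical equivalent of DateUtility.convertTimeLongToRFC822:
        # epoch milliseconds -> RFC 822 date string in GMT.
        return formatdate(millis / 1000.0, usegmt=True)

    print(convert_time_long_to_rfc822(0))  # Thu, 01 Jan 1970 00:00:00 GMT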


[05/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/solr/product/conf/schema.xml
----------------------------------------------------------------------
diff --git a/src/main/solr/product/conf/schema.xml b/src/main/solr/product/conf/schema.xml
new file mode 100644
index 0000000..969d51f
--- /dev/null
+++ b/src/main/solr/product/conf/schema.xml
@@ -0,0 +1,1201 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--  
+ This is the Solr schema file. This file should be named "schema.xml" and
+ should be in the conf directory under the solr home
+ (i.e. ./solr/conf/schema.xml by default) 
+ or located where the classloader for the Solr webapp can find it.
+
+ This example schema is the recommended starting point for users.
+ It should be kept correct and concise, usable out-of-the-box.
+
+ For more information on how to customize this file, please see
+ http://wiki.apache.org/solr/SchemaXml
+
+ PERFORMANCE NOTE: this schema includes many optional features and should not
+ be used for benchmarking.  To improve performance one could
+  - set stored="false" for all fields possible (esp large fields) when you
+    only need to search on the field but don't need to return the original
+    value.
+  - set indexed="false" if you don't need to search on the field, but only
+    return the field as a result of searching on other indexed fields.
+  - remove all unneeded copyField statements
+  - for best index size and searching performance, set "index" to false
+    for all general text fields, use copyField to copy them to the
+    catchall "text" field, and use that for searching.
+  - For maximum indexing performance, use the ConcurrentUpdateSolrServer
+    java client.
+  - Remember to run the JVM in server mode, and use a higher logging level
+    that avoids logging every request
+-->
+
+<schema name="example-data-driven-schema" version="1.5">
+  <!-- attribute "name" is the name of this schema and is only used for display purposes.
+       version="x.y" is Solr's version number for the schema syntax and 
+       semantics.  It should not normally be changed by applications.
+
+       1.0: multiValued attribute did not exist, all fields are multiValued 
+            by nature
+       1.1: multiValued attribute introduced, false by default 
+       1.2: omitTermFreqAndPositions attribute introduced, true by default 
+            except for text fields.
+       1.3: removed optional field compress feature
+       1.4: autoGeneratePhraseQueries attribute introduced to drive QueryParser
+            behavior when a single string produces multiple tokens.  Defaults 
+            to off for version >= 1.4
+       1.5: omitNorms defaults to true for primitive field types 
+            (int, float, boolean, string...)
+     -->
+
+    <!-- Valid attributes for fields:
+     name: mandatory - the name for the field
+     type: mandatory - the name of a field type from the 
+       <types> fieldType section
+     indexed: true if this field should be indexed (searchable or sortable)
+     stored: true if this field should be retrievable
+     docValues: true if this field should have doc values. Doc values are
+       useful for faceting, grouping, sorting and function queries. Although not
+       required, doc values will make the index faster to load, more
+       NRT-friendly and more memory-efficient. They however come with some
+       limitations: they are currently only supported by StrField, UUIDField
+       and all Trie*Fields, and depending on the field type, they might
+       require the field to be single-valued, be required or have a default
+       value (check the documentation of the field type you're interested in
+       for more information)
+     multiValued: true if this field may contain multiple values per document
+     omitNorms: (expert) set to true to omit the norms associated with
+       this field (this disables length normalization and index-time
+       boosting for the field, and saves some memory).  Only full-text
+       fields or fields that need an index-time boost need norms.
+       Norms are omitted for primitive (non-analyzed) types by default.
+     termVectors: [false] set to true to store the term vector for a
+       given field.
+       When using MoreLikeThis, fields used for similarity should be
+       stored for best performance.
+     termPositions: Store position information with the term vector.  
+       This will increase storage costs.
+     termOffsets: Store offset information with the term vector. This 
+       will increase storage costs.
+     required: The field is required.  It will throw an error if the
+       value does not exist
+     default: a value that should be used if no value is specified
+       when adding a document.
+   -->
+
+    <!-- field names should consist of alphanumeric or underscore characters only and
+      not start with a digit.  This is not currently strictly enforced,
+      but other field names will not have first class support from all components
+      and back compatibility is not guaranteed.  Names with both leading and
+      trailing underscores (e.g. _version_) are reserved.
+      -->
+
+    <!-- In this data_driven_schema_configs configset, only three fields are pre-declared:
+         id, _version_, and _text_.  All other fields will be type guessed and added via the
+         "add-unknown-fields-to-the-schema" update request processor chain declared
+         in solrconfig.xml.
+
+         Note that many dynamic fields are also defined - you can use them to specify a
+         field's type via field naming conventions - see below.
+ 
+         WARNING: The _text_ catch-all field will significantly increase your index size.
+           If you don't need it, consider removing it and the corresponding copyField directive.
+      -->
+
+    <field name="_version_" type="long" indexed="true" stored="true"/>
+
+    <!--*******************************************************************************************************************************************-->
+    <!-- GIBS: product schema                                                                                                                      -->
+    <!--*******************************************************************************************************************************************-->
+    <!-- product                        -->
+    <!-- granule_imagery         *      -->
+    <!--     granule             *      -->
+    <!-- product_archive         *      -->
+    <!-- product_contact         0      -->
+    <!--     contact             0      -->
+    <!-- product_data_day        1      -->
+    <!-- product_meta_history    1      -->
+    <!-- product_operation       *      -->
+    <!-- product_reference       0      -->
+
+    <!-- product_element         0      -->
+    <!--     product_character   0      -->
+    <!--     product_datetime    0      -->
+    <!--     product_integer     0      -->
+    <!--     product_real        0      -->
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="id"                                             type="string"  indexed="true"  stored="true"  required="true"  multiValued="false" />
+    <field name="product_id"                                     type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_pt_id"                                  type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_partial_id"                             type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_revision"                               type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_version"                                type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_name"                                   type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_rel_path"                               type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_root_path"                              type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_status"                                 type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_start_time"                             type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_start_time_string"                      type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_stop_time"                              type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_stop_time_string"                       type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_create_time"                            type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_create_time_string"                     type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_archive_time"                           type="long"    indexed="true"  stored="true"  required="false" multiValued="false" />
+    <field name="product_archive_time_string"                    type="string"  indexed="true"  stored="true"  required="false" multiValued="false" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_granule_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_granule_id_list"                        type="long"    indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_granule_version_list"                   type="long"    indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_granule_dataset_id_list"                type="long"    indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_granule_metadata_endpoint_list"         type="string"  indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_granule_remote_granule_ur_list"         type="string"  indexed="true"  stored="true" required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_operation_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_operation_version_list"                 type="long"    indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_agent_list"                   type="string"  indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_list"                         type="string"  indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_command_list"                 type="string"  indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_arguments_list"               type="string"  indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_start_time_list"              type="long"    indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_stop_time_list"               type="long"    indexed="false" stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_start_time_string_list"       type="string"  indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_operation_stop_time_string_list"        type="string"  indexed="true"  stored="true" required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_meta_history_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_meta_history_version"                   type="long"    indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_version_id"                type="long"    indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_revision_history"          type="string"  indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_creation_date"             type="long"    indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_last_revision_date"        type="long"    indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_creation_date_string"      type="string"  indexed="true" stored="true"  required="false"  multiValued="false" />
+    <field name="product_meta_history_last_revision_date_string" type="string"  indexed="true" stored="true"  required="false"  multiValued="false" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_archive -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_archive_version_list"                   type="long"    indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_name_list"                      type="string"  indexed="false" stored="true"  required="false"  multiValued="true" /> 
+    <field name="product_archive_type_list"                      type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_file_size_list"                 type="long"    indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_checksum_list"                  type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_compress_flag_list"             type="boolean" indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_status_list"                    type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_reference_description_list"     type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_reference_name_list"            type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_reference_type_list"            type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_archive_reference_status_list"          type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_reference_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_reference_version_list"                 type="long"    indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_reference_type_list"                    type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_reference_name_list"                    type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_reference_path_list"                    type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_reference_description_list"             type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+    <field name="product_reference_status_list"                  type="string"  indexed="false" stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_data_day_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_data_day_version_list"                  type="long"    indexed="false"  stored="true" required="false"  multiValued="true" />
+    <field name="product_data_day_list"                          type="long"    indexed="false"  stored="true" required="false"  multiValued="true" /> 
+    <field name="product_data_day_string_list"                   type="string"  indexed="false"  stored="true" required="false"  multiValued="true" /> 
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_contact_view -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_contact_version_list"                        type="long"    indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_role_list"                           type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_first_name_list"                     type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_last_name_list"                      type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_middle_name_list"                    type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_address_list"                        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_notify_type_list"                    type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_email_list"                          type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_phone_list"                          type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_fax_list"                            type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_long_name_list"             type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_short_name_list"            type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_type_list"                  type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_resource_descriptions_list" type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_resource_names_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_resource_paths_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+    <field name="product_contact_provider_resource_types_list"        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_element (do not set the required fields just in case there is no record) -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_element_scope_list"                     type="string"  indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_obligation_flag_list"           type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_version_list"                type="long"    indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_type_list"                   type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_description_list"            type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_scope_list"                  type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_long_name_list"              type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_short_name_list"             type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+    <field name="product_element_dd_max_length_list"             type="boolean" indexed="true"  stored="true" required="false"  multiValued="true" />
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_character (do not set the required fields just in case there is no record) -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_character_version_list"                 type="int"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_character_value_list"                   type="string"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_datetime (do not set the required fields just in case there is no record) -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_datetime_version_list"                  type="int"     indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_datetime_value_list"                    type="long"    indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_datetime_value_string_list"             type="string"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_integer (do not set the required fields just in case there is no record) -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_integer_version_list"                   type="int"     indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_integer_value_list"                     type="int"     indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_integer_units_list"                     type="string"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <!-- product_real (do not set the required fields just in case there is no record) -->
+    <!--+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-->
+    <field name="product_real_version_list"                      type="string"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_real_units_list"                        type="string"  indexed="true"  stored="true"  required="false"  multiValued="true"/>
+    <field name="product_real_value_list"                        type="float"   indexed="true"  stored="true"  required="false"  multiValued="true"/>
+
+    <!--*******************************************************************************************************************************************-->
+
+    <!-- Dynamic field definitions allow using convention over configuration
+       for fields via the specification of patterns to match field names. 
+       EXAMPLE:  name="*_i" will match any field ending in _i (like myid_i, z_i)
+       RESTRICTION: the glob-like pattern in the name attribute must have
+       a "*" only at the start or the end.  -->
+   
+    <dynamicField name="*_i"  type="int"    indexed="true"  stored="true"/>
+    <dynamicField name="*_is" type="ints"    indexed="true"  stored="true"/>
+    <dynamicField name="*_s"  type="string"  indexed="true"  stored="true" />
+    <dynamicField name="*_ss" type="strings"  indexed="true"  stored="true"/>
+    <dynamicField name="*_l"  type="long"   indexed="true"  stored="true"/>
+    <dynamicField name="*_ls" type="longs"   indexed="true"  stored="true"/>
+    <dynamicField name="*_t"   type="text_general" indexed="true" stored="true"/>
+    <dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/>
+    <dynamicField name="*_b"  type="boolean" indexed="true" stored="true"/>
+    <dynamicField name="*_bs" type="booleans" indexed="true" stored="true"/>
+    <dynamicField name="*_f"  type="float"  indexed="true"  stored="true"/>
+    <dynamicField name="*_fs" type="floats"  indexed="true"  stored="true"/>
+    <dynamicField name="*_d"  type="double" indexed="true"  stored="true"/>
+    <dynamicField name="*_ds" type="doubles" indexed="true"  stored="true"/>
+
+    <!-- Type used to index the lat and lon components for the "location" FieldType -->
+    <dynamicField name="*_coordinate"  type="tdouble" indexed="true"  stored="false" />
+
+    <dynamicField name="*_dt"  type="date"    indexed="true"  stored="true"/>
+    <dynamicField name="*_dts" type="date"    indexed="true"  stored="true" multiValued="true"/>
+    <dynamicField name="*_p"  type="location" indexed="true" stored="true"/>
+    <dynamicField name="*_srpt"  type="location_rpt" indexed="true" stored="true"/>
+
+    <!-- some trie-coded dynamic fields for faster range queries -->
+    <dynamicField name="*_ti" type="tint"    indexed="true"  stored="true"/>
+    <dynamicField name="*_tis" type="tints"    indexed="true"  stored="true"/>
+    <dynamicField name="*_tl" type="tlong"   indexed="true"  stored="true"/>
+    <dynamicField name="*_tls" type="tlongs"   indexed="true"  stored="true"/>
+    <dynamicField name="*_tf" type="tfloat"  indexed="true"  stored="true"/>
+    <dynamicField name="*_tfs" type="tfloats"  indexed="true"  stored="true"/>
+    <dynamicField name="*_td" type="tdouble" indexed="true"  stored="true"/>
+    <dynamicField name="*_tds" type="tdoubles" indexed="true"  stored="true"/>
+    <dynamicField name="*_tdt" type="tdate"  indexed="true"  stored="true"/>
+    <dynamicField name="*_tdts" type="tdates"  indexed="true"  stored="true"/>
+
+    <dynamicField name="*_c"   type="currency" indexed="true"  stored="true"/>
+
+    <dynamicField name="ignored_*" type="ignored" multiValued="true"/>
+    <dynamicField name="attr_*" type="text_general" indexed="true" stored="true" multiValued="true"/>
+
+    <dynamicField name="random_*" type="random" />
+
+    <!-- uncomment the following to ignore any fields that don't already match an existing 
+        field name or dynamic field, rather than reporting them as an error. 
+        alternately, change the type="ignored" to some other type e.g. "text" if you want 
+        unknown fields indexed and/or stored by default 
+        
+        NB: use of "*" dynamic fields will disable field type guessing and adding
+        unknown fields to the schema. --> 
+    <!--dynamicField name="*" type="ignored" multiValued="true" /-->
+   
+
+
+  <!-- Field to use to determine and enforce document uniqueness. 
+      Unless this field is marked with required="false", it will be a required field
+   -->
+  <uniqueKey>id</uniqueKey>
+
+  <!-- copyField commands copy one field to another at the time a document
+       is added to the index.  It's used either to index the same field differently,
+       or to add multiple fields to the same field for easier/faster searching.
+
+   <copyField source="cat" dest="text"/>
+   <copyField source="name" dest="text"/>
+   <copyField source="manu" dest="text"/>
+   <copyField source="features" dest="text"/>
+   <copyField source="includes" dest="text"/>
+   <copyField source="manu" dest="manu_exact"/>
+   -->
+
+  <!-- Copy the price into a currency enabled field (default USD)
+   <copyField source="price" dest="price_c"/>
+   -->
+
+  <!-- Text fields from SolrCell to search by default in our catch-all field
+   <copyField source="title" dest="text"/>
+   <copyField source="author" dest="text"/>
+   <copyField source="description" dest="text"/>
+   <copyField source="keywords" dest="text"/>
+   <copyField source="content" dest="text"/>
+   <copyField source="content_type" dest="text"/>
+   <copyField source="resourcename" dest="text"/>
+   <copyField source="url" dest="text"/>
+   -->
+
+  <!-- Create a string version of author for faceting
+   <copyField source="author" dest="author_s"/>
+   -->
+	
+  <!-- Above, multiple source fields are copied to the [text] field. 
+	  Another way to map multiple source fields to the same 
+	  destination field is to use the dynamic field syntax. 
+	  copyField also supports a maxChars to copy setting.  -->
+	   
+  <!-- <copyField source="*_t" dest="text" maxChars="3000"/> -->
+
+  <!-- copy name to alphaNameSort, a field designed for sorting by name -->
+  <!-- <copyField source="name" dest="alphaNameSort"/> -->
+ 
+
+    <!-- field type definitions. The "name" attribute is
+       just a label to be used by field definitions.  The "class"
+       attribute and any other attributes determine the real
+       behavior of the fieldType.
+         Class names starting with "solr" refer to java classes in a
+       standard package such as org.apache.solr.analysis
+    -->
+
+    <!-- The StrField type is not analyzed, but indexed/stored verbatim.
+       It supports doc values but in that case the field needs to be
+       single-valued and either required or have a default value.
+      -->
+    <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
+    <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true"/>
+
+    <!-- boolean type: "true" or "false" -->
+    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
+
+    <fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/>
+
+    <!-- The sortMissingLast and sortMissingFirst attributes are optional and are
+         currently supported on types that are sorted internally as strings
+         and on numeric types.
+	     This includes "string","boolean", and, as of 3.5 (and 4.x),
+	     int, float, long, date, double, including the "Trie" variants.
+       - If sortMissingLast="true", then a sort on this field will cause documents
+         without the field to come after documents with the field,
+         regardless of the requested sort order (asc or desc).
+       - If sortMissingFirst="true", then a sort on this field will cause documents
+         without the field to come before documents with the field,
+         regardless of the requested sort order.
+       - If sortMissingLast="false" and sortMissingFirst="false" (the default),
+         then default lucene sorting will be used which places docs without the
+         field first in an ascending sort and last in a descending sort.
+    -->    
+
+    <!--
+      Default numeric field types. For faster range queries, consider the tint/tfloat/tlong/tdouble types.
+
+      These fields support doc values, but they require the field to be
+      single-valued and either be required or have a default value.
+    -->
+    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
+
+    <fieldType name="ints" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="floats" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="longs" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="doubles" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+
+    <!--
+     Numeric field types that index each value at various levels of precision
+     to accelerate range queries when the number of values between the range
+     endpoints is large. See the javadoc for NumericRangeQuery for internal
+     implementation details.
+
+     Smaller precisionStep values (specified in bits) will lead to more tokens
+     indexed per value, slightly larger index size, and faster range queries.
+     A precisionStep of 0 disables indexing at different precision levels.
+    -->
+    <fieldType name="tint" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
+    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
+    
+    <fieldType name="tints" class="solr.TrieIntField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tfloats" class="solr.TrieFloatField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tlongs" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+    <fieldType name="tdoubles" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0" multiValued="true"/>
+
+    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
+         is a more restricted form of the canonical representation of dateTime
+         http://www.w3.org/TR/xmlschema-2/#dateTime    
+         The trailing "Z" designates UTC time and is mandatory.
+         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
+         All other components are mandatory.
+
+         Expressions can also be used to denote calculations that should be
+         performed relative to "NOW" to determine the value, ie...
+
+               NOW/HOUR
+                  ... Round to the start of the current hour
+               NOW-1DAY
+                  ... Exactly 1 day prior to now
+               NOW/DAY+6MONTHS+3DAYS
+                  ... 6 months and 3 days in the future from the start of
+                      the current day
+                      
+         Consult the TrieDateField javadocs for more information.
+
+         Note: For faster range queries, consider the tdate type
+      -->
+    <fieldType name="date" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0"/>
+    <fieldType name="dates" class="solr.TrieDateField" precisionStep="0" positionIncrementGap="0" multiValued="true"/>
+
+    <!-- A Trie based date field for faster date range queries and date faceting. -->
+    <fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
+
+    <fieldType name="tdates" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0" multiValued="true"/>
+
+
+    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
+    <fieldType name="binary" class="solr.BinaryField"/>
+
+    <!-- The "RandomSortField" is not used to store or search any
+         data.  You can declare fields of this type in your schema
+         to generate pseudo-random orderings of your docs for sorting 
+         or function purposes.  The ordering is generated based on the field
+         name and the version of the index. As long as the index version
+         remains unchanged, and the same field name is reused,
+         the ordering of the docs will be consistent.  
+         If you want different pseudo-random orderings of documents,
+         for the same version of the index, use a dynamicField and
+         change the field name in the request.
+     -->
+    <fieldType name="random" class="solr.RandomSortField" indexed="true" />
+
+    <!-- solr.TextField allows the specification of custom text analyzers
+         specified as a tokenizer and a list of token filters. Different
+         analyzers may be specified for indexing and querying.
+
+         The optional positionIncrementGap puts space between multiple fields of
+         this type on the same document, with the purpose of preventing false phrase
+         matching across fields.
+
+         For more info on customizing your analyzer chain, please see
+         http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
+     -->
+
+    <!-- One can also specify an existing Analyzer class that has a
+         default constructor via the class attribute on the analyzer element.
+         Example:
+    <fieldType name="text_greek" class="solr.TextField">
+      <analyzer class="org.apache.lucene.analysis.el.GreekAnalyzer"/>
+    </fieldType>
+    -->
+
+    <!-- A text field that only splits on whitespace for exact matching of words -->
+  <dynamicField name="*_ws" type="text_ws"  indexed="true"  stored="true"/>
+    <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A general text field that has reasonable, generic
+         cross-language defaults: it tokenizes with StandardTokenizer,
+	 removes stop words from case-insensitive "stopwords.txt"
+	 (empty by default), and down cases.  At query time only, it
+	 also applies synonyms. -->
+    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English: it
+         tokenizes with StandardTokenizer, removes English stop words
+         (lang/stopwords_en.txt), down cases, protects words from protwords.txt, and
+         finally applies Porter's stemming.  The query time analyzer
+         also applies synonyms from synonyms.txt. -->
+    <dynamicField name="*_txt_en" type="text_en"  indexed="true"  stored="true"/>
+    <fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case insensitive stop word removal.
+        -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+	-->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.EnglishPossessiveFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+	-->
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- A text field with defaults appropriate for English, plus
+	 aggressive word-splitting and autophrase features enabled.
+	 This field is just like text_en, except it adds
+	 WordDelimiterFilter to enable splitting and matching of
+	 words on case changes, alphanumeric boundaries, and
+	 non-alphanumeric chars.  This means certain compound-word
+	 cases will work; for example, the query "wi fi" will match
+	 documents containing "WiFi" or "wi-fi".
+        -->
+    <dynamicField name="*_txt_en_split" type="text_en_splitting"  indexed="true"  stored="true"/>
+    <fieldType name="text_en_splitting" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer type="index">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <!-- in this example, we will only use synonyms at query time
+        <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
+        -->
+        <!-- Case-insensitive stop word removal. -->
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory"
+                ignoreCase="true"
+                words="lang/stopwords_en.txt"
+            />
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.PorterStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Less flexible matching, but fewer false matches.  Probably not ideal for product names,
+         but may be good for SKUs.  Dashes can be inserted in the wrong place and still match. -->
+    <dynamicField name="*_txt_en_split_tight" type="text_en_splitting_tight"  indexed="true"  stored="true"/>
+    <fieldType name="text_en_splitting_tight" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
+      <analyzer>
+        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
+        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0" generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
+        <filter class="solr.EnglishMinimalStemFilterFactory"/>
+        <!-- this filter can remove any duplicate tokens that appear at the same position - sometimes
+             possible with WordDelimiterFilter in conjunction with stemming. -->
+        <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Just like text_general except it reverses the characters of
+	 each token, to enable more efficient leading wildcard queries. -->
+  <dynamicField name="*_txt_rev" type="text_general_rev"  indexed="true"  stored="true"/>
+  <fieldType name="text_general_rev" class="solr.TextField" positionIncrementGap="100">
+      <analyzer type="index">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
+                maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+  <dynamicField name="*_phon_en" type="phonetic_en"  indexed="true"  stored="true"/>
+  <fieldType name="phonetic_en" stored="false" indexed="true" class="solr.TextField" >
+      <analyzer>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.DoubleMetaphoneFilterFactory" inject="false"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- lowercases the entire field value, keeping it as a single token.  -->
+    <dynamicField name="*_s_lower" type="lowercase"  indexed="true"  stored="true"/>
+    <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.KeywordTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory" />
+      </analyzer>
+    </fieldType>
+
+    <!-- 
+      Example of using PathHierarchyTokenizerFactory at index time, so
+      queries for paths match documents at that path, or in descendent paths
+    -->
+  <dynamicField name="*_descendent_path" type="descendent_path"  indexed="true"  stored="true"/>
+  <fieldType name="descendent_path" class="solr.TextField">
+      <analyzer type="index">
+        <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.KeywordTokenizerFactory" />
+      </analyzer>
+    </fieldType>
+    <!-- 
+      Example of using PathHierarchyTokenizerFactory at query time, so
+      queries for paths match documents at that path, or in ancestor paths
+    -->
+    <dynamicField name="*_ancestor_path" type="ancestor_path"  indexed="true"  stored="true"/>
+    <fieldType name="ancestor_path" class="solr.TextField">
+      <analyzer type="index">
+        <tokenizer class="solr.KeywordTokenizerFactory" />
+      </analyzer>
+      <analyzer type="query">
+        <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/" />
+      </analyzer>
+    </fieldType>
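+
+    <!-- Illustrative sketch (an assumption): with descendent_path, indexing
+         "/ocean/sst" lets a query for "/ocean" match it; with ancestor_path,
+         indexing "/ocean" lets a query for "/ocean/sst" match it.
+    -->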
+
+    <!-- since fields of this type are by default not stored or indexed,
+         any data added to them will be ignored outright.  --> 
+    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
+
+    <!-- This point type indexes the coordinates as separate fields (subFields).
+      If subFieldType is defined, it references a type, and a dynamic field
+      definition is created matching *___<typename>.  Alternatively, if
+      subFieldSuffix is defined, that suffix is used to create the subFields.
+      Example: if subFieldType="double", then the coordinates would be
+        indexed in fields myloc_0___double, myloc_1___double.
+      Example: if subFieldSuffix="_d", then the coordinates would be indexed
+        in fields myloc_0_d, myloc_1_d.
+      The subFields are an implementation detail of the fieldType, and end
+      users normally should not need to know about them.
+     -->
+  <dynamicField name="*_point" type="point"  indexed="true"  stored="true"/>
+  <fieldType name="point" class="solr.PointType" dimension="2" subFieldSuffix="_d"/>
+
+    <!-- A specialized field for geospatial search. If indexed, this fieldType must not be multivalued. -->
+    <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>
+
+    <!-- An alternative geospatial field type new to Solr 4.  It supports multiValued and polygon shapes.
+      For more information about this and other Spatial fields new to Solr 4, see:
+      http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
+    -->
+    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
+               geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers" />
+
+    <!-- Money/currency field type. See http://wiki.apache.org/solr/MoneyFieldType
+        Parameters:
+          defaultCurrency: Specifies the default currency if none is specified. Defaults to "USD".
+          precisionStep:   Specifies the precisionStep for the TrieLong field used for the amount.
+          providerClass:   Lets you plug in another exchange-rate provider backend:
+                           solr.FileExchangeRateProvider is the default and takes one parameter:
+                             currencyConfig: name of an XML file holding the exchange rates
+                           solr.OpenExchangeRatesOrgProvider uses rates from openexchangerates.org:
+                             ratesFileLocation: URL or path to a rates JSON file (default: latest.json on the web)
+                             refreshInterval: number of minutes between rate fetches (default: 1440, min: 60)
+   -->
+    <fieldType name="currency" class="solr.CurrencyField" precisionStep="8" defaultCurrency="USD" currencyConfig="currency.xml" />
+
+    <!-- some examples for different languages (generally ordered by ISO code) -->
+
+    <!-- Arabic -->
+    <dynamicField name="*_txt_ar" type="text_ar"  indexed="true"  stored="true"/>
+    <fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- for any non-arabic -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt" />
+        <!-- normalizes ﻯ to ﻱ, etc -->
+        <filter class="solr.ArabicNormalizationFilterFactory"/>
+        <filter class="solr.ArabicStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Bulgarian -->
+    <dynamicField name="*_txt_bg" type="text_bg"  indexed="true"  stored="true"/>
+    <fieldType name="text_bg" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/> 
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_bg.txt" /> 
+        <filter class="solr.BulgarianStemFilterFactory"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- Catalan -->
+    <dynamicField name="*_txt_ca" type="text_ca"  indexed="true"  stored="true"/>
+    <fieldType name="text_ca" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ca.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ca.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Catalan"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- CJK bigram (see text_ja for a Japanese configuration using morphological analysis) -->
+    <dynamicField name="*_txt_cjk" type="text_cjk"  indexed="true"  stored="true"/>
+    <fieldType name="text_cjk" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- normalize character width before bigramming, so that e.g. half-width kana and dakuten combine -->
+        <filter class="solr.CJKWidthFilterFactory"/>
+        <!-- for any non-CJK -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.CJKBigramFilterFactory"/>
+      </analyzer>
+    </fieldType>
+
+    <!-- Czech -->
+    <dynamicField name="*_txt_cz" type="text_cz"  indexed="true"  stored="true"/>
+    <fieldType name="text_cz" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_cz.txt" />
+        <filter class="solr.CzechStemFilterFactory"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- Danish -->
+    <dynamicField name="*_txt_da" type="text_da"  indexed="true"  stored="true"/>
+    <fieldType name="text_da" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_da.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Danish"/>       
+      </analyzer>
+    </fieldType>
+    
+    <!-- German -->
+    <dynamicField name="*_txt_de" type="text_de"  indexed="true"  stored="true"/>
+    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_de.txt" format="snowball" />
+        <filter class="solr.GermanNormalizationFilterFactory"/>
+        <filter class="solr.GermanLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.GermanMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="German2"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Greek -->
+    <dynamicField name="*_txt_el" type="text_el"  indexed="true"  stored="true"/>
+    <fieldType name="text_el" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- greek specific lowercase for sigma -->
+        <filter class="solr.GreekLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_el.txt" />
+        <filter class="solr.GreekStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Spanish -->
+    <dynamicField name="*_txt_es" type="text_es"  indexed="true"  stored="true"/>
+    <fieldType name="text_es" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_es.txt" format="snowball" />
+        <filter class="solr.SpanishLightStemFilterFactory"/>
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Spanish"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Basque -->
+    <dynamicField name="*_txt_eu" type="text_eu"  indexed="true"  stored="true"/>
+    <fieldType name="text_eu" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_eu.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Basque"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Persian -->
+    <dynamicField name="*_txt_fa" type="text_fa"  indexed="true"  stored="true"/>
+    <fieldType name="text_fa" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <!-- for ZWNJ -->
+        <charFilter class="solr.PersianCharFilterFactory"/>
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.ArabicNormalizationFilterFactory"/>
+        <filter class="solr.PersianNormalizationFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fa.txt" />
+      </analyzer>
+    </fieldType>
+    
+    <!-- Finnish -->
+    <dynamicField name="*_txt_fi" type="text_fi"  indexed="true"  stored="true"/>
+    <fieldType name="text_fi" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fi.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Finnish"/>
+        <!-- less aggressive: <filter class="solr.FinnishLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- French -->
+    <dynamicField name="*_txt_fr" type="text_fr"  indexed="true"  stored="true"/>
+    <fieldType name="text_fr" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_fr.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_fr.txt" format="snowball" />
+        <filter class="solr.FrenchLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.FrenchMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="French"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Irish -->
+    <dynamicField name="*_txt_ga" type="text_ga"  indexed="true"  stored="true"/>
+    <fieldType name="text_ga" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes d', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_ga.txt"/>
+        <!-- removes n-, etc. (position increments intentionally not preserved) -->
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/hyphenations_ga.txt"/>
+        <filter class="solr.IrishLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ga.txt"/>
+        <filter class="solr.SnowballPorterFilterFactory" language="Irish"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Galician -->
+    <dynamicField name="*_txt_gl" type="text_gl"  indexed="true"  stored="true"/>
+    <fieldType name="text_gl" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_gl.txt" />
+        <filter class="solr.GalicianStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.GalicianMinimalStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Hindi -->
+    <dynamicField name="*_txt_hi" type="text_hi"  indexed="true"  stored="true"/>
+    <fieldType name="text_hi" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <!-- normalizes unicode representation -->
+        <filter class="solr.IndicNormalizationFilterFactory"/>
+        <!-- normalizes variation in spelling -->
+        <filter class="solr.HindiNormalizationFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hi.txt" />
+        <filter class="solr.HindiStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Hungarian -->
+    <dynamicField name="*_txt_hu" type="text_hu"  indexed="true"  stored="true"/>
+    <fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hu.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/>
+        <!-- less aggressive: <filter class="solr.HungarianLightStemFilterFactory"/> -->   
+      </analyzer>
+    </fieldType>
+    
+    <!-- Armenian -->
+    <dynamicField name="*_txt_hy" type="text_hy"  indexed="true"  stored="true"/>
+    <fieldType name="text_hy" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_hy.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Armenian"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Indonesian -->
+    <dynamicField name="*_txt_id" type="text_id"  indexed="true"  stored="true"/>
+    <fieldType name="text_id" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_id.txt" />
+        <!-- for a less aggressive approach (only inflectional suffixes), set stemDerivational to false -->
+        <filter class="solr.IndonesianStemFilterFactory" stemDerivational="true"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Italian -->
+  <dynamicField name="*_txt_it" type="text_it"  indexed="true"  stored="true"/>
+  <fieldType name="text_it" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <!-- removes l', etc -->
+        <filter class="solr.ElisionFilterFactory" ignoreCase="true" articles="lang/contractions_it.txt"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_it.txt" format="snowball" />
+        <filter class="solr.ItalianLightStemFilterFactory"/>
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Italian"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Japanese using morphological analysis (see text_cjk for a configuration using bigramming)
+
+         NOTE: If you want to optimize search for precision, use the default operator AND in your query
+         parser config with <solrQueryParser defaultOperator="AND"/> further down in this file.  Use
+         OR if you would like to optimize for recall (the default).
+    -->
+    <dynamicField name="*_txt_ja" type="text_ja"  indexed="true"  stored="true"/>
+    <fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
+      <analyzer>
+        <!-- Kuromoji Japanese morphological analyzer/tokenizer (JapaneseTokenizer)
+
+           Kuromoji has a search mode (default) that does segmentation useful for search.  A heuristic
+           is used to segment compounds into their parts, and the compound itself is kept as a synonym.
+
+           Valid values for the mode attribute are:
+              normal: regular segmentation
+              search: segmentation useful for search, with compounds kept as synonyms (default)
+            extended: same as search mode, but also unigrams unknown words (experimental)
+
+           For some applications it might be good to use search mode for indexing and normal mode for
+           queries, to reduce recall and prevent parts of compounds from being matched and highlighted.
+           Use <analyzer type="index"> and <analyzer type="query"> for this, with mode="normal" in the query analyzer.
+
+           Kuromoji also has a convenient user dictionary feature that allows overriding the statistical
+           model with your own entries for segmentation, part-of-speech tags and readings without a need
+           to specify weights.  Notice that user dictionaries have not been subject to extensive testing.
+
+           User dictionary attributes are:
+                     userDictionary: user dictionary filename
+             userDictionaryEncoding: user dictionary encoding (default is UTF-8)
+
+           See lang/userdict_ja.txt for a sample user dictionary file.
+
+           Punctuation characters are discarded by default.  Use discardPunctuation="false" to keep them.
+
+           See http://wiki.apache.org/solr/JapaneseLanguageSupport for more on Japanese language support.
+        -->
+        <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
+        <!--<tokenizer class="solr.JapaneseTokenizerFactory" mode="search" userDictionary="lang/userdict_ja.txt"/>-->
+        <!-- Reduces inflected verbs and adjectives to their base/dictionary forms (辞書形) -->
+        <filter class="solr.JapaneseBaseFormFilterFactory"/>
+        <!-- Removes tokens with certain part-of-speech tags -->
+        <filter class="solr.JapanesePartOfSpeechStopFilterFactory" tags="lang/stoptags_ja.txt" />
+        <!-- Normalizes full-width romaji to half-width and half-width kana to full-width (Unicode NFKC subset) -->
+        <filter class="solr.CJKWidthFilterFactory"/>
+        <!-- Removes common tokens that are typically not useful for search and can have a negative effect on ranking -->
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ja.txt" />
+        <!-- Normalizes common katakana spelling variations by removing any last long sound character (U+30FC) -->
+        <filter class="solr.JapaneseKatakanaStemFilterFactory" minimumLength="4"/>
+        <!-- Lower-cases romaji characters -->
+        <filter class="solr.LowerCaseFilterFactory"/>
+      </analyzer>
+    </fieldType>
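+
+    <!-- Sketch only (an assumption, following the note above about split
+         index/query analysis): index with search mode, query with normal mode.
+         <fieldType name="text_ja_split" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="false">
+           <analyzer type="index">
+             <tokenizer class="solr.JapaneseTokenizerFactory" mode="search"/>
+             (same filter chain as text_ja above)
+           </analyzer>
+           <analyzer type="query">
+             <tokenizer class="solr.JapaneseTokenizerFactory" mode="normal"/>
+             (same filter chain as text_ja above)
+           </analyzer>
+         </fieldType>
+    -->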
+    
+    <!-- Latvian -->
+    <dynamicField name="*_txt_lv" type="text_lv"  indexed="true"  stored="true"/>
+    <fieldType name="text_lv" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_lv.txt" />
+        <filter class="solr.LatvianStemFilterFactory"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Dutch -->
+    <dynamicField name="*_txt_nl" type="text_nl"  indexed="true"  stored="true"/>
+    <fieldType name="text_nl" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_nl.txt" format="snowball" />
+        <filter class="solr.StemmerOverrideFilterFactory" dictionary="lang/stemdict_nl.txt" ignoreCase="false"/>
+        <filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Norwegian -->
+    <dynamicField name="*_txt_no" type="text_no"  indexed="true"  stored="true"/>
+    <fieldType name="text_no" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_no.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Norwegian"/>
+        <!-- less aggressive: <filter class="solr.NorwegianLightStemFilterFactory"/> -->
+        <!-- singular/plural: <filter class="solr.NorwegianMinimalStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Portuguese -->
+  <dynamicField name="*_txt_pt" type="text_pt"  indexed="true"  stored="true"/>
+  <fieldType name="text_pt" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_pt.txt" format="snowball" />
+        <filter class="solr.PortugueseLightStemFilterFactory"/>
+        <!-- less aggressive: <filter class="solr.PortugueseMinimalStemFilterFactory"/> -->
+        <!-- more aggressive: <filter class="solr.SnowballPorterFilterFactory" language="Portuguese"/> -->
+        <!-- most aggressive: <filter class="solr.PortugueseStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Romanian -->
+    <dynamicField name="*_txt_ro" type="text_ro"  indexed="true"  stored="true"/>
+    <fieldType name="text_ro" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ro.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Romanian"/>
+      </analyzer>
+    </fieldType>
+    
+    <!-- Russian -->
+    <dynamicField name="*_txt_ru" type="text_ru"  indexed="true"  stored="true"/>
+    <fieldType name="text_ru" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ru.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Russian"/>
+        <!-- less aggressive: <filter class="solr.RussianLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Swedish -->
+    <dynamicField name="*_txt_sv" type="text_sv"  indexed="true"  stored="true"/>
+    <fieldType name="text_sv" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_sv.txt" format="snowball" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Swedish"/>
+        <!-- less aggressive: <filter class="solr.SwedishLightStemFilterFactory"/> -->
+      </analyzer>
+    </fieldType>
+    
+    <!-- Thai -->
+    <dynamicField name="*_txt_th" type="text_th"  indexed="true"  stored="true"/>
+    <fieldType name="text_th" class="solr.TextField" positionIncrementGap="100">
+      <analyzer>
+        <tokenizer class="solr.ThaiTokenizerFactory"/>
+        <filter class="solr.LowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_th.txt" />
+      </analyzer>
+    </fieldType>
+    
+    <!-- Turkish -->
+    <dynamicField name="*_txt_tr" type="text_tr"  indexed="true"  stored="true"/>
+    <fieldType name="text_tr" class="solr.TextField" positionIncrementGap="100">
+      <analyzer> 
+        <tokenizer class="solr.StandardTokenizerFactory"/>
+        <filter class="solr.TurkishLowerCaseFilterFactory"/>
+        <filter class="solr.StopFilterFactory" ignoreCase="false" words="lang/stopwords_tr.txt" />
+        <filter class="solr.SnowballPorterFilterFactory" language="Turkish"/>
+      </analyzer>
+    </fieldType>
+
+  <!-- Similarity is the scoring routine for each document vs. a query.
+       A custom Similarity or SimilarityFactory may be specified here, but 
+       the default is fine for most applications.  
+       For more info: http://wiki.apache.org/solr/SchemaXml#Similarity
+    -->
+  <!--
+     <similarity class="com.example.solr.CustomSimilarityFactory">
+       <str name="paramkey">param value</str>
+     </similarity>
+    -->
+</schema>
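
A minimal sketch of exercising this schema (assumptions for illustration: a
local Solr core named "dataset" built from it, and a hypothetical field name;
*_txt_en is one of the dynamicFields defined above):

    import json
    import urllib2

    # summary_txt_en matches the *_txt_en dynamicField, so it is analyzed as text_en
    doc = {'id': 'example-1', 'summary_txt_en': 'sea surface temperature granules'}
    req = urllib2.Request('http://localhost:8983/solr/dataset/update?commit=true',
                          json.dumps([doc]),
                          {'Content-Type': 'application/json'})
    urllib2.urlopen(req)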


[11/15] incubator-sdap-edge git commit: SDAP-1 Import all code under the SDAP SGA

Posted by le...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/gcmd/dif_template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/gcmd/dif_template.xml b/src/main/python/plugins/dataset/gcmd/dif_template.xml
new file mode 100644
index 0000000..1352024
--- /dev/null
+++ b/src/main/python/plugins/dataset/gcmd/dif_template.xml
@@ -0,0 +1,216 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<DIF xmlns="http://gcmd.gsfc.nasa.gov/Aboutus/xml/dif/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://gcmd.gsfc.nasa.gov/Aboutus/xml/dif/ http://gcmd.nasa.gov/Aboutus/xml/dif/dif_v9.8.2.xsd">
+{% if doc %}
+<Entry_ID>{{ Entry_ID }}</Entry_ID>
+<Entry_Title>{{ Entry_Title }}</Entry_Title>
+{% for citation in Dataset_Citation %}
+<Data_Set_Citation>
+<Dataset_Creator>{{ citation['Dataset_Creator'] }}</Dataset_Creator>
+<Dataset_Title>{{ citation['Dataset_Title'] }}</Dataset_Title>
+{% if citation['Dataset_Series_Name'].strip() %}
+<Dataset_Series_Name>{{ citation['Dataset_Series_Name'] }}</Dataset_Series_Name>
+{% endif %}
+{% if citation['Dataset_Release_Date'].strip() %}
+<Dataset_Release_Date>{{ citation['Dataset_Release_Date'] }}</Dataset_Release_Date>
+{% endif %}
+{% if citation['Dataset_Release_Place'].strip() %}
+<Dataset_Release_Place>{{ citation['Dataset_Release_Place'] }}</Dataset_Release_Place>
+{% endif %}
+{% if citation['Dataset_Publisher'].strip() %}
+<Dataset_Publisher>{{ citation['Dataset_Publisher'] }}</Dataset_Publisher>
+{% endif %}
+<Version>{{ citation['Version'] }}</Version>
+{% if citation['Other_Citation_Details'].strip() %}
+<Other_Citation_Details>{{ citation['Other_Citation_Details'] }}</Other_Citation_Details>
+{% endif %}
+{% if citation['Online_Resource'].strip() %}
+<Online_Resource>{{ citation['Online_Resource'] }}</Online_Resource>
+{% endif %}
+</Data_Set_Citation>
+{% endfor %}
+{% for personnel in Personnel %}
+<Personnel>
+<Role>{{ personnel['Role'] }}</Role>
+<First_Name>{{ personnel['First_Name'] }}</First_Name>
+{% if personnel['Middle_Name'] != 'none' and personnel['Middle_Name'].strip() %}
+<Middle_Name>{{ personnel['Middle_Name'] }}</Middle_Name>
+{% endif %}
+<Last_Name>{{ personnel['Last_Name'] }}</Last_Name>
+<Email>{{ personnel['Email'] }}</Email>
+{% if personnel['Phone'].strip() %}
+<Phone>{{ personnel['Phone'] }}</Phone>
+{% endif %}
+{% if personnel['Fax'].strip() %}
+<Fax>{{ personnel['Fax'] }}</Fax>
+{% endif %}
+</Personnel>
+{% endfor %}
+<Personnel>
+<Role>DIF AUTHOR</Role>
+<First_Name>{{ author['firstname'] }}</First_Name>
+<Last_Name>{{ author['lastname'] }}</Last_Name>
+<Email>{{ author['email'] }}</Email>
+</Personnel>
+{% for parameter in Parameters %}
+<Parameters>
+<Category>{{ parameter['Category'] }}</Category>
+<Topic>{{ parameter['Topic'] }}</Topic>
+<Term>{{ parameter['Term'] }}</Term>
+<Variable_Level_1>{{ parameter['Variable_Level_1'] }}</Variable_Level_1>
+{% if parameter['Detailed_Variable'].strip() %}
+<Detailed_Variable>{{ parameter['Detailed_Variable'] }}</Detailed_Variable>
+{% endif %}
+</Parameters>
+{% endfor %}
+<ISO_Topic_Category>Oceans</ISO_Topic_Category>
+<ISO_Topic_Category>Geoscientific Information</ISO_Topic_Category>
+{% for i in UniqueDatasetSensor %}
+<Sensor_Name>
+<Short_Name>{{ doc['DatasetSource-Sensor-ShortName'][i] }}</Short_Name>
+{% if doc['DatasetSource-Sensor-LongName'][i].strip() %}
+<Long_Name>{{ doc['DatasetSource-Sensor-LongName'][i] }}</Long_Name>
+{% endif %}
+</Sensor_Name>
+{% endfor %}
+{% for i in UniqueDatasetSource %}
+<Source_Name>
+<Short_Name>{{ doc['DatasetSource-Source-ShortName'][i] }}</Short_Name>
+{% if doc['DatasetSource-Source-LongName'][i].strip() %}
+<Long_Name>{{ doc['DatasetSource-Source-LongName'][i] }}</Long_Name>
+{% endif %}
+</Source_Name>
+{% endfor %}
+<Temporal_Coverage>
+<Start_Date>{{ Start_Date }}</Start_Date>
+{% if Stop_Date %}
+<Stop_Date>{{ Stop_Date }}</Stop_Date>
+{% endif %}
+</Temporal_Coverage>
+<Spatial_Coverage>
+{% if doc['DatasetCoverage-SouthLat'][0].strip() %}
+<Southernmost_Latitude>{{ doc['DatasetCoverage-SouthLat'][0] }}</Southernmost_Latitude>
+{% endif %}
+{% if doc['DatasetCoverage-NorthLat'][0].strip() %}
+<Northernmost_Latitude>{{ doc['DatasetCoverage-NorthLat'][0] }}</Northernmost_Latitude>
+{% endif %}
+{% if doc['DatasetCoverage-WestLon'][0].strip() %}
+<Westernmost_Longitude>{{ doc['DatasetCoverage-WestLon'][0] }}</Westernmost_Longitude>
+{% endif %}
+{% if doc['DatasetCoverage-EastLon'][0].strip() %}
+<Easternmost_Longitude>{{ doc['DatasetCoverage-EastLon'][0] }}</Easternmost_Longitude>
+{% endif %}
+{% if doc['DatasetCoverage-MinAltitude'][0].strip() %}
+<Minimum_Altitude>{{ doc['DatasetCoverage-MinAltitude'][0] }}</Minimum_Altitude>
+{% endif %}
+{% if doc['DatasetCoverage-MaxAltitude'][0].strip() %}
+<Maximum_Altitude>{{ doc['DatasetCoverage-MaxAltitude'][0] }}</Maximum_Altitude>
+{% endif %}
+{% if doc['DatasetCoverage-MinDepth'][0].strip() %}
+<Minimum_Depth>{{ doc['DatasetCoverage-MinDepth'][0] }}</Minimum_Depth>
+{% endif %}
+{% if doc['DatasetCoverage-MaxDepth'][0].strip() %}
+<Maximum_Depth>{{ doc['DatasetCoverage-MaxDepth'][0] }}</Maximum_Depth>
+{% endif %}
+</Spatial_Coverage>
+<Location>
+<Location_Category>Geographic Region</Location_Category>
+{% if 'DatasetRegion-Region' in doc %}
+<Location_Type>{{ doc['DatasetRegion-Region'][0] }}</Location_Type>
+{% endif %}
+</Location>
+<Data_Resolution>
+{% if doc['Dataset-LatitudeResolution'][0].strip() %}
+<Latitude_Resolution>{{ doc['Dataset-LatitudeResolution'][0] }}</Latitude_Resolution>
+{% endif %}
+{% if doc['Dataset-LongitudeResolution'][0].strip() %}
+<Longitude_Resolution>{{ doc['Dataset-LongitudeResolution'][0] }}</Longitude_Resolution>
+{% endif %}
+{% if doc['Dataset-HorizontalResolutionRange'][0].strip() %}
+<Horizontal_Resolution_Range>{{ doc['Dataset-HorizontalResolutionRange'][0] }}</Horizontal_Resolution_Range>
+{% endif %}
+{% if doc['Dataset-TemporalResolution'][0].strip() %}
+<Temporal_Resolution>{{ doc['Dataset-TemporalResolution'][0] }}</Temporal_Resolution>
+{% endif %}
+{% if doc['Dataset-TemporalResolutionRange'][0].strip() %}
+<Temporal_Resolution_Range>{{ doc['Dataset-TemporalResolutionRange'][0] }}</Temporal_Resolution_Range>
+{% endif %}
+</Data_Resolution>
+{% for project in Project %}
+<Project>
+<Short_Name>{{ project['Short_Name'] }}</Short_Name>
+<Long_Name>{{ project['Long_Name'] }}</Long_Name>
+</Project>
+{% endfor %}
+<Access_Constraints>{{ doc['DatasetPolicy-AccessConstraint'][0] }}</Access_Constraints>
+<Use_Constraints>{{ doc['DatasetPolicy-UseConstraint'][0] }}</Use_Constraints>
+<Data_Set_Language>English</Data_Set_Language>
+{% if doc['Dataset-OriginalProvider'][0].strip() %}
+<Originating_Center>{{ doc['Dataset-OriginalProvider'][0] }}</Originating_Center>
+{% endif %}
+{% macro buildDataCenter(dataCenter) %}
+<Data_Center>
+<Data_Center_Name>
+<Short_Name>{{ dataCenter['shortname'] }}</Short_Name>
+<Long_Name>{{ dataCenter['longname'] }}</Long_Name>
+</Data_Center_Name>
+<Data_Center_URL>{{ dataCenter['url'] }}</Data_Center_URL>
+<Personnel>
+<Role>Data Center Contact</Role>
+<First_Name>{{ dataCenter['firstname'] }}</First_Name>
+<Last_Name>{{ dataCenter['lastname'] }}</Last_Name>
+<Email>{{ dataCenter['email'] }}</Email>
+</Personnel>
+</Data_Center>
+{% endmacro %}
+{% if doc['DatasetPolicy-AccessType'][0] != 'REMOTE' %}
+{{ buildDataCenter(podaac) }}
+{% if doc['DatasetProject-Project-ShortName'][0] == 'GHRSST' and doc['DatasetPolicy-DataClass'][0] == 'ROLLING-STORE' %}
+{{ buildDataCenter(nodc) }}
+{% endif %}
+{% else %}
+<Data_Center>
+<Data_Center_Name>
+<Short_Name>{{ doc['Dataset-Provider-ShortName'][0] }}</Short_Name>
+<Long_Name>{{ doc['Dataset-Provider-LongName'][0] }}</Long_Name>
+</Data_Center_Name>
+{% if 'Dataset-Provider-ProviderResource-Path' in doc %}
+<Data_Center_URL>{{ doc['Dataset-Provider-ProviderResource-Path'][0] }}</Data_Center_URL>
+{% endif %}
+{% if doc['Dataset-ProviderDatasetName'][0].strip() %}
+<Data_Set_ID>{{ doc['Dataset-ProviderDatasetName'][0] }}</Data_Set_ID>
+{% endif %}
+{% if Provider_Personnel %}
+<Personnel>
+<Role>Data Center Contact</Role>
+<First_Name>{{ Provider_Personnel['First_Name'] }}</First_Name>
+{% if Provider_Personnel['Middle_Name'] != 'none' and Provider_Personnel['Middle_Name'].strip() %}
+<Middle_Name>{{ Provider_Personnel['Middle_Name'] }}</Middle_Name>
+{% endif %}
+<Last_Name>{{ Provider_Personnel['Last_Name'] }}</Last_Name>
+<Email>{{ Provider_Personnel['Email'] }}</Email>
+{% if Provider_Personnel['Phone'].strip() %}
+<Phone>{{ Provider_Personnel['Phone'] }}</Phone>
+{% endif %}
+{% if Provider_Personnel['Fax'].strip() %}
+<Fax>{{ Provider_Personnel['Fax'] }}</Fax>
+{% endif %}
+</Personnel>
+{% endif %}
+</Data_Center>
+{% endif %}
+{% if doc['Dataset-Reference'][0].strip() %}
+<Reference>{{ doc['Dataset-Reference'][0] }}</Reference>
+{% endif %}
+<Summary>
+<Abstract>{{ doc['Dataset-Description'][0] }}</Abstract>
+</Summary>
+<IDN_Node>
+<Short_Name>USA/NASA</Short_Name>
+</IDN_Node>
+<Metadata_Name>CEOS IDN DIF</Metadata_Name>
+<Metadata_Version>9.8</Metadata_Version>
+<DIF_Creation_Date>{{ DIF_Creation_Date }}</DIF_Creation_Date>
+<Last_DIF_Revision_Date>{{ Last_DIF_Revision_Date }}</Last_DIF_Revision_Date>
+<DIF_Revision_History>{{ DIF_Revision_History }}</DIF_Revision_History>
+{% endif %}
+</DIF>
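
A minimal sketch of rendering this template with Jinja2 (an illustration, not
how the plugin's writer classes drive it; note the body is guarded by
{% if doc %}, so rendering without a Solr document yields only the empty DIF
shell):

    import codecs
    from jinja2 import Template

    with codecs.open('dif_template.xml', encoding='utf-8') as f:
        template = Template(f.read())
    # No 'doc' in the context: {% if doc %} is false, so only the <DIF> element renders.
    print template.render()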

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/gcmd/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/gcmd/plugin.conf b/src/main/python/plugins/dataset/gcmd/plugin.conf
new file mode 100644
index 0000000..07ff23f
--- /dev/null
+++ b/src/main/python/plugins/dataset/gcmd/plugin.conf
@@ -0,0 +1,32 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890
+host=localhost:8890
+template=dif_template.xml
+
+[author]
+firstName=PO.DAAC
+lastName=User Services
+email=podaac@podaac.jpl.nasa.gov
+
+[podaac]
+shortName=NASA/JPL/PODAAC
+longName=Physical Oceanography Distributed Active Archive Center, Jet Propulsion Laboratory, NASA
+firstName=PO.DAAC
+lastName=User Services
+email=podaac@podaac.jpl.nasa.gov
+url=http://podaac.jpl.nasa.gov/
+
+[nodc]
+shortName=DOC/NOAA/NESDIS/NODC
+longName=National Oceanographic Data Center, NESDIS, NOAA, U.S. Department of Commerce
+firstName=NODC
+lastName=User Services
+email=NODC.Services@noaa.gov
+url=http://www.nodc.noaa.gov/
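
A minimal sketch of how this configuration is read (an assumption for
illustration; the plugin's writer classes access the same values through
their self._configuration object):

    import ConfigParser  # Python 2, matching this code base

    config = ConfigParser.ConfigParser()
    config.read('plugin.conf')
    print config.get('service', 'template')   # -> dif_template.xml
    print config.get('solr', 'datasetUrl')    # -> http://localhost:8983/solr.war/dataset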

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/iso/IsoWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/iso/IsoWriter.py b/src/main/python/plugins/dataset/iso/IsoWriter.py
new file mode 100644
index 0000000..a93bd76
--- /dev/null
+++ b/src/main/python/plugins/dataset/iso/IsoWriter.py
@@ -0,0 +1,28 @@
+import logging
+import os
+import os.path
+import codecs
+
+from edge.opensearch.datasetisoresponse import DatasetIsoResponse
+from edge.opensearch.datasetwriter import DatasetWriter
+
+class IsoWriter(DatasetWriter):
+    def __init__(self, configFilePath):
+        super(IsoWriter, self).__init__(configFilePath)
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
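+        # Render the ISO metadata template against the Solr response via
+        # DatasetIsoResponse; the search* arguments are unused by this writer.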
+        response = DatasetIsoResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _readTemplate(self, path):
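+        # Read the template file as UTF-8 text (codecs.open for Python 2 compatibility).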
+        templateFile = codecs.open(path, encoding='utf-8')
+        data = templateFile.read()
+        templateFile.close()
+
+        return data
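
For context, a minimal sketch of instantiating this writer directly (the
configuration path is an illustrative assumption; in Edge the request
handler framework constructs and drives writers):

    writer = IsoWriter('src/main/python/plugins/dataset/iso/plugin.conf')
    print writer.template[:80]  # the raw ISO template text is now loaded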

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/iso/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/iso/__init__.py b/src/main/python/plugins/dataset/iso/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/iso/iso_template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/iso/iso_template.xml b/src/main/python/plugins/dataset/iso/iso_template.xml
new file mode 100644
index 0000000..3c40854
--- /dev/null
+++ b/src/main/python/plugins/dataset/iso/iso_template.xml
@@ -0,0 +1,587 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<gmd:DS_Series xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.isotc211.org/2005/gmd http://www.ngdc.noaa.gov/metadata/published/xsd/schema.xsd" xmlns:gmd="http://www.isotc211.org/2005/gmd" xmlns:gco="http://www.isotc211.org/2005/gco" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:gml="http://www.opengis.net/gml" xmlns:gsr="http://www.isotc211.org/2005/gsr" xmlns:gss="http://www.isotc211.org/2005/gss" xmlns:gts="http://www.isotc211.org/2005/gts" xmlns:gmx="http://www.isotc211.org/2005/gmx" xmlns:gmi="http://www.isotc211.org/2005/gmi">
+{% if doc %}
+<gmd:composedOf gco:nilReason="inapplicable" />
+<gmd:seriesMetadata>
+<gmi:MI_Metadata id="{{ doc['Dataset-ShortName'][0] }}">
+<gmd:fileIdentifier>
+<gco:CharacterString>{{ doc['Dataset-ShortName'][0] }}</gco:CharacterString>
+</gmd:fileIdentifier>
+<gmd:language>
+<gco:CharacterString>eng</gco:CharacterString>
+</gmd:language>
+<gmd:characterSet>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterSet>
+<gmd:hierarchyLevel>
+<gmd:MD_ScopeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ScopeCode" codeListValue="series">series</gmd:MD_ScopeCode>
+</gmd:hierarchyLevel>
+<gmd:contact>
+<gmd:CI_ResponsibleParty id="seriesMetadataContact">
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-FirstName'][0] }}{% if doc['DatasetContact-Contact-MiddleName'][0] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][0] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Provider-ShortName'][0] }} &gt; {{ doc['DatasetContact-Contact-Provider-LongName'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:positionName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Role'][0] }}</gco:CharacterString>
+</gmd:positionName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:phone>
+<gmd:CI_Telephone>
+<gmd:voice>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Phone'][0] }}</gco:CharacterString>
+</gmd:voice>
+<gmd:facsimile>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Fax'][0] }}</gco:CharacterString>
+</gmd:facsimile>
+</gmd:CI_Telephone>
+</gmd:phone>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:electronicMailAddress>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Email'][0] }}</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+<gmd:contactInstructions>
+<gco:CharacterString>Phone/FAX/E-mail</gco:CharacterString>
+</gmd:contactInstructions>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="pointOfContact">pointOfContact</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:contact>
+<gmd:dateStamp>
+<gco:Date>{{ DateStamp }}</gco:Date>
+</gmd:dateStamp>
+<gmd:metadataStandardName>
+<gco:CharacterString>ISO 19115-2 Geographic information — Metadata — Part 2: Extensions for imagery and gridded data</gco:CharacterString>
+</gmd:metadataStandardName>
+<gmd:metadataStandardVersion>
+<gco:CharacterString>ISO 19115-2:2009-02-15</gco:CharacterString>
+</gmd:metadataStandardVersion>
+<gmd:locale>
+<gmd:PT_Locale>
+<gmd:languageCode>
+<gmd:LanguageCode codeList="http://www.loc.gov/standards/iso639-2/php/English_list.php" codeListValue="eng">eng</gmd:LanguageCode>
+</gmd:languageCode>
+<gmd:country>
+<gmd:Country codeList="http://www.iso.org/iso/iso_3166-1_list_en.zip" codeListValue="US">US</gmd:Country>
+</gmd:country>
+<gmd:characterEncoding>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterEncoding>
+</gmd:PT_Locale>
+</gmd:locale>
+<gmd:metadataExtensionInfo>
+<gmd:MD_MetadataExtensionInformation>
+<gmd:extensionOnLineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://www.ngdc.noaa.gov/metadata/published/19115/GHRSST/ISO/CoverageExtensions.xml</gmd:URL>
+</gmd:linkage>
+<gmd:applicationProfile>
+<gco:CharacterString>Web Browser</gco:CharacterString>
+</gmd:applicationProfile>
+<gmd:description>
+<gco:CharacterString>A description of extensions developed at NGDC to classify coverages.</gco:CharacterString>
+</gmd:description>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode" codeListValue="information">information</gmd:CI_OnLineFunctionCode>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:extensionOnLineResource>
+</gmd:MD_MetadataExtensionInformation>
+</gmd:metadataExtensionInfo>
+<gmd:identificationInfo>
+<gmd:MD_DataIdentification id="seriesIdentification">
+<gmd:citation>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>{{ doc['Dataset-LongName'][0] }}</gco:CharacterString>
+</gmd:title>
+<gmd:alternateTitle>
+<gco:CharacterString>{{ doc['DatasetCitation-Title'][0] }}</gco:CharacterString>
+</gmd:alternateTitle>
+<gmd:date>
+<gmd:CI_Date>
+<gmd:date>
+<gco:Date>{{ DatasetCitation_ReleaseDate }}</gco:Date>
+</gmd:date>
+<gmd:dateType>
+<gmd:CI_DateTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_DateTypeCode" codeListValue="creation">creation</gmd:CI_DateTypeCode>
+</gmd:dateType>
+</gmd:CI_Date>
+</gmd:date>
+<gmd:edition>
+<gco:CharacterString>{{ doc['DatasetCitation-Version'][0] }}</gco:CharacterString>
+</gmd:edition>
+<gmd:citedResponsibleParty>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetCitation-Creator'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="originator">originator</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:citedResponsibleParty>
+<gmd:citedResponsibleParty>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetCitation-Publisher'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:city>
+<gco:CharacterString>{{ doc['DatasetCitation-ReleasePlace'][0] }}</gco:CharacterString>
+</gmd:city>
+</gmd:CI_Address>
+</gmd:address>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="publisher">publisher</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:citedResponsibleParty>
+</gmd:CI_Citation>
+</gmd:citation>
+<gmd:abstract>
+<gco:CharacterString>{{ doc['Dataset-Description'][0] }}</gco:CharacterString>
+</gmd:abstract>
+<gmd:credit>
+<gco:CharacterString>{{ doc['DatasetCitation-CitationDetail'][0] }}</gco:CharacterString>
+</gmd:credit>
+<gmd:status>
+<gmd:MD_ProgressCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_ProgressCode" codeListValue="onGoing">onGoing</gmd:MD_ProgressCode>
+</gmd:status>
+<gmd:pointOfContact>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-FirstName'][0] }}{% if doc['DatasetContact-Contact-MiddleName'][0] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][0] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][0] }}</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Provider-ShortName'][0] }} &gt; {{ doc['DatasetContact-Contact-Provider-LongName'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:phone>
+<gmd:CI_Telephone>
+<gmd:voice>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Phone'][0] }}</gco:CharacterString>
+</gmd:voice>
+<gmd:facsimile>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Fax'][0] }}</gco:CharacterString>
+</gmd:facsimile>
+</gmd:CI_Telephone>
+</gmd:phone>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:electronicMailAddress>
+<gco:CharacterString>{{ doc['DatasetContact-Contact-Email'][0] }}</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="pointOfContact">pointOfContact</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:pointOfContact>
+<gmd:resourceFormat>
+<gmd:MD_Format id="resourceFormat">
+<gmd:name>
+<gco:CharacterString>{{ doc['DatasetPolicy-DataFormat'][0] }}</gco:CharacterString>
+</gmd:name>
+<gmd:version>
+<gco:CharacterString>{{ DatasetPolicy_DataFormat_Version }}</gco:CharacterString>
+</gmd:version>
+<gmd:fileDecompressionTechnique>
+<gco:CharacterString>{{ doc['DatasetPolicy-CompressType'][0] }}</gco:CharacterString>
+</gmd:fileDecompressionTechnique>
+</gmd:MD_Format>
+</gmd:resourceFormat>
+{% for i in range(doc['DatasetParameter-Category']|count) %}
+<gmd:descriptiveKeywords>
+<gmd:MD_Keywords>
+<gmd:keyword>
+<gco:CharacterString>{{ doc['DatasetParameter-Category'][i] }} &gt; {{ doc['DatasetParameter-Topic'][i] }} &gt; {{ doc['DatasetParameter-Term'][i] }} &gt; {{ doc['DatasetParameter-Variable'][i] }}{% if doc['DatasetParameter-VariableDetail'][i] != ''  %} &gt; {{ doc['DatasetParameter-VariableDetail'][i] }}{% endif %}</gco:CharacterString>
+</gmd:keyword>
+<gmd:type>
+<gmd:MD_KeywordTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode" codeListValue="theme">theme</gmd:MD_KeywordTypeCode>
+</gmd:type>
+<gmd:thesaurusName>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>NASA/GCMD Earth Science Keywords</gco:CharacterString>
+</gmd:title>
+<gmd:date gco:nilReason="unknown"/>
+</gmd:CI_Citation>
+</gmd:thesaurusName>
+</gmd:MD_Keywords>
+</gmd:descriptiveKeywords>
+{% endfor %}
+<gmd:descriptiveKeywords>
+<gmd:MD_Keywords>
+<gmd:keyword>
+<gco:CharacterString>{{ doc['DatasetRegion-Region'][0] }}</gco:CharacterString>
+</gmd:keyword>
+<gmd:type>
+<gmd:MD_KeywordTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_KeywordTypeCode" codeListValue="place">place</gmd:MD_KeywordTypeCode>
+</gmd:type>
+<gmd:thesaurusName>
+<gmd:CI_Citation>
+<gmd:title>
+<gco:CharacterString>NASA/GCMD Location Keywords</gco:CharacterString>
+</gmd:title>
+<gmd:date gco:nilReason="unknown"/>
+</gmd:CI_Citation>
+</gmd:thesaurusName>
+</gmd:MD_Keywords>
+</gmd:descriptiveKeywords>
+<gmd:resourceConstraints>
+<gmd:MD_LegalConstraints>
+<gmd:useLimitation>
+<gco:CharacterString>{{ doc['DatasetPolicy-UseConstraint'][0] }}</gco:CharacterString>
+</gmd:useLimitation>
+<gmd:otherConstraints>
+<gco:CharacterString>{{ doc['DatasetPolicy-AccessConstraint'][0] }}</gco:CharacterString>
+</gmd:otherConstraints>
+</gmd:MD_LegalConstraints>
+</gmd:resourceConstraints>
+<gmd:spatialRepresentationType>
+<gmd:MD_SpatialRepresentationTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_SpatialRepresentationTypeCode" codeListValue="grid">grid</gmd:MD_SpatialRepresentationTypeCode>
+</gmd:spatialRepresentationType>
+<gmd:language>
+<gco:CharacterString>eng</gco:CharacterString>
+</gmd:language>
+<gmd:characterSet>
+<gmd:MD_CharacterSetCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CharacterSetCode" codeListValue="UTF8">UTF8</gmd:MD_CharacterSetCode>
+</gmd:characterSet>
+<gmd:extent>
+<gmd:EX_Extent id="boundingExtent">
+<gmd:geographicElement>
+<gmd:EX_GeographicBoundingBox id="boundingBox">
+<gmd:extentTypeCode>
+<gco:Boolean>true</gco:Boolean>
+</gmd:extentTypeCode>
+<gmd:westBoundLongitude>
+<gco:Decimal>{{ doc['DatasetCoverage-WestLon'][0] }}</gco:Decimal>
+</gmd:westBoundLongitude>
+<gmd:eastBoundLongitude>
+<gco:Decimal>{{ doc['DatasetCoverage-EastLon'][0] }}</gco:Decimal>
+</gmd:eastBoundLongitude>
+<gmd:southBoundLatitude>
+<gco:Decimal>{{ doc['DatasetCoverage-SouthLat'][0] }}</gco:Decimal>
+</gmd:southBoundLatitude>
+<gmd:northBoundLatitude>
+<gco:Decimal>{{ doc['DatasetCoverage-NorthLat'][0] }}</gco:Decimal>
+</gmd:northBoundLatitude>
+</gmd:EX_GeographicBoundingBox>
+</gmd:geographicElement>
+<gmd:geographicElement>
+<gmd:EX_GeographicDescription>
+<gmd:extentTypeCode>
+<gco:Boolean>true</gco:Boolean>
+</gmd:extentTypeCode>
+<gmd:geographicIdentifier>
+<gmd:MD_Identifier>
+<gmd:code/>
+</gmd:MD_Identifier>
+</gmd:geographicIdentifier>
+</gmd:EX_GeographicDescription>
+</gmd:geographicElement>
+<gmd:temporalElement>
+<gmd:EX_TemporalExtent id="temporalExtent">
+<gmd:extent>
+<TimePeriod xmlns="http://www.opengis.net/gml/3.2" xmlns:ns1="http://www.opengis.net/gml/3.2" ns1:id="timePeriod">
+<beginPosition>{{ DatasetCoverage_StartTime }}</beginPosition>
+<endPosition>{{ DatasetCoverage_StopTime }}</endPosition>
+</TimePeriod>
+</gmd:extent>
+</gmd:EX_TemporalExtent>
+</gmd:temporalElement>
+<gmd:verticalElement gco:nilReason="inapplicable"/>
+</gmd:EX_Extent>
+</gmd:extent>
+</gmd:MD_DataIdentification>
+</gmd:identificationInfo>
+<gmd:contentInfo>
+<gmi:MI_CoverageDescription id="referenceInformation">
+<gmd:attributeDescription>
+<gco:RecordType xlink:href="http://www.ghrsst.org/documents.htm?parent=475"/>
+</gmd:attributeDescription>
+<gmd:contentType>
+<gmd:MD_CoverageContentTypeCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_CoverageContentTypeCode" codeListValue="referenceInformation">referenceInformation</gmd:MD_CoverageContentTypeCode>
+</gmd:contentType>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>lat</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>float</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>lon</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>float</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+<gmd:dimension>
+<gmd:MD_Band>
+<gmd:sequenceIdentifier>
+<gco:MemberName>
+<gco:aName>
+<gco:CharacterString>time</gco:CharacterString>
+</gco:aName>
+<gco:attributeType>
+<gco:TypeName>
+<gco:aName>
+<gco:CharacterString>int</gco:CharacterString>
+</gco:aName>
+</gco:TypeName>
+</gco:attributeType>
+</gco:MemberName>
+</gmd:sequenceIdentifier>
+</gmd:MD_Band>
+</gmd:dimension>
+</gmi:MI_CoverageDescription>
+</gmd:contentInfo>
+<gmd:distributionInfo>
+<gmd:MD_Distribution>
+<gmd:distributionFormat xlink:href="#resourceFormat"/>
+<gmd:distributor>
+<gmd:MD_Distributor>
+<gmd:distributorContact>
+<gmd:CI_ResponsibleParty>
+<gmd:individualName>
+<gco:CharacterString>PO.DAAC User Services</gco:CharacterString>
+</gmd:individualName>
+<gmd:organisationName>
+<gco:CharacterString>NASA/JPL/PODAAC &gt; Physical Oceanography Distributed Active Archive Center, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:address>
+<gmd:CI_Address>
+<gmd:deliveryPoint>
+<gco:CharacterString>4800 Oak Grove Drive</gco:CharacterString>
+</gmd:deliveryPoint>
+<gmd:city>
+<gco:CharacterString>Pasadena</gco:CharacterString>
+</gmd:city>
+<gmd:administrativeArea>
+<gco:CharacterString>CA</gco:CharacterString>
+</gmd:administrativeArea>
+<gmd:postalCode>
+<gco:CharacterString>91109-8099</gco:CharacterString>
+</gmd:postalCode>
+<gmd:country>
+<gco:CharacterString>USA</gco:CharacterString>
+</gmd:country>
+<gmd:electronicMailAddress>
+<gco:CharacterString>podaac@podaac.jpl.nasa.gov</gco:CharacterString>
+</gmd:electronicMailAddress>
+</gmd:CI_Address>
+</gmd:address>
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://podaac.jpl.nasa.gov</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="distributor">distributor</gmd:CI_RoleCode>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmd:distributorContact>
+</gmd:MD_Distributor>
+</gmd:distributor>
+{% for i in range(doc['DatasetResource-Path']|count) if doc['DatasetResource-Type'][i] != 'Thumbnail' %}
+<gmd:transferOptions>
+<gmd:MD_DigitalTransferOptions>
+<gmd:onLine>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetResource-Path'][i] }}</gmd:URL>
+</gmd:linkage>
+<gmd:name>
+<gco:CharacterString>{{ doc['DatasetResource-Name'][i] }}</gco:CharacterString>
+</gmd:name>
+<gmd:description>
+<gco:CharacterString>{{ doc['DatasetResource-Description'][i] }}</gco:CharacterString>
+</gmd:description>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnlineFunctionCode" codeListValue="information">information</gmd:CI_OnLineFunctionCode>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:onLine>
+</gmd:MD_DigitalTransferOptions>
+</gmd:transferOptions>
+{% endfor %}
+</gmd:MD_Distribution>
+</gmd:distributionInfo>
+<gmd:metadataMaintenance>
+<gmd:MD_MaintenanceInformation>
+<gmd:maintenanceAndUpdateFrequency>
+<gmd:MD_MaintenanceFrequencyCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#MD_MaintenanceFrequencyCode" codeListValue="asNeeded">asNeeded</gmd:MD_MaintenanceFrequencyCode>
+</gmd:maintenanceAndUpdateFrequency>
+<gmd:maintenanceNote>
+<gco:CharacterString>Translated from GCMD DIF </gco:CharacterString>
+</gmd:maintenanceNote>
+</gmd:MD_MaintenanceInformation>
+</gmd:metadataMaintenance>
+<gmi:acquisitionInformation>
+<gmi:MI_AcquisitionInformation>
+{% for i in UniqueDatasetSensor %}
+<gmi:instrument>
+<gmi:MI_Instrument>
+<gmi:identifier>
+<gmd:MD_Identifier>
+<gmd:code>
+<gco:CharacterString>{{ doc['DatasetSource-Sensor-ShortName'][i] }} &gt; {{ doc['DatasetSource-Sensor-LongName'][i] }}</gco:CharacterString>
+</gmd:code>
+</gmd:MD_Identifier>
+</gmi:identifier>
+<gmi:type>
+<gco:CharacterString>sensor</gco:CharacterString>
+</gmi:type>
+<gmi:description>
+<gco:CharacterString>{{ doc['DatasetSource-Sensor-Description'][i] }}</gco:CharacterString>
+</gmi:description>
+</gmi:MI_Instrument>
+</gmi:instrument>
+{% endfor %}
+{% for i in UniqueDatasetSource %}
+<gmi:platform>
+<gmi:MI_Platform>
+<gmi:identifier>
+<gmd:MD_Identifier>
+<gmd:code>
+<gco:CharacterString>{{ doc['DatasetSource-Source-ShortName'][i] }} &gt; {{ doc['DatasetSource-Source-LongName'][i] }}</gco:CharacterString>
+</gmd:code>
+</gmd:MD_Identifier>
+</gmi:identifier>
+<gmi:description>
+<gco:CharacterString>{{ doc['DatasetSource-Source-Description'][i] }}</gco:CharacterString>
+</gmi:description>
+<gmi:sponsor>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>{{ doc['DatasetCitation-Creator'][0] }}</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+{% if (doc['DatasetCitation-OnlineResource'][0] | trim)[0:4] == 'http' or (doc['DatasetCitation-OnlineResource'][0] | trim)[0:3] == 'ftp' %}
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>{{ doc['DatasetCitation-OnlineResource'][0] }}</gmd:URL>
+</gmd:linkage>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+{% else %}
+<gmd:onlineResource gco:nilReason="missing"/>
+{% endif %}
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="sponsor"/>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmi:sponsor>
+<gmi:sponsor>
+<gmd:CI_ResponsibleParty>
+<gmd:organisationName>
+<gco:CharacterString>NASA/JPL/PODAAC &gt; Physical Oceanography Distributed Active Archive Center, Jet Propulsion Laboratory, NASA</gco:CharacterString>
+</gmd:organisationName>
+<gmd:contactInfo>
+<gmd:CI_Contact>
+<gmd:onlineResource>
+<gmd:CI_OnlineResource>
+<gmd:linkage>
+<gmd:URL>http://podaac.jpl.nasa.gov</gmd:URL>
+</gmd:linkage>
+<gmd:function>
+<gmd:CI_OnLineFunctionCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_OnLineFunctionCode" codeListValue="information"/>
+</gmd:function>
+</gmd:CI_OnlineResource>
+</gmd:onlineResource>
+</gmd:CI_Contact>
+</gmd:contactInfo>
+<gmd:role>
+<gmd:CI_RoleCode codeList="http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml#CI_RoleCode" codeListValue="sponsor"/>
+</gmd:role>
+</gmd:CI_ResponsibleParty>
+</gmi:sponsor>
+<gmi:instrument xlink:href="{{ doc['DatasetSource-Source-ShortName'][i] }}"/>
+</gmi:MI_Platform>
+</gmi:platform>
+{% endfor %}
+</gmi:MI_AcquisitionInformation>
+</gmi:acquisitionInformation>
+</gmi:MI_Metadata>
+</gmd:seriesMetadata>
+{% endif %}
+</gmd:DS_Series>

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/iso/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/iso/plugin.conf b/src/main/python/plugins/dataset/iso/plugin.conf
new file mode 100644
index 0000000..41e5e71
--- /dev/null
+++ b/src/main/python/plugins/dataset/iso/plugin.conf
@@ -0,0 +1,11 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890
+host=localhost:8890
+template=iso_template.xml

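The [solr], [portal], and [service] sections above are read through Python's standard ConfigParser; the actual loading happens in the framework's writer base classes, so the following is only a minimal sketch of the same lookups, with an assumed file path:

  import ConfigParser  # Python 2.x, matching the plugin code

  config = ConfigParser.RawConfigParser()
  config.read('src/main/python/plugins/dataset/iso/plugin.conf')
  solr_url = config.get('solr', 'datasetUrl')         # http://localhost:8983/solr.war/dataset
  per_page = config.getint('solr', 'entriesPerPage')  # 7
  template = config.get('service', 'template')        # iso_template.xml
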
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/rss/RssWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/rss/RssWriter.py b/src/main/python/plugins/dataset/rss/RssWriter.py
new file mode 100644
index 0000000..3724528
--- /dev/null
+++ b/src/main/python/plugins/dataset/rss/RssWriter.py
@@ -0,0 +1,18 @@
+import logging
+
+from edge.opensearch.datasetrssresponse import DatasetRssResponse
+from edge.opensearch.datasetwriter import DatasetWriter
+
+class RssWriter(DatasetWriter):
+    def __init__(self, configFilePath):
+        super(RssWriter, self).__init__(configFilePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = DatasetRssResponse(self._configuration.get('portal', 'datasetUrl'), self._configuration.get('service', 'url'), self.datasets)
+
+        response.title = 'PO.DAAC Dataset Search Results'
+        response.description = 'Search result for "'+searchText+'"'
+        response.link = searchUrl
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty)

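RssWriter illustrates the writer contract used throughout these plugins: a subclass supplies _generateOpenSearchResponse, and the framework invokes it with the raw Solr response plus the search context. A hypothetical minimal writer under the same contract (only the names visible in this file are assumed) could look like:

  from edge.opensearch.datasetwriter import DatasetWriter

  class EchoWriter(DatasetWriter):
      # Illustrative only: return the raw Solr payload unformatted,
      # which can be handy when debugging a new plugin.
      def _generateOpenSearchResponse(self, solrResponse, searchText,
                                      searchUrl, searchParams, pretty):
          return solrResponse
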
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/rss/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/rss/__init__.py b/src/main/python/plugins/dataset/rss/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/dataset/rss/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/dataset/rss/plugin.conf b/src/main/python/plugins/dataset/rss/plugin.conf
new file mode 100644
index 0000000..eccb70e
--- /dev/null
+++ b/src/main/python/plugins/dataset/rss/plugin.conf
@@ -0,0 +1,10 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/__init__.py b/src/main/python/plugins/example/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/elastic/Writer.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/elastic/Writer.py b/src/main/python/plugins/example/elastic/Writer.py
new file mode 100644
index 0000000..f02bda4
--- /dev/null
+++ b/src/main/python/plugins/example/elastic/Writer.py
@@ -0,0 +1,45 @@
+import logging
+import os
+import os.path
+import urllib
+
+from edge.writer.estemplateresponsewriter import ESTemplateResponseWriter
+from edge.response.estemplateresponse import ESTemplateResponse
+
+class Writer(ESTemplateResponseWriter):
+    def __init__(self, configFilePath):
+        super(Writer, self).__init__(configFilePath)
+
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = ESTemplateResponse(searchUrl, searchParams, self._configuration.getint('solr', 'entriesPerPage'))
+        response.setTemplate(self.template)
+
+        return response.generate(solrResponse, pretty=pretty)
+
+    def _constructQuery(self, startIndex, entriesPerPage, parameters, facets):
+        queries = []
+        filterQueries = []
+        sort = None
+
+        for key, value in parameters.iteritems():
+            if value != "":
+                if key == 'keyword':
+                    queries.append(urllib.quote(value))
+        if len(queries) == 0:
+            queries.append('*')
+
+        query = 'q='+'+AND+'.join(queries)+'&from='+str(startIndex)+'&size='+str(entriesPerPage)
+
+        if len(filterQueries) > 0:
+            query += '&fq='+'+AND+'.join(filterQueries)
+
+        if sort is not None:
+            query += '&sort=' + sort
+
+        logging.debug('elasticsearch query: '+query)
+
+        return query

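Tracing _constructQuery above: a single non-empty 'keyword' parameter is URL-quoted and joined into the q term, while the from/size paging arguments come straight from startIndex and entriesPerPage; filterQueries and sort stay empty in this case, so neither optional clause is appended. Assuming a Writer instance w built from the plugin.conf below:

  >>> w._constructQuery(0, 10, {'keyword': 'ocean temperature'}, None)
  'q=ocean%20temperature&from=0&size=10'
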
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/elastic/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/elastic/__init__.py b/src/main/python/plugins/example/elastic/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/elastic/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/elastic/plugin.conf b/src/main/python/plugins/example/elastic/plugin.conf
new file mode 100644
index 0000000..06950a7
--- /dev/null
+++ b/src/main/python/plugins/example/elastic/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:9200/example
+entriesPerPage=10
+maxEntriesPerPage=2000
+defaultSearchParam=keyword
+parameters=keyword
+facets={}
+sortKeys={}
+
+[service]
+url=http://localhost:8890
+template=template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/elastic/template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/elastic/template.xml b/src/main/python/plugins/example/elastic/template.xml
new file mode 100755
index 0000000..bddd11e
--- /dev/null
+++ b/src/main/python/plugins/example/elastic/template.xml
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<feed esipdiscovery:version="1.2" xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/terms/" xmlns:echo="http://www.echo.nasa.gov/esip" xmlns:esipdiscovery="http://commons.esipfed.org/ns/discovery/1.2/" xmlns:georss="http://www.georss.org/georss/10" xmlns:gml="http://www.opengis.net/gml" xmlns:os="http://a9.com/-/spec/opensearch/1.1/" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/">
+<updated>{{ updated }}</updated>
+<id>https://api.echo.nasa.gov:443/opensearch/datasets.atom</id>
+<author>
+<name>ECHO</name>
+<email>support@echo.nasa.gov</email>
+</author>
+<title type="text">ECHO dataset metadata</title>
+<os:totalResults>{{ numFound }}</os:totalResults>
+<os:itemsPerPage>{{ itemsPerPage }}</os:itemsPerPage>
+<os:startIndex>{{ startIndex }}</os:startIndex>
+<os:Query role="request" xmlns:echo="http://www.echo.nasa.gov/esip" xmlns:geo="http://a9.com/-/opensearch/extensions/geo/1.0/" xmlns:time="http://a9.com/-/opensearch/extensions/time/1.0/" />
+<subtitle type="text">Search parameters: None</subtitle>
+<link href="https://api.echo.nasa.gov:443/opensearch/granules/descriptor_document.xml" hreflang="en-US" rel="search" type="application/opensearchdescription+xml" />
+<link href="{{ myself }}" hreflang="en-US" rel="self" type="application/atom+xml" />
+{% if last %}<link href="{{ last }}" hreflang="en-US" rel="last" type="application/atom+xml" />{% endif %}
+{% if prev %}<link href="{{ prev }}" hreflang="en-US" rel="previous" type="application/atom+xml" />{% endif %}
+{% if next %}<link href="{{ next }}" hreflang="en-US" rel="next" type="application/atom+xml" />{% endif %}
+{% if first %}<link href="{{ first }}" hreflang="en-US" rel="first" type="application/atom+xml" />{% endif %}
+<link href="https://wiki.earthdata.nasa.gov/display/echo/Open+Search+API+release+information" hreflang="en-US" rel="describedBy" title="Release Notes" type="text/html" />
+{% for doc in docs %}
+<entry>
+<id>{{ link }}?concept_id={{ doc['_id'] }}</id>
+<dc:identifier>{{ doc['ShortName'] }}</dc:identifier>
+<author>
+<name>ECHO</name>
+<email>support@echo.nasa.gov</email>
+</author>
+<title type="text">{{ doc['LongName'] }}</title>
+<summary type="text">{{ doc['Description'] }}</summary>
+<updated>{{ doc['LastUpdate'] }}</updated>
+</entry>
+{% endfor %}
+</feed>

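The placeholders in this Atom template (updated, numFound, itemsPerPage, startIndex, myself, link, docs, and the optional first/prev/next/last paging links) are filled in by ESTemplateResponse at runtime. A standalone Jinja2 render with made-up values, useful only for eyeballing the output, might look like:

  from jinja2 import Template

  tmpl = Template(open('template.xml').read())
  xml = tmpl.render(
      updated='2017-10-27T00:00:00Z', numFound=1, itemsPerPage=10, startIndex=1,
      myself='http://localhost:8890/example', link='http://localhost:8890/example',
      docs=[{'_id': 'C1-EXAMPLE', 'ShortName': 'EXAMPLE',
             'LongName': 'Example dataset', 'Description': 'demo entry',
             'LastUpdate': '2017-10-27T00:00:00Z'}])

The unset paging variables simply evaluate as false in the {% if %} guards, so their link elements are omitted.
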
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/json/JsonWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/json/JsonWriter.py b/src/main/python/plugins/example/json/JsonWriter.py
new file mode 100644
index 0000000..9fb87b6
--- /dev/null
+++ b/src/main/python/plugins/example/json/JsonWriter.py
@@ -0,0 +1,6 @@
+import requestresponder
+
+class JsonWriter(requestresponder.RequestResponder):
+    def get(self, requestHandler):
+        requestHandler.write('{"test": "aaa"}')
+        

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/example/json/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/example/json/__init__.py b/src/main/python/plugins/example/json/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/__init__.py b/src/main/python/plugins/granule/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/atom/AtomWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/atom/AtomWriter.py b/src/main/python/plugins/granule/atom/AtomWriter.py
new file mode 100644
index 0000000..b22f30f
--- /dev/null
+++ b/src/main/python/plugins/granule/atom/AtomWriter.py
@@ -0,0 +1,27 @@
+import logging
+import datetime
+
+from edge.opensearch.granuleatomresponse import GranuleAtomResponse
+from edge.opensearch.granulewriter import GranuleWriter
+
+class AtomWriter(GranuleWriter):
+    
+    def __init__(self, configFilePath):
+        super(AtomWriter, self).__init__(configFilePath, [['datasetId', 'shortName']])
+
+    def _generateOpenSearchResponse(self, solrResponse, searchText, searchUrl, searchParams, pretty):
+        response = GranuleAtomResponse(
+            self._configuration.get('service', 'linkToGranule'),
+            self._configuration.get('service', 'host'),
+            self._configuration.get('service', 'url')
+        )
+
+        response.title = 'PO.DAAC Granule Search Results'
+        #response.description = 'Search result for "'+searchText+'"'
+        response.link = searchUrl
+        response.authors.append('PO.DAAC Granule Search Service')
+        response.updated = datetime.datetime.utcnow().isoformat()+'Z'
+        response.id = 'tag:'+self._configuration.get('service', 'host')+','+datetime.datetime.utcnow().date().isoformat()
+        response.parameters = searchParams
+
+        return response.generate(solrResponse, pretty) 

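The feed id built above follows the tag-URI convention: the service host from plugin.conf joined with the current UTC date. For the localhost:8890 host configured below, the result would be:

  >>> import datetime
  >>> 'tag:' + 'localhost:8890' + ',' + datetime.datetime.utcnow().date().isoformat()
  'tag:localhost:8890,2017-10-27'  # date portion varies with the current day
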
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/atom/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/atom/__init__.py b/src/main/python/plugins/granule/atom/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/atom/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/atom/plugin.conf b/src/main/python/plugins/granule/atom/plugin.conf
new file mode 100644
index 0000000..3f14b5f
--- /dev/null
+++ b/src/main/python/plugins/granule/atom/plugin.conf
@@ -0,0 +1,12 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+host=localhost:8890
+database=podaac_dev/podaac$dev@DAACDEV
+l2=http://biaxin.jpl.nasa.gov/ws/search/granule
+bbox=l2

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/datacasting/DatacastingWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/datacasting/DatacastingWriter.py b/src/main/python/plugins/granule/datacasting/DatacastingWriter.py
new file mode 100644
index 0000000..c7ae490
--- /dev/null
+++ b/src/main/python/plugins/granule/datacasting/DatacastingWriter.py
@@ -0,0 +1,39 @@
+import logging
+import os
+import os.path
+import time
+
+from edge.opensearch.granuledatacastingresponse import GranuleDatacastingResponse
+from edge.opensearch.datasetgranulewriter import DatasetGranuleWriter
+
+class DatacastingWriter(DatasetGranuleWriter):
+    def __init__(self, configFilePath):
+        super(DatacastingWriter, self).__init__(configFilePath, [['datasetId', 'shortName']])
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+        self.variables['sortBy'] = 'archiveTimeDesc'
+        self.variables['archiveTime'] = int(round(time.time() * 1000)) - (int(self._configuration.get('solr', 'archivedWithin')) * 3600000)
+
+    def _generateOpenSearchResponse(self, solrGranuleResponse, solrDatasetResponse, pretty):
+        response = GranuleDatacastingResponse(
+            self._configuration.get('portal', 'datasetUrl'), 
+            self._configuration.get('service', 'linkToGranule'),
+            int(self._configuration.get('solr', 'archivedWithin'))
+        )
+        response.setTemplate(self.template)
+
+        return response.generate(solrDatasetResponse, solrGranuleResponse, pretty)
+
+    def _onSolrGranuleResponse(self, response):
+        if response.error:
+            self._handleException(str(response.error))
+        else:
+            self.solrGranuleResponse = response.body
+            params = {}
+            if ('datasetId' in self.variables):
+                params['datasetId'] = self.variables['datasetId']
+            if ('shortName' in self.variables):
+                params['shortName'] = self.variables['shortName']
+            self._getSingleSolrDatasetResponse(params, self._onSolrDatasetResponse)

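The archiveTime cutoff computed in __init__ is plain epoch arithmetic: the current time in milliseconds minus the configured archivedWithin window converted from hours. With the archivedWithin=24 value from the plugin.conf below, the sketch reads:

  import time

  archivedWithin = 24                          # hours, from plugin.conf
  now_ms = int(round(time.time() * 1000))      # epoch time in milliseconds
  cutoff = now_ms - archivedWithin * 3600000   # 24 h * 3,600,000 ms/h = 86,400,000 ms
  # Granules archived after 'cutoff' fall inside the feed's 24-hour window.
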
http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/datacasting/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/datacasting/__init__.py b/src/main/python/plugins/granule/datacasting/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/datacasting/datacasting_template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/datacasting/datacasting_template.xml b/src/main/python/plugins/granule/datacasting/datacasting_template.xml
new file mode 100644
index 0000000..0f1b639
--- /dev/null
+++ b/src/main/python/plugins/granule/datacasting/datacasting_template.xml
@@ -0,0 +1,58 @@
+{% if doc %}
+<?xml version="1.0" encoding="UTF-8"?>
+<rss xmlns:datacasting="http://datacasting.jpl.nasa.gov/datacasting" xmlns:georss="http://www.georss.org/georss" xmlns:gml="http://www.opengis.net/gml" version="2.0">
+<channel>
+<title>{{ doc['Dataset-LongName'][0] }}</title>
+<link>{{ DatasetPortalPage }}</link>
+<description>{{ doc['Dataset-Description'][0] }}</description>
+<datacasting:channelUID>{{ doc['Dataset-PersistentId'][0] }}</datacasting:channelUID>
+{% if 'DatasetSource-Source-ShortName' in doc %}
+<datacasting:dataSource>{{ doc['DatasetSource-Source-ShortName'][0] }}/{{ doc['DatasetSource-Sensor-ShortName'][0] }}</datacasting:dataSource>
+{% endif %}
+{% if 'LOCAL-FTP' in doc['DatasetLocationPolicy-Type'] or 'REMOTE-FTP' in doc['DatasetLocationPolicy-Type'] %}
+<datacasting:customEltDef displayName="FTP URL" type="string" name="FTPURL"/>
+{% endif %}
+{% if 'LOCAL-OPENDAP' in doc['DatasetLocationPolicy-Type'] or 'REMOTE-OPENDAP' in doc['DatasetLocationPolicy-Type'] %}
+<datacasting:customEltDef displayName="OPeNDAP URL" type="string" name="OPeNDAPURL"/>
+{% endif %}
+<language>en-us</language>
+<copyright>Copyright {{ DatasetCitation_ReleaseYear }}</copyright>
+<managingEditor>PO.DAAC &lt;podaac@podaac.jpl.nasa.gov&gt;</managingEditor>
+<webMaster>admin@seastar.jpl.nasa.gov</webMaster>
+<pubDate>{{ PubDate }}</pubDate>
+<generator>PO.DAAC Oceanographic Common Search Interface</generator>
+<docs>http://datacasting.jpl.nasa.gov/datacasting.html</docs>
+{% for granule in granules %}
+<item>
+<title>{{ granule['Granule-Name'][0] }}</title>
+<link>{{ DatasetPortalPage }}</link>
+<datacasting:acquisitionStartDate>{{ granule['Granule-StartTimeLong'][0] }}</datacasting:acquisitionStartDate>
+<datacasting:acquisitionEndDate>{{ granule['Granule-StopTimeLong'][0] }}</datacasting:acquisitionEndDate>
+{% if 'LOCAL-FTP' in granule['GranuleReference']  %}
+<datacasting:customElement name="FTPURL" value="{{ granule['GranuleReference']['LOCAL-FTP'] }}"/>
+{% elif 'REMOTE-FTP' in granule['GranuleReference']  %}
+<datacasting:customElement name="FTPURL" value="{{ granule['GranuleReference']['REMOTE-FTP'] }}"/>
+{% endif %}
+{% if 'LOCAL-OPENDAP' in granule['GranuleReference']  %}
+<datacasting:customElement name="OPeNDAPURL" value="{{ granule['GranuleReference']['LOCAL-OPENDAP'] }}"/>
+{% elif 'REMOTE-OPENDAP' in granule['GranuleReference']  %}
+<datacasting:customElement name="OPeNDAPURL" value="{{ granule['GranuleReference']['REMOTE-OPENDAP'] }}"/>
+{% endif %}
+{% if 'GranuleSpatial-EastLon' in granule %}
+<georss:where>
+<gml:Envelope>
+<gml:lowerCorner>{{ granule['GranuleSpatial-SouthLat'][0] }} {{ granule['GranuleSpatial-WestLon'][0] }}</gml:lowerCorner>
+<gml:upperCorner>{{ granule['GranuleSpatial-NorthLat'][0] }} {{ granule['GranuleSpatial-EastLon'][0] }}</gml:upperCorner>
+</gml:Envelope>
+</georss:where>
+{% endif %}
+<enclosure url="{{ granule['GranuleLink'] }}" length="{{ granule['GranuleFileSize']['DATA'] }}" type="application/x-{{ doc['DatasetPolicy-DataFormat'][0].lower() }}"/>
+<description>{{ granule['Granule-Name'][0] }}</description>
+<guid isPermaLink="true">{{ granule['GranuleLink'] }}</guid>
+<pubDate>{{ granule['Granule-ArchiveTimeLong'][0] }}</pubDate>
+<source url="{{ DatasetPortalPage }}">{{ doc['Dataset-LongName'][0] }}</source>
+</item>
+{% endfor %}
+</channel>
+</rss>
+{% endif %}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/datacasting/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/datacasting/plugin.conf b/src/main/python/plugins/granule/datacasting/plugin.conf
new file mode 100644
index 0000000..defb5ef
--- /dev/null
+++ b/src/main/python/plugins/granule/datacasting/plugin.conf
@@ -0,0 +1,13 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=5000
+archivedWithin=24
+
+[portal]
+datasetUrl=http://localhost:8000/drupal/dataset
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+template=datacasting_template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/fgdc/FgdcWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/fgdc/FgdcWriter.py b/src/main/python/plugins/granule/fgdc/FgdcWriter.py
new file mode 100644
index 0000000..eb37d11
--- /dev/null
+++ b/src/main/python/plugins/granule/fgdc/FgdcWriter.py
@@ -0,0 +1,21 @@
+import logging
+import os
+import os.path
+import codecs
+
+from edge.opensearch.granulefgdcresponse import GranuleFgdcResponse
+from edge.opensearch.datasetgranulewriter import DatasetGranuleWriter
+
+class FgdcWriter(DatasetGranuleWriter):
+    def __init__(self, configFilePath):
+        super(FgdcWriter, self).__init__(configFilePath)
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrGranuleResponse, solrDatasetResponse, pretty):
+        response = GranuleFgdcResponse()
+        response.setTemplate(self.template)
+
+        return response.generate(solrDatasetResponse, solrGranuleResponse, pretty)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/fgdc/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/fgdc/__init__.py b/src/main/python/plugins/granule/fgdc/__init__.py
new file mode 100644
index 0000000..e69de29

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/fgdc/fgdc_template.xml
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/fgdc/fgdc_template.xml b/src/main/python/plugins/granule/fgdc/fgdc_template.xml
new file mode 100644
index 0000000..4a4aa8a
--- /dev/null
+++ b/src/main/python/plugins/granule/fgdc/fgdc_template.xml
@@ -0,0 +1,510 @@
+{% if doc %}
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!DOCTYPE metadata SYSTEM "http://www.fgdc.gov/metadata/fgdc-std-001-1998.dtd">
+<metadata>
+<idinfo>
+<citation>
+<citeinfo>
+<origin>{{ doc['DatasetCitation-Creator'][0] }}</origin>
+<pubdate>{{ DatasetCitation_ReleaseDate }}</pubdate>
+<pubtime>{{ DatasetCitation_ReleaseTime }}</pubtime>
+<title>{{ doc['DatasetCitation-Title'][0] }}</title>
+<edition>{{ doc['DatasetCitation-Version'][0] }}</edition>
+<serinfo>
+<sername>{{ doc['DatasetCitation-SeriesName'][0] }}</sername>
+<issue>Not specified</issue>
+</serinfo>
+<pubinfo>
+<pubplace>{{ doc['DatasetCitation-ReleasePlace'][0] }}</pubplace>
+<publish>{{ doc['DatasetCitation-Publisher'][0] }}</publish>
+</pubinfo>
+<onlink>{% if doc['DatasetCitation-OnlineResource'][0].strip() != '' %}{{ doc['DatasetCitation-OnlineResource'][0] }}{% else %}Not specified{% endif %}</onlink>
+</citeinfo>
+</citation>
+<descript>
+<abstract>{{ doc['Dataset-Description'][0] }}</abstract>
+<purpose>The Global Ocean Data Assimilation Experiment (GODAE) high-resolution sea surface temperature (GHRSST) project data.</purpose>
+<supplinf>Entry ID: {{ doc['Dataset-ShortName'][0] }} 
+Projection_Information: 
+Projection Type: {{ doc['Dataset-ProjectionType'][0] }} 
+Ellipsoid_Type: {{ doc['Dataset-EllipsoidType'][0] }} 
+Other Projection Details: {{ doc['Dataset-ProjectionDetail'][0] }} 
+Data Resolution: 
+Latitude resolution: {{ Dataset_LatitudeResolution }} 
+Longitude resolution: {{ Dataset_LongitudeResolution }} 
+The temporal resolution: {{ doc['Dataset-TemporalResolution'][0] }} 
+This metadata record was generated from an original data set description (DSD) record in DIF format: http://www.ghrsst-pp.org 
+Reference: {{ doc['Dataset-Reference'][0] }} 
+Sources: 
+
+{% for i in UniqueDatasetSource %}
+{{ doc['DatasetSource-Source-ShortName'][i] }} 
+{{ doc['DatasetSource-Source-LongName'][i] }} 
+{% endfor %}
+
+Sensors: 
+
+{% for i in UniqueDatasetSensor %}
+{{ doc['DatasetSource-Sensor-ShortName'][i] }} 
+{{ doc['DatasetSource-Sensor-LongName'][i] }} 
+{% endfor %}
+</supplinf>
+</descript>
+<timeperd>
+<timeinfo>
+<sngdate>
+<caldate>{{ DatasetCitation_ReleaseDateTime }}</caldate>
+<time>{{ DatasetCitation_ReleaseDateTime }}</time>
+</sngdate>
+</timeinfo>
+<current>{{ DatasetCitation_ReleaseDateTime }}</current>
+</timeperd>
+<status>
+<progress>Complete</progress>
+<update>As needed</update>
+</status>
+<spdom>
+<bounding>
+<westbc>{{ doc['DatasetCoverage-WestLon'][0] }}</westbc>
+<eastbc>{{ doc['DatasetCoverage-EastLon'][0] }}</eastbc>
+<northbc>{{ doc['DatasetCoverage-NorthLat'][0] }}</northbc>
+<southbc>{{ doc['DatasetCoverage-SouthLat'][0] }}</southbc>
+</bounding>
+</spdom>
+<keywords>
+<theme>
+<themekt>GCMD</themekt>
+<themekey>{{ doc['DatasetParameter-Category'][-1] }}</themekey>
+<themekey>{{ doc['DatasetParameter-Topic'][-1] }}</themekey>
+<themekey>{{ doc['DatasetParameter-Term'][-1] }}</themekey>
+<themekey>{{ doc['DatasetParameter-Variable'][-1] }}</themekey>
+</theme>
+<theme>
+<themekt>None</themekt>
+{% for i in UniqueDatasetSensor %}
+<themekey>{{ doc['DatasetSource-Sensor-ShortName'][i] }}</themekey>
+<themekey>{{ doc['DatasetSource-Sensor-LongName'][i] }}</themekey>
+{% endfor %}
+</theme>
+{% if doc['DatasetRegion-Region'][0].strip() != '' %}
+<place>
+<placekt>GCMD</placekt>
+<placekey>{{ doc['DatasetRegion-Region'][0] }}</placekey>
+</place>
+{% endif %}
+{% if doc['DatasetRegion-RegionDetail'][0].strip() != '' %}
+<place>
+<placekt>none</placekt>
+<placekey>{{ doc['DatasetRegion-RegionDetail'][0].capitalize() }}</placekey>
+</place>
+{% endif %}
+</keywords>
+<accconst>{{ doc['DatasetPolicy-AccessConstraint'][0] }}</accconst>
+<useconst>GHRSST protocol describes data use as open.</useconst>
+{% if TechnicalContactIndex >= 0 %}
+<ptcontac>
+<cntinfo>
+<cntperp>
+<cntper>{{ doc['DatasetContact-Contact-FirstName'][TechnicalContactIndex] }}{% if doc['DatasetContact-Contact-MiddleName'][TechnicalContactIndex] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][TechnicalContactIndex] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][TechnicalContactIndex] }}</cntper>
+<cntorg>{{ doc['Dataset-Provider-LongName'][0] }} ({{ doc['Dataset-Provider-ShortName'][0] }})</cntorg>
+</cntperp>
+<cntpos>{{ doc['DatasetContact-Contact-Role'][TechnicalContactIndex] }}</cntpos>
+<cntaddr>
+<addrtype>Mailing and Physical Address</addrtype>
+<address>{{ doc['DatasetContact-Contact-Address'][TechnicalContactIndex] }}</address>
+<city>{{ doc['DatasetContact-Contact-Address'][TechnicalContactIndex] }}</city>
+<state>{{ doc['DatasetContact-Contact-Address'][TechnicalContactIndex] }}</state>
+<postal>{{ doc['DatasetContact-Contact-Address'][TechnicalContactIndex] }}</postal>
+<country>{{ doc['DatasetContact-Contact-Address'][TechnicalContactIndex] }}</country>
+</cntaddr>
+<cntvoice>{{ doc['DatasetContact-Contact-Phone'][TechnicalContactIndex] }}</cntvoice>
+<cntfax>{% if doc['DatasetContact-Contact-Fax'][TechnicalContactIndex].strip() != '' %}{{ doc['DatasetContact-Contact-Fax'][TechnicalContactIndex] }}{% else %}Not specified{% endif %}</cntfax>
+<cntemail>{{ doc['DatasetContact-Contact-Email'][TechnicalContactIndex] }}</cntemail>
+<hours>Standard Business hours</hours>
+<cntinst>Phone/FAX/Email</cntinst>
+</cntinfo>
+</ptcontac>
+{% endif %}
+<datacred>{{ doc['DatasetCitation-Creator'][0] }}</datacred>
+{% if doc['Dataset-Reference'][0].strip() != '' %}
+<crossref>
+<citeinfo>
+<origin>{{ doc['DatasetCitation-Creator'][0] }}</origin>
+<pubdate>{{ DatasetCitation_ReleaseDateTime }}</pubdate>
+<title>{{ doc['Dataset-Reference'][0] }}</title>
+</citeinfo>
+</crossref>
+{% endif %}
+{% if 'Get Data' in DatasetResource and DatasetResource['Get Data'].strip() != '' %}
+<crossref>
+<citeinfo>
+<origin>{{ doc['DatasetCitation-Creator'][0] }}</origin>
+<pubdate>{{ DatasetCitation_ReleaseDateTime }}</pubdate>
+<title>Get Data</title>
+<onlink>{{ DatasetResource['Get Data'] }}</onlink>
+</citeinfo>
+</crossref>
+{% endif %}
+</idinfo>
+<spref>
+<horizsys>
+<geograph>
+<latres>{{ Dataset_LatitudeResolution }}</latres>
+<longres>{{ Dataset_LongitudeResolution }}</longres>
+<geogunit>Decimal degrees</geogunit>
+</geograph>
+</horizsys>
+<vertdef/>
+</spref>
+<eainfo>
+{% for granule in granules %}
+<detailed>
+<enttyp>
+<enttypl>{{ granule['Granule-Name'][0] }}</enttypl>
+<enttypd>GHRSST Formatted netCDF data file</enttypd>
+<enttypds>The Recommended GHRSST-PP Data Processing Specification
+GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17
+Compiled by Craig Donlon and the GHRSST-PP Science Team
+craig.donlon@metoffice.com
+
+Published by the International GHRSST-PP Project Office
+Met Office
+Fitzroy Road
+Exeter, EX3 1PB
+United Kingdom
+
+Tel: +44 (0)1392 886622
+Fax: +44 (0)1393 885681</enttypds>
+</enttyp>
+<attr>
+<attrlabl>Latitude</attrlabl>
+<attrdef>Latitude in decimal degrees following FGDC specifications. Here, the Range Domain Minimum given below is equivalent to a South Bounding Coordinate for this Entity. The Range Domain Maximum is the North Bounding Coordinate.</attrdef>
+<attrdefs>FGDC-STD-001-1998</attrdefs>
+<attrdomv>
+<rdom>
+{% if granule['GranuleBoundingBox'] %}
+<rdommin>{{ granule['GranuleBoundingBox']['southernmostLatitude'] }}</rdommin>
+<rdommax>{{ granule['GranuleBoundingBox']['northernmostLatitude'] }}</rdommax>
+{% else %}
+<rdommin>-90.000</rdommin>
+<rdommax>90.000</rdommax>
+{% endif %}
+</rdom>
+</attrdomv>
+<begdatea>{{ granule['Granule-StartTimeLong'][0] }}</begdatea>
+<enddatea>{{ granule['Granule-StopTimeLong'][0] }}</enddatea>
+</attr>
+<attr>
+<attrlabl>Longitude</attrlabl>
+<attrdef>Longitude in decimal degrees following FGDC specifications. Here, the Range Domain Minimum given below is equivalent to a West Bounding Coordinate for this entity.  The Range Domain Maximum is the East Bounding Coordinate.</attrdef>
+<attrdefs>FGDC-STD-001-1998</attrdefs>
+<attrdomv>
+<rdom>
+{% if granule['GranuleBoundingBox'] %}
+<rdommin>{{ granule['GranuleBoundingBox']['westernmostLongitude'] }}</rdommin>
+<rdommax>{{ granule['GranuleBoundingBox']['easternmostLongitude'] }}</rdommax>
+{% else %}
+<rdommin>-180.000</rdommin>
+<rdommax>180.000</rdommax>
+{% endif %}
+</rdom>
+</attrdomv>
+<begdatea>{{ granule['Granule-StartTimeLong'][0] }}</begdatea>
+<enddatea>{{ granule['Granule-StopTimeLong'][0] }}</enddatea>
+</attr>
+<attr>
+<attrlabl>Time</attrlabl>
+<attrdef>Universal Time (Greenwich Mean Time) values shall follow the 24 hour timekeeping system for Universal time of day in hours, minutes, and seconds, and decimal fractions of a second (expressed to the precision desired) without separators convention, with the upper case letter Z following the low order (or extreme right-hand) time element of the 24 hour time clock expression. The general form is HHMMSSSSZ.</attrdef>
+<attrdefs>American Standards Institute, 1975, Representations of universal time, local time differentials, and United States time zone reference for information interchange (ANSI X3.51-1975): New York, American National Standards Institute. For usage in these data see:  The Recommended GHRSST-PP Data Processing Specification GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17  Canned text; no .DIF match</attrdefs>
+<attrdomv>
+<rdom>
+<rdommin>{{ granule['Granule-StartTimeLong'][0] }}</rdommin>
+<rdommax>{{ granule['Granule-StopTimeLong'][0] }}</rdommax>
+</rdom>
+</attrdomv>
+<begdatea>{{ granule['Granule-StartTimeLong'][0] }}</begdatea>
+<enddatea>{{ granule['Granule-StopTimeLong'][0] }}</enddatea>
+</attr>
+</detailed>
+{% endfor %}
+<overview>
+<eaover>Within the GHRSST DIF-style metadata framework, every data set type is described by a static Data Set Description (DSD), and the individual netCDF files making up that dataset are described by dynamic File Records (FR).  These FRs contain the spatial and temporal domain contained in each data file.  The DIF-style metadata (the DSD and its children FRs) are converted to a single FGDC record for each NODC GHRSST accession, which can consist of several discrete data files.  This conversion is accomplished using an XML stylesheet which builds a framework FGDC record based on the DSD, and populates it with one Entity for each data file based on the FRs.  These Entities are given three Attributes: Latitude, Longitude, and Time.  The Range Domain Minimum and Maximum elements within each of those Attributes are used to describe the spatial and temporal domain contained in the data file.  So, for example, the Range Domain Minimum for the Attribute Latitude corresponds to a South Bounding Coordinate for that Entity.</eaover>
+<eadetcit>The Recommended GHRSST-PP Data Processing Specification
+GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17
+Compiled by Craig Donlon and the GHRSST-PP Science Team
+craig.donlon@metoffice.com
+
+Published by the International GHRSST-PP Project Office
+Met Office
+Fitzroy Road
+Exeter, EX3 1PB
+United Kingdom
+
+Tel: +44 (0)1392 886622
+Fax: +44 (0)1393 885681
+
+Metadata records converted to FGDC format and archived at the National Oceanographic Data Center</eadetcit>
+</overview>
+</eainfo>
+<distinfo>
+<distrib>
+<cntinfo>
+<cntperp>
+<cntper>Dr. Kenneth Casey</cntper>
+<cntorg>NOAA National Oceanographic Data Center</cntorg>
+</cntperp>
+<cntpos>Physical scientist</cntpos>
+<cntaddr>
+<addrtype>mailing address</addrtype>
+<address>NOAA National Oceanographic Data Center
+SSMC3, 4th Floor, Room 4853, Route:  E/OC1
+1315 East-West Highway</address>
+<city>Silver Spring</city>
+<state>Maryland</state>
+<postal>20910</postal>
+<country>U.S.A.</country>
+</cntaddr>
+<cntvoice>(301)713-3300</cntvoice>
+<cntfax>FAX: (301) 713-3300</cntfax>
+<cntemail>Kenneth.Casey@noaa.gov</cntemail>
+<hours>9:00 AM-4:00 PM, EST</hours>
+<cntinst>Phone/FAX/E-mail/letter</cntinst>
+</cntinfo>
+</distrib>
+<resdesc>NODC Accession #0000000</resdesc>
+<distliab>NOAA makes no warranty regarding these data, expressed or implied, nor does the fact of distribution constitute such a warranty. NOAA and NODC cannot assume liability for any damages caused by any errors or omissions in these data, nor as a result of the failure of these data to function on a particular system.</distliab>
+<stdorder>
+<digform>
+<digtinfo>
+<formname>netCDF</formname>
+<formverd>{{ DatasetCitation_ReleaseDateTime }}</formverd>
+<formspec>GHRSST formatted file (netCDF version 3); see:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</formspec>
+<formcont>sea surface temperature data</formcont>
+<filedec>See:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</filedec>
+<transize>15000000</transize>
+</digtinfo>
+<digtopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>http://www.nodc.noaa.gov/search/prod/</networkr>
+</networka>
+</computer>
+<accinstr>Data may be directly downloaded through the NODC Ocean Archive System at: URL: http://www.nodc.noaa.gov/search/prod/. NODC can be contacted directly for custom orders. When requesting data from the NODC, the desired data set may be referred to by the 7-digit number given in the RESOURCE DESCRIPTION field of this metadata record. For more information see the NODC GHRSST project web site: URL: http://ghrsst.nodc.noaa.gov</accinstr>
+<oncomp>Standard Internet browser and FTP capability</oncomp>
+</onlinopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>ftp://data.nodc.noaa.gov/pub/data.nodc/ghrsst</networkr>
+</networka>
+</computer>
+<accinstr>Direct FTP access:  Navigate to ftp://data.nodc.noaa.gov/pub/data.nodc/ghrsst using any FTP client to begin downloading data. For more information see the NODC GHRSST project web site: URL: http://ghrsst.nodc.noaa.gov</accinstr>
+<oncomp>Any FTP client</oncomp>
+</onlinopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>http://data.nodc.noaa.gov/ghrsst</networkr>
+</networka>
+</computer>
+<accinstr>Simple Web access:  Using any web browser, navigate to URL: http://data.nodc.noaa.gov/ghrsst and begin browsing through the file hierarchy. Clicking on any of the files will prompt you to download that file or will launch any application associated with netCDF files. For more information see the NODC GHRSST project web site: URL: http://ghrsst.nodc.noaa.gov</accinstr>
+<oncomp>Standard Internet browser and/or software capable of utilizing netCDF files</oncomp>
+</onlinopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>http://data.nodc.noaa.gov/cgi-bin/nph-dods/ghrsst</networkr>
+</networka>
+</computer>
+<accinstr>These data are also served using an OPeNDAP (formerly DODS) server. The base URL is: URL: http://data.nodc.noaa.gov/cgi-bin/nph-dods/ghrsst For more information see the NODC GHRSST project web site: URL: http://ghrsst.nodc.noaa.gov For a listing of OPeNDAP clients which may be used to access OPeNDAP-enabled data sets, please see the OPeNDAP web site at URL: http://opendap.org/</accinstr>
+<oncomp>Standard Internet browsers can browse OPeNDAP servers and specialized OPeNDAP software can enable more sophisticated data access and visualizations.</oncomp>
+</onlinopt>
+</digtopt>
+</digform>
+<fees>none</fees>
+<ordering>Data may be directly downloaded through the NODC website at: http://www.nodc.noaa.gov/search/prod/. NODC can be contacted directly for custom orders. (When requesting data from the NODC, the desired data set may be referred to by the 7-digit number given in the RESOURCE DESCRIPTION field of this metadata record).</ordering>
+<turnarnd>Within 24 hours if directly downloaded, depending on size</turnarnd>
+</stdorder>
+<custom>Contact the NODC User Services Group via phone/FAX/E-mail:  nodc.services@noaa.gov</custom>
+<techpreq>Working knowledge of netCDF files and ability to work with satellite data strongly recommended.</techpreq>
+<availabl>
+<timeinfo>
+<rngdates>
+<begdate>{{ DatasetCoverage_StartTime }}</begdate>
+<begtime>Unknown</begtime>
+<enddate>Present</enddate>
+<endtime>Unknown</endtime>
+</rngdates>
+</timeinfo>
+</availabl>
+</distinfo>
+<distinfo>
+<distrib>
+<cntinfo>
+<cntperp>
+<cntper>{{ doc['DatasetContact-Contact-FirstName'][0] }}{% if doc['DatasetContact-Contact-MiddleName'][0] != 'none' %} {{ doc['DatasetContact-Contact-MiddleName'][0] }}{% endif %} {{ doc['DatasetContact-Contact-LastName'][0] }}</cntper>
+<cntorg>{{ doc['Dataset-Provider-LongName'][0] }} ({{ doc['Dataset-Provider-ShortName'][0] }})</cntorg>
+</cntperp>
+<cntpos>{{ doc['DatasetContact-Contact-Role'][0] }}</cntpos>
+<cntaddr>
+<addrtype>mailing address</addrtype>
+<address>{{ doc['DatasetContact-Contact-Address'][0] }}</address>
+<city>{{ doc['DatasetContact-Contact-Address'][0] }}</city>
+<state>{{ doc['DatasetContact-Contact-Address'][0] }}</state>
+<postal>{{ doc['DatasetContact-Contact-Address'][0] }}</postal>
+<country>{{ doc['DatasetContact-Contact-Address'][0] }}</country>
+</cntaddr>
+<cntvoice>{{ doc['DatasetContact-Contact-Phone'][0] }}</cntvoice>
+<cntfax>{% if doc['DatasetContact-Contact-Fax'][0].strip() != '' %}{{ doc['DatasetContact-Contact-Fax'][0] }}{% else %}Not specified{% endif %}</cntfax>
+<cntemail>{{ doc['DatasetContact-Contact-Email'][0] }}</cntemail>
+<hours>Standard Business hours</hours>
+<cntinst>Phone/FAX/Email</cntinst>
+</cntinfo>
+</distrib>
+<resdesc>DSD-{{ doc['Dataset-ShortName'][0] }}.xml</resdesc>
+<distliab>Unknown; see {{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}</distliab>
+<stdorder>
+<digform>
+<digtinfo>
+<formname>netCDF</formname>
+<formverd>{{ DatasetCitation_ReleaseDateTime }}</formverd>
+<formspec>GHRSST formatted file (netCDF version 3); see:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</formspec>
+<formcont>sea surface temperature data</formcont>
+<filedec>See:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</filedec>
+<transize>15000000</transize>
+</digtinfo>
+<digtopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>{{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}</networkr>
+</networka>
+</computer>
+<accinstr>Access {{ doc['Dataset-Provider-ShortName'][0] }} site: {{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}; some products may require authorization (restricted access)</accinstr>
+<oncomp>PC, Mac, Linux, Unix; standard Internet browser</oncomp>
+</onlinopt>
+</digtopt>
+</digform>
+<fees>See URL at: {{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}</fees>
+<ordering>See URL at: {{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}. Some products may have restricted access.</ordering>
+<turnarnd>Within 24 hours if directly downloaded, depending on size</turnarnd>
+</stdorder>
+<custom>See {{ doc['Dataset-Provider-ShortName'][0] }} website at URL: {{ doc['Dataset-Provider-ProviderResource-Path'][-1] }}</custom>
+<techpreq>Ability to work with netCDF files, working knowledge of satellite data strongly recommended</techpreq>
+<availabl>
+<timeinfo>
+<rngdates>
+<begdate>{{ DatasetCoverage_StartTime }}</begdate>
+<begtime>Unknown</begtime>
+<enddate>Present</enddate>
+<endtime>Unknown</endtime>
+</rngdates>
+</timeinfo>
+</availabl>
+</distinfo>
+<distinfo>
+<distrib>
+<cntinfo>
+<cntperp>
+<cntper>Edward Armstrong</cntper>
+<cntorg>Jet Propulsion Laboratory (JPL), Physical Oceanography Distributed Active Archive Center (PO.DAAC)</cntorg>
+</cntperp>
+<cntpos>Technical Contact</cntpos>
+<cntaddr>
+<addrtype>mailing address</addrtype>
+<address>4800 Oak Grove Dr.</address>
+<city>Pasadena</city>
+<state>California</state>
+<postal>91109</postal>
+<country>USA</country>
+</cntaddr>
+<cntvoice>818 393 6710</cntvoice>
+<cntfax>818 393 2718</cntfax>
+<cntemail>ghrsst@podaac.jpl.nasa.gov</cntemail>
+<hours>Standard Business hours, Pacific Time</hours>
+<cntinst>Phone/FAX/Email</cntinst>
+</cntinfo>
+</distrib>
+<resdesc>DSD-{{ doc['Dataset-ShortName'][0] }}.xml</resdesc>
+<distliab>JPL makes no warranty regarding these data, expressed or implied, nor does the fact of distribution constitute such a warranty. JPL cannot assume liability for any damages caused by any errors or omissions in these data, nor as a result of the failure of these data to function on a particular system.</distliab>
+<stdorder>
+<digform>
+<digtinfo>
+<formname>netCDF</formname>
+<formverd>{{ DatasetCitation_ReleaseDateTime }}</formverd>
+<formspec>GHRSST formatted file (netCDF version 3); see:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</formspec>
+<formcont>sea surface temperature data</formcont>
+<filedec>See:  The Recommended GHRSST-PP Data Processing Specification, GDS (Version 1 revision 1.5), GHRSST-PP Report Number 17.</filedec>
+<transize>15000000</transize>
+</digtinfo>
+<digtopt>
+<onlinopt>
+<computer>
+<networka>
+<networkr>ftp://podaac.jpl.nasa.gov/pub/sea_surface_temperature/GHRSST</networkr>
+</networka>
+</computer>
+<accinstr>The direct FTP link is: ftp://podaac.jpl.nasa.gov/pub/sea_surface_temperature/GHRSST Data, documentation, and read software may also be downloaded through the JPL Global Data Assembly Center (GDAC) at: URL: http://ghrsst.jpl.nasa.gov/data_access.html For more information see the GDAC GHRSST project web site: URL: http://ghrsst.nasa.noaa.gov GHRSST products are generally only stored and available for a period of 30 days after satellite observation. After this time they can be acquired from the NOAA NODC.</accinstr>
+<oncomp>PC, Mac, Linux, Unix; standard Internet browser </oncomp>
+</onlinopt>
+</digtopt>
+</digform>
+<fees>None </fees>
+<ordering>See URL at: http://ghrsst.jpl.nasa.gov. Some products may have restricted access. GHRSST products are available for a period of about 30 days after observation. After this time they can be acquired from the NOAA NODC.</ordering>
+<turnarnd>Within 24 hours if directly downloaded, depending on size</turnarnd>
+</stdorder>
+<custom>Contact the PO.DAAC User Services Office:  ghrsst@podaac.jpl.nasa.gov </custom>
+<techpreq>Ability to work with netCDF files, working knowledge of satellite data strongly recommended</techpreq>
+<availabl>
+<timeinfo>
+<rngdates>
+<begdate>{{ DatasetCoverage_StartTime }}</begdate>
+<begtime>Unknown</begtime>
+<enddate>Present</enddate>
+<endtime>Unknown</endtime>
+</rngdates>
+</timeinfo>
+</availabl>
+</distinfo>
+<metainfo>
+<metd>20041023</metd>
+<metrd>20041023</metrd>
+<metfrd>20050331</metfrd>
+<metc>
+<cntinfo>
+<cntperp>
+<cntper>Dr. Kenneth Casey</cntper>
+<cntorg>NOAA National Oceanographic Data Center</cntorg>
+</cntperp>
+<cntpos>Physical scientist</cntpos>
+<cntaddr>
+<addrtype>mailing address</addrtype>
+<address>NOAA National Oceanographic Data Center
+SSMC3, 4th Floor, Room 4853, Route:  E/OC1
+1315 East-West Highway</address>
+<city>Silver Spring</city>
+<state>Maryland</state>
+<postal>20910</postal>
+<country>U.S.A.</country>
+</cntaddr>
+<cntvoice>(301)713-3300</cntvoice>
+<cntfax>FAX: (301) 713-3300</cntfax>
+<cntemail>Kenneth.Casey@noaa.gov</cntemail>
+<hours>9:00 AM-4:00 PM, EST</hours>
+<cntinst>Phone/FAX/E-mail/letter</cntinst>
+</cntinfo>
+</metc>
+<metstdn>FGDC Content Standards for Digital Geospatial Metadata</metstdn>
+<metstdv>FGDC-STD-001-1998</metstdv>
+<mettc>local time</mettc>
+<metac>None</metac>
+<metuc>None</metuc>
+<metsi>
+<metscs>None</metscs>
+<metsc>Unclassified</metsc>
+<metshd>Not applicable</metshd>
+</metsi>
+</metainfo>
+</metadata>
+{% endif %}

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/fgdc/plugin.conf
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/fgdc/plugin.conf b/src/main/python/plugins/granule/fgdc/plugin.conf
new file mode 100644
index 0000000..8574828
--- /dev/null
+++ b/src/main/python/plugins/granule/fgdc/plugin.conf
@@ -0,0 +1,10 @@
+[solr]
+datasetUrl=http://localhost:8983/solr.war/dataset
+granuleUrl=http://localhost:8983/solr.war/granule
+entriesPerPage=7
+
+[service]
+url=http://localhost:8890
+linkToGranule=LOCAL-FTP,REMOTE-FTP
+database=podaac_dev/podaac$dev@DAACDEV
+template=fgdc_template.xml

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/iso/IsoWriter.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/iso/IsoWriter.py b/src/main/python/plugins/granule/iso/IsoWriter.py
new file mode 100644
index 0000000..8e24fe7
--- /dev/null
+++ b/src/main/python/plugins/granule/iso/IsoWriter.py
@@ -0,0 +1,23 @@
+import logging
+import os
+import os.path
+import codecs
+
+from edge.opensearch.granuleisoresponse import GranuleIsoResponse
+from edge.opensearch.datasetgranulewriter import DatasetGranuleWriter
+
+class IsoWriter(DatasetGranuleWriter):
+    def __init__(self, configFilePath):
+        super(IsoWriter, self).__init__(configFilePath)
+        
+        templatePath = os.path.dirname(configFilePath) + os.sep
+        templatePath += self._configuration.get('service', 'template')
+        self.template = self._readTemplate(templatePath)
+
+    def _generateOpenSearchResponse(self, solrGranuleResponse, solrDatasetResponse, pretty):
+        response = GranuleIsoResponse(
+            self._configuration.get('service', 'linkToGranule')
+        )
+        response.setTemplate(self.template)
+
+        return response.generate(solrDatasetResponse, solrGranuleResponse, pretty)

http://git-wip-us.apache.org/repos/asf/incubator-sdap-edge/blob/53351bf3/src/main/python/plugins/granule/iso/__init__.py
----------------------------------------------------------------------
diff --git a/src/main/python/plugins/granule/iso/__init__.py b/src/main/python/plugins/granule/iso/__init__.py
new file mode 100644
index 0000000..e69de29