Posted to dev@drill.apache.org by GitBox <gi...@apache.org> on 2020/04/23 14:48:36 UTC

[GitHub] [drill] cgivre opened a new pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

cgivre opened a new pull request #2067:
URL: https://github.com/apache/drill/pull/2067


   # [DRILL-7716](https://issues.apache.org/jira/browse/DRILL-7716): Create Format Plugin for SPSS Files
   
   ## Description
   
   This PR adds the ability for Drill to query SPSS files.
   
   ## Documentation
   # Format Plugin for SPSS (SAV) Files
    This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According to Wikipedia: [1]
    
    SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
    
    
   ## Configuration 
    To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin. This should happen automatically for the default `cp`, `dfs`, and `S3` storage plugins.
     
     Other than the file extensions, there are no variables to configure.
    
   ```json
   "spss": {
             "type": "spss",
             "extensions": [
               "sav"
             ]
           }
   ```
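    
    For reference, the `type` and `extensions` keys above are bound to the plugin's Jackson format-config class. A minimal sketch of what such a class could look like is shown below; the class and member names are illustrative assumptions, since the actual `SpssFormatConfig` source is not part of this excerpt.
    
    ```java
    import java.util.Collections;
    import java.util.List;
    
    import com.fasterxml.jackson.annotation.JsonCreator;
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.fasterxml.jackson.annotation.JsonTypeName;
    
    import org.apache.drill.common.logical.FormatPluginConfig;
    
    // Illustrative sketch only: not the PR's actual SpssFormatConfig.
    @JsonTypeName("spss")
    public class SpssFormatConfig implements FormatPluginConfig {
    
      private final List<String> extensions;
    
      @JsonCreator
      public SpssFormatConfig(@JsonProperty("extensions") List<String> extensions) {
        // Default to the .sav extension when none is configured
        this.extensions = extensions == null ? Collections.singletonList("sav") : extensions;
      }
    
      public List<String> getExtensions() {
        return extensions;
      }
    }
    ```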
   
   ## Data Model
    SPSS supports only two data types: numeric and string. Drill maps these to `DOUBLE` and `VARCHAR` respectively. However, some numeric columns carry value labels that map each number to a text label, similar to an `enum` field in Java.
     
     For instance, a field called `Survey` might have labels as shown below:
    
    <table>
       <tr>
           <th>Value</th>
           <th>Text</th>
       </tr>
       <tr>
           <td>1</td>
           <td>Yes</td>
       </tr>
       <tr>
           <td>2</td>
           <td>No</td>
       </tr>
       <tr>
           <td>99</td>
           <td>No Answer</td>
       </tr>
    </table>
   
    For situations like this, Drill will create two columns. In the example above you would get a column called `Survey` containing the numeric value (1, 2, or 99), as well as a column called `Survey_value` that maps the number to the appropriate label. Thus, the results would look something like this (a condensed sketch of how the reader builds this extra column follows the table):
    
    <table>
    <tr>
    <th>`Survey`</th>
    <th>`Survey_value`</th>
    </tr>
    <tr>
    <td>1</td>
    <td>Yes</td>
    </tr>
     <tr>
     <td>1</td>
     <td>Yes</td>
     </tr>
      <tr>
      <td>1</td>
      <td>Yes</td>
      </tr>
       <tr>
       <td>2</td>
       <td>No</td>
       </tr>
        <tr>
        <td>1</td>
        <td>Yes</td>
        </tr>
         <tr>
         <td>2</td>
         <td>No</td>
         </tr>
     <tr>
     <td>99</td>
     <td>No Answer</td>
     </tr>
    </table>
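    
    Under the hood, the batch reader builds this schema when it opens the file. Below is a condensed sketch of the relevant loop from `SpssBatchReader.buildSchema()` in this PR (in the actual code the `"_value"` suffix is the reader's `VALUE_LABEL` constant):
    
    ```java
    for (SpssVariable variable : spssReader.getVariables()) {
      String varName = variable.getVariableName();
      if (variable.isNumeric()) {
        // Numeric SPSS variables become nullable FLOAT8 (DOUBLE) columns
        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
        // If the variable carries value labels, add a companion VARCHAR column
        // named "<variable>_value" to hold the label text
        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
          builder.addNullable(varName + "_value", TypeProtos.MinorType.VARCHAR);
        }
      } else {
        // String SPSS variables become nullable VARCHAR columns
        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
      }
    }
    ```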
   
   
   [1]: https://en.wikipedia.org/wiki/SPSS
   ## Testing
   There are unit tests attached to this PR. 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419037351



##########
File path: exec/java-exec/src/test/java/org/apache/drill/test/ClusterTest.java
##########
@@ -125,4 +139,29 @@ public static void run(String query, Object... args) throws Exception {
   public QueryBuilder queryBuilder( ) {
     return client.queryBuilder();
   }
+
+  /**
+   * Generates a compressed file for testing
+   * @param fileName the input file to be compressed
+   * @param codecName the CODEC to be used for compression
+   * @param outFileName the output file name
+   * @throws IOException Throws IO exception if the file cannot be found or any other IO error
+   */
+  public void generateCompressedFile(String fileName, String codecName, String outFileName) throws IOException {

Review comment:
       Note: By moving this class, this PR also addresses DRILL-7600: https://issues.apache.org/jira/browse/DRILL-7600.




----------------------------------------------------------------



[GitHub] [drill] vvysotskyi commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
vvysotskyi commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419001452



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]
+ 
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.

Review comment:
       A blockquote may be used for quoting from Wikipedia.

##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]
+ 
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
+ 
+ 
+## Configuration 
+To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin.  This should happen automatically for the default
+ `cp`, `dfs`, and `S3` storage plugins.
+ 
+ Other than the file extensions, there are no variables to configure.
+ 
+```json
+"spss": {

Review comment:
       Please fix the formatting.

##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]

Review comment:
       Wikipedia may be made a reference instead of putting the long link at the end of the doc.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;

Review comment:
       Looks like this variable may be omitted.
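       
       For example, the loop body can call each writer directly, matching the cleaned-up version that appears later in this review:
       ```java
       rowWriter.start();
       for (SpssColumnWriter spssColumnWriter : writerList) {
         spssColumnWriter.load(spssReader);
       }
       rowWriter.save();
       ```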

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;

Review comment:
       This field is never used.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {

Review comment:
       ```suggestion
         if (!spssReader.readNextCase()) {
   ```

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    String columnName;
+
+    ScalarWriter writer;
+
+    public boolean isNumeric;

Review comment:
       This field is never accessed, so can we remove it?

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    String columnName;

Review comment:
       You may introduce a constructor that accepts these fields and makes them final.
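       
       For instance, a minimal sketch of that change (the load() method and the concrete subclasses stay as they are; this relies on the ScalarWriter type already imported in this file):
       ```java
       public abstract static class SpssColumnWriter {
         final String columnName;
         final ScalarWriter writer;
       
         public SpssColumnWriter(String columnName, ScalarWriter writer) {
           this.columnName = columnName;
           this.writer = writer;
         }
       }
       ```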

##########
File path: contrib/format-spss/src/main/resources/bootstrap-format-plugins.json
##########
@@ -0,0 +1,37 @@
+{
+  "storage":{

Review comment:
       ```suggestion
     "storage": {
   ```

##########
File path: protocol/src/main/protobuf/UserBitShared.proto
##########
@@ -379,7 +379,11 @@ enum CoreOperatorType {
   SHP_SUB_SCAN = 65;
   METADATA_HANDLER = 66;
   METADATA_CONTROLLER = 67;
+  // 68 Reserved for Apache Druid

Review comment:
       Please remove these comments. When plugins are merged, new values will be introduced. Protobufs will be regenerated in both cases.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {

Review comment:
       ```suggestion
       for (SpssVariable variable : variableList) {
   ```

##########
File path: exec/java-exec/src/test/java/org/apache/drill/test/ClusterTest.java
##########
@@ -125,4 +139,29 @@ public static void run(String query, Object... args) throws Exception {
   public QueryBuilder queryBuilder( ) {
     return client.queryBuilder();
   }
+
+  /**
+   * Generates a compressed file for testing
+   * @param fileName the input file to be compressed
+   * @param codecName the CODEC to be used for compression
+   * @param outFileName the output file name
+   * @throws IOException Throws IO exception if the file cannot be found or any other IO error
+   */
+  public void generateCompressedFile(String fileName, String codecName, String outFileName) throws IOException {

Review comment:
       Please make it static and move to a more suitable place, for example to `QueryTestUtil`.
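       
       A rough sketch of what a static variant in `QueryTestUtil` could look like, assuming the helper is built on Hadoop's `CompressionCodecFactory` (the body below is illustrative, not this PR's actual code):
       ```java
       // Assumes org.apache.hadoop.conf.Configuration, org.apache.hadoop.io.IOUtils,
       // org.apache.hadoop.io.compress.CompressionCodec / CompressionCodecFactory and java.io.* imports.
       public static void generateCompressedFile(String fileName, String codecName, String outFileName) throws IOException {
         Configuration conf = new Configuration();
         CompressionCodec codec = new CompressionCodecFactory(conf).getCodecByName(codecName);
       
         try (InputStream in = new FileInputStream(fileName);
              OutputStream out = codec.createOutputStream(new FileOutputStream(outFileName))) {
           // Stream the uncompressed input through the codec's compressing output stream
           IOUtils.copyBytes(in, out, conf, false);
         }
       }
       ```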




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419812106



##########
File path: contrib/format-spss/pom.xml
##########
@@ -0,0 +1,88 @@
+<?xml version="1.0"?>
+<!--
+
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <artifactId>drill-contrib-parent</artifactId>
+    <groupId>org.apache.drill.contrib</groupId>
+    <version>1.18.0-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>drill-format-spss</artifactId>
+  <name>contrib/format-spss</name>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.drill.exec</groupId>
+      <artifactId>drill-java-exec</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.bedatadriven.spss</groupId>
+      <artifactId>spss-reader</artifactId>
+      <version>1.3</version>
+    </dependency>
+
+    <!-- Test dependencies -->
+    <dependency>
+      <groupId>org.apache.drill.exec</groupId>
+      <artifactId>drill-java-exec</artifactId>
+      <classifier>tests</classifier>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.drill</groupId>
+      <artifactId>drill-common</artifactId>
+      <classifier>tests</classifier>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-resources-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>copy-java-sources</id>
+            <phase>process-sources</phase>
+            <goals>
+              <goal>copy-resources</goal>
+            </goals>
+            <configuration>
+              <outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/syslog

Review comment:
       Oops ... Fixed




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419036349



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;

Review comment:
       Removed




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419034126



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]
+ 
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
+ 
+ 
+## Configuration 
+To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin.  This should happen automatically for the default
+ `cp`, `dfs`, and `S3` storage plugins.
+ 
+ Other than the file extensions, there are no variables to configure.
+ 
+```json
+"spss": {

Review comment:
       Fixed (I think)




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419812878



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;

Review comment:
       Fixed.  I think the Drill auto-formatting did that.




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419816763



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
+ ***
+ 
+## Configuration 
+To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin.  This should happen automatically for the default

Review comment:
       My thought is that it's better to add it in the bootstrap so that people know it's there.  Just my .02... 




----------------------------------------------------------------



[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419035541



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419812606



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {

Review comment:
       Fixed







[GitHub] [drill] vvysotskyi commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
vvysotskyi commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419660268



##########
File path: pom.xml
##########
@@ -359,6 +359,7 @@
             <exclude>**/*.pcap</exclude>
             <exclude>**/*.log1</exclude>
             <exclude>**/*.log2</exclude>
+            <exclude>**/*.sav</exclude>

Review comment:
       Please also update the exclusion list for `license-maven-plugin`.
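    For reference, a sketch of the kind of entry that implies, assuming the usual `com.mycila` coordinates for `license-maven-plugin`; the actual change belongs in that plugin's existing `<excludes>` list in the root `pom.xml` rather than in a new plugin block:

```xml
<!-- Illustrative only: mirror the rat-plugin exclusion above for SPSS test
     files in the license-maven-plugin configuration (coordinates assumed). -->
<plugin>
  <groupId>com.mycila</groupId>
  <artifactId>license-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/*.sav</exclude>
    </excludes>
  </configuration>
</plugin>
```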







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419816168



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;
+
+    boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+      super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+        labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+        labels = reader.getValueLabels(columnName);
+        hasLabels = true;
+      }
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+
+      double value = reader.getDoubleValue(columnName);

Review comment:
       It turns out there is!  The SPSS reader library provides a way to access a column by its index (an int) rather than by column name.
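    A rough sketch of how the numeric writer could resolve the column position once and then read by index on every row; the index-based accessor shown (`getDoubleValue(int)`) is an assumption about the spss-reader API, and value-label handling is omitted for brevity:

```java
// Hypothetical sketch: assumes SpssDataFileReader exposes an int-indexed
// overload of getDoubleValue(); verify the real method names in the library.
public static class NumericSpssColumnWriter extends SpssColumnWriter {
  private final int columnIndex;  // resolved once at construction time

  NumericSpssColumnWriter(int columnIndex, String columnName, RowSetLoader rowWriter) {
    super(columnName, rowWriter.scalar(columnName));
    this.columnIndex = columnIndex;
  }

  @Override
  public void load(SpssDataFileReader reader) {
    // Reads by position, avoiding a name lookup per row (int overload assumed).
    writer.setDouble(reader.getDoubleValue(columnIndex));
  }
}
```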

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();

Review comment:
       Fixed







[GitHub] [drill] paul-rogers commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
paul-rogers commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419742326



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
+ ***
+ 
+## Configuration 
+To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin.  This should happen automatically for the default

Review comment:
       Do we want to add all the format plugins at bootstrap? Creates a rather intimidating-looking hunk of JSON for newbies. Of course, it would be good for format plugins to be independent of the storage plugin, but that will come later.

##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.

Review comment:
       Nit: for ease of editing, it is handy to break lines at around 80 chars. MD will combine them to form a paragraph as if they were one long line.

##########
File path: contrib/format-spss/pom.xml
##########
@@ -0,0 +1,88 @@
+<?xml version="1.0"?>
+<!--
+
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <artifactId>drill-contrib-parent</artifactId>
+    <groupId>org.apache.drill.contrib</groupId>
+    <version>1.18.0-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>drill-format-spss</artifactId>
+  <name>contrib/format-spss</name>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.drill.exec</groupId>
+      <artifactId>drill-java-exec</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.bedatadriven.spss</groupId>
+      <artifactId>spss-reader</artifactId>
+      <version>1.3</version>
+    </dependency>
+
+    <!-- Test dependencies -->
+    <dependency>
+      <groupId>org.apache.drill.exec</groupId>
+      <artifactId>drill-java-exec</artifactId>
+      <classifier>tests</classifier>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.drill</groupId>
+      <artifactId>drill-common</artifactId>
+      <classifier>tests</classifier>
+      <version>${project.version}</version>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-resources-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>copy-java-sources</id>
+            <phase>process-sources</phase>
+            <goals>
+              <goal>copy-resources</goal>
+            </goals>
+            <configuration>
+              <outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/syslog

Review comment:
       `syslog`?
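    Presumably a copy-and-paste leftover from the syslog plugin; the copied sources would land under the `spss` package instead, along these lines:

```xml
<!-- Presumed fix: point the copied sources at the spss package. -->
<outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/spss</outputDirectory>
```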

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();

Review comment:
       I think `Closeables.closeSilently(fsStream)` is the preferred approach these days.
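    A minimal sketch of `close()` rewritten around that helper; the exact utility class is an assumption (Drill's `org.apache.drill.common.AutoCloseables` offers a similar `closeSilently`), so treat it as illustrative:

```java
// Sketch only: assumes a closeSilently(AutoCloseable...) utility such as
// org.apache.drill.common.AutoCloseables, which logs and swallows close errors.
@Override
public void close() {
  AutoCloseables.closeSilently(fsStream);
  fsStream = null;
}
```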

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {

Review comment:
       Nit: `for (`

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())

Review comment:
       Not necessary. Better is:
   
   ```
   .context("Error reading SPSS File.")
   ```
   And let the message be the underlying error.
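    In other words, something along these lines, sketched with the `addContext(String)` overload already used elsewhere in this diff (whether a shorter `context(...)` alias exists is not confirmed here):

```java
// Let the underlying IOException supply the message; keep the note as context.
throw UserException
  .dataReadError(e)
  .addContext("Error reading SPSS File.")
  .addContext(errorContext)
  .build(logger);
```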

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;

Review comment:
       Nit: no need to double-space fields. Single-spacing is more compact.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+

Review comment:
       Nit: extra newlines.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;
+
+    boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+      super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+        labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+        labels = reader.getValueLabels(columnName);
+        hasLabels = true;
+      }
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+
+      double value = reader.getDoubleValue(columnName);

Review comment:
       Nice use of your `SpssColumnWriter` class to avoid a name lookup for each column write. I wonder, does SPSS provide an indexed way to get values? Do the values form a row (tuple) in addition to a map?
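
       To make the point concrete, here is the difference in miniature. This is only a sketch; the `survey` column name is illustrative and not from this PR:

    ```java
    // Name lookup on every row: the column writer is resolved by name for each write.
    rowWriter.scalar("survey").setDouble(reader.getDoubleValue("survey"));

    // Cached writer, as this class does: resolve the ScalarWriter once when the
    // reader list is built, then reuse it for every subsequent row.
    ScalarWriter surveyWriter = rowWriter.scalar("survey");  // one-time lookup
    surveyWriter.setDouble(reader.getDoubleValue("survey")); // per-row write
    ```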

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;
+
+    boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+      super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+        labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+        labels = reader.getValueLabels(columnName);
+        hasLabels = true;

Review comment:
       Nit: `hasLabels` is redundant: can check if `labelWriter` is `null` below.
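
       For example, `load()` could key off the writer itself. A sketch only, assuming the label lookup is `labels.get(value)` and that an unwritten nullable column is simply left null:

    ```java
    @Override
    public void load(SpssDataFileReader reader) {
      double value = reader.getDoubleValue(columnName);
      writer.setDouble(value);

      // The presence of labelWriter already tells us this column has labels.
      if (labelWriter != null) {
        String label = labels.get(value);
        if (label != null) {
          labelWriter.setString(label);
        }
      }
    }
    ```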

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;

Review comment:
       This makes me nervous: `double` is a fragile thing to map from. Does SPSS require that indexed columns have integer values? If so, map from an `Integer`, which is more reliable.
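
       To illustrate the suggestion (a sketch only, and it assumes SPSS label codes really are whole numbers; uses `java.util.HashMap`):

    ```java
    // Current approach: the raw double read from the data file is the map key, so
    // the lookup depends on exact equality with the key stored in the label table.
    Map<Double, String> labels = reader.getValueLabels(columnName);
    String label = labels.get(reader.getDoubleValue(columnName));

    // If the codes are guaranteed to be integers, an Integer key is sturdier:
    Map<Integer, String> intLabels = new HashMap<>();
    for (Map.Entry<Double, String> entry : labels.entrySet()) {
      intLabels.put((int) Math.round(entry.getKey()), entry.getValue());
    }
    String intLabel = intLabels.get((int) Math.round(reader.getDoubleValue(columnName)));
    ```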







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419036031



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;

Review comment:
       Removed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419812810



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on pull request #2067:
URL: https://github.com/apache/drill/pull/2067#issuecomment-623789077


   @paul-rogers 
   Thanks for the review.  I believe I addressed all your comments. 





[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419814031



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;

Review comment:
       This is weird but, unfortunately, the only data types this format supports are strings and numerics, which are floating-point numbers.

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for (SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    final String columnName;
+
+    final ScalarWriter writer;
+
+    public SpssColumnWriter(String columnName, ScalarWriter writer) {
+      this.columnName = columnName;
+      this.writer = writer;
+    }
+
+
+    public abstract void load (SpssDataFileReader reader);
+  }
+
+  public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+    StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+      super(columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(SpssDataFileReader reader) {
+      writer.setString(reader.getStringValue(columnName));
+    }
+  }
+
+  public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+    ScalarWriter labelWriter;
+
+    Map<Double, String> labels;
+
+    boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+      super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+        labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+        labels = reader.getValueLabels(columnName);
+        hasLabels = true;

Review comment:
       Removed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419035267



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    String columnName;
+
+    ScalarWriter writer;
+
+    public boolean isNumeric;

Review comment:
       Removed.







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419037264



##########
File path: exec/java-exec/src/test/java/org/apache/drill/test/ClusterTest.java
##########
@@ -125,4 +139,29 @@ public static void run(String query, Object... args) throws Exception {
   public QueryBuilder queryBuilder( ) {
     return client.queryBuilder();
   }
+
+  /**
+   * Generates a compressed file for testing
+   * @param fileName the input file to be compressed
+   * @param codecName the CODEC to be used for compression
+   * @param outFileName the output file name
+   * @throws IOException Throws IO exception if the file cannot be found or any other IO error
+   */
+  public void generateCompressedFile(String fileName, String codecName, String outFileName) throws IOException {

Review comment:
       Moved to `QueryTestUtil`
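
       For readers of the thread, a rough sketch of what a codec-based helper like this can look like. This is illustrative only and assumes Hadoop's `CompressionCodecFactory`; the method that actually landed in `QueryTestUtil` may differ:

    ```java
    // Illustrative imports: org.apache.hadoop.conf.Configuration,
    // org.apache.hadoop.io.IOUtils,
    // org.apache.hadoop.io.compress.CompressionCodec / CompressionCodecFactory,
    // java.io.FileInputStream / FileOutputStream / InputStream / OutputStream.
    public static void generateCompressedFile(String fileName, String codecName, String outFileName) throws IOException {
      CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
      CompressionCodec codec = factory.getCodecByName(codecName);

      try (InputStream in = new FileInputStream(fileName);
           OutputStream out = codec.createOutputStream(new FileOutputStream(outFileName))) {
        // Copy the uncompressed input through the codec's compressing output stream.
        IOUtils.copyBytes(in, out, 4096);
      }
    }
    ```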







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419813495



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())

Review comment:
       I think this is what you meant here. 

##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+  private CustomErrorContext errorContext;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    errorContext = negotiator.parentErrorContext();
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .addContext(errorContext)
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase()) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        spssColumnWriter.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())

Review comment:
       ```java
       throw UserException
           .dataReadError(e)
           .message("Error reading SPSS File.")
           .addContext(errorContext)
           .build(logger);
       ```
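
       (Presumably the point of the suggestion: `dataReadError(e)` already attaches the exception as the cause, and the `errorContext` captured from `negotiator.parentErrorContext()` in `open()` identifies the file being scanned, so chaining `e.getMessage()` again would only duplicate information in the user-facing error.)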







[GitHub] [drill] cgivre commented on pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on pull request #2067:
URL: https://github.com/apache/drill/pull/2067#issuecomment-623045652


   @vvysotskyi 
   Thanks for the quick review!  I addressed all of your review comments.  Regarding updating the website, I'm happy to do so, but I've never done it before. I'll create a JIRA for the website update and add this documentation to it.





[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419036988



##########
File path: protocol/src/main/protobuf/UserBitShared.proto
##########
@@ -379,7 +379,11 @@ enum CoreOperatorType {
   SHP_SUB_SCAN = 65;
   METADATA_HANDLER = 66;
   METADATA_CONTROLLER = 67;
+  // 68 Reserved for Apache Druid

Review comment:
       Comments removed and protobufs rebuilt.







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419034264



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]
+ 
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.

Review comment:
       Fixed.







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419668523



##########
File path: pom.xml
##########
@@ -359,6 +359,7 @@
             <exclude>**/*.pcap</exclude>
             <exclude>**/*.log1</exclude>
             <exclude>**/*.log2</exclude>
+            <exclude>**/*.sav</exclude>

Review comment:
       Done.
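
       For context (inferred from the neighbouring `**/*.pcap` and log patterns rather than stated in the diff): this is the license-check exclusion list, and the binary `.sav` SPSS test fixtures cannot carry Apache license headers, hence the new pattern.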







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419811899



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419034417



##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,87 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: [1]

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419035598



##########
File path: contrib/format-spss/src/main/resources/bootstrap-format-plugins.json
##########
@@ -0,0 +1,37 @@
+{
+  "storage":{

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419034736



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {

Review comment:
       Fixed







[GitHub] [drill] cgivre commented on a change in pull request #2067: DRILL-7716: Create Format Plugin for SPSS Files

Posted by GitBox <gi...@apache.org>.
cgivre commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419035528



##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+  private static final String VALUE_LABEL = "_value";
+
+  private final SpssReaderConfig readerConfig;
+
+  private FileSplit split;
+
+  private InputStream fsStream;
+
+  private SpssDataFileReader spssReader;
+
+  private RowSetLoader rowWriter;
+
+  private List<SpssVariable> variableList;
+
+  private List<SpssColumnWriter> writerList;
+
+
+  public static class SpssReaderConfig {
+
+    protected final SpssFormatPlugin plugin;
+
+    public SpssReaderConfig(SpssFormatPlugin plugin) {
+      this.plugin = plugin;
+    }
+  }
+
+  public SpssBatchReader(SpssReaderConfig readerConfig) {
+    this.readerConfig = readerConfig;
+  }
+
+  @Override
+  public boolean open(FileSchemaNegotiator negotiator) {
+    split = negotiator.split();
+    openFile(negotiator);
+    negotiator.tableSchema(buildSchema(), true);
+    ResultSetLoader loader = negotiator.build();
+    rowWriter = loader.writer();
+    buildReaderList();
+
+    return true;
+  }
+
+  @Override
+  public boolean next() {
+    while (!rowWriter.isFull()) {
+      if (!processNextRow()) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  @Override
+  public void close() {
+    if (fsStream != null) {
+      try {
+        fsStream.close();
+      } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+      }
+      fsStream = null;
+    }
+  }
+
+  private void openFile(FileSchemaNegotiator negotiator) {
+    try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+      spssReader = new SpssDataFileReader(fsStream);
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Unable to open SPSS File %s", split.getPath())
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+  }
+
+  private boolean processNextRow() {
+    try {
+      SpssColumnWriter currentColumn;
+
+      // Stop reading when you run out of data
+      if (!spssReader.readNextCase() ) {
+        return false;
+      }
+
+      rowWriter.start();
+      for (SpssColumnWriter spssColumnWriter : writerList) {
+        currentColumn = spssColumnWriter;
+        currentColumn.load(spssReader);
+      }
+      rowWriter.save();
+
+    } catch (IOException e) {
+      throw UserException
+        .dataReadError(e)
+        .message("Error reading SPSS File.")
+        .addContext(e.getMessage())
+        .build(logger);
+    }
+    return true;
+  }
+
+  private TupleMetadata buildSchema() {
+    SchemaBuilder builder = new SchemaBuilder();
+    variableList = spssReader.getVariables();
+
+    for(SpssVariable variable : variableList) {
+      String varName = variable.getVariableName();
+
+      if (variable.isNumeric()) {
+        builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+        // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+        }
+
+      } else {
+        builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+      }
+    }
+    return builder.buildSchema();
+  }
+
+  private void buildReaderList() {
+    writerList = new ArrayList<>();
+
+    for(SpssVariable variable : variableList) {
+      if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+      }
+    }
+  }
+
+  public abstract static class SpssColumnWriter {
+    String columnName;

Review comment:
       Added constructor.
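
       For readers following the thread, a minimal sketch of the constructor-based writer hierarchy being discussed. Names follow the diff above; the `getStringValue` accessor on `SpssDataFileReader` is an assumption about the bedatadriven API rather than a confirmed signature:

       ```java
       public abstract static class SpssColumnWriter {
         final String columnName;
         final ScalarWriter writer;

         public SpssColumnWriter(String columnName, ScalarWriter writer) {
           this.columnName = columnName;
           this.writer = writer;
         }

         // Reads the current case's value for this column and writes it to the row.
         public abstract void load(SpssDataFileReader reader);
       }

       public static class StringSpssColumnWriter extends SpssColumnWriter {

         StringSpssColumnWriter(String columnName, RowSetLoader rowWriter) {
           super(columnName, rowWriter.scalar(columnName));
         }

         @Override
         public void load(SpssDataFileReader reader) {
           // Assumed accessor; writes the current case's string value for this column.
           writer.setString(reader.getStringValue(columnName));
         }
       }
       ```

       Holding the `ScalarWriter` in the base class keeps the per-row loop in `processNextRow()` down to a single `load(spssReader)` call per column.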



