Posted to dev@systemds.apache.org by GitBox <gi...@apache.org> on 2021/01/13 17:11:21 UTC

[GitHub] [systemds] kpretterhofer opened a new pull request #1153: Gaussian Classifier

kpretterhofer opened a new pull request #1153:
URL: https://github.com/apache/systemds/pull/1153


   This implementation computes a simple Gaussian classifier, i.e. it outputs the respective parameters needed for classification.
   
   As input, the function receives a feature matrix and a target vector (plus a small value for smoothing the variances, to prevent numerical errors). The function computes and returns:
   
   - prior probability
   - means
   - determinants
   - inverse covariance matrix
   
   per class. 
   For classification one can compute: p(C=c | x) ∝ p(x | c) * p(c),
   where p(x | c) is the (multivariate) Gaussian PDF for class c, and p(c) is the prior probability of class c.
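   For illustration, the decision rule above can be sketched in a few lines of numpy; the function and parameter names here are hypothetical, not the builtin's actual API:

```python
import numpy as np

def predict_class(x, priors, means, determinants, inv_covs):
    # Score each class by the multivariate Gaussian log-density plus the
    # log prior, and return the index of the best-scoring class.
    d = x.shape[0]
    scores = []
    for c in range(len(priors)):
        diff = x - means[c]
        log_lik = -0.5 * (d * np.log(2.0 * np.pi)
                          + np.log(determinants[c])
                          + diff @ inv_covs[c] @ diff)
        scores.append(log_lik + np.log(priors[c]))
    return int(np.argmax(scores))
```

   Working in log space avoids underflow of the Gaussian PDF for high-dimensional feature vectors.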
   
   Please let me know if and how I can still improve the code so that it fits well into SystemDS.
   
   One thing I was quite unsure about was the unit tests. Since calculating the determinants and the inverses of the covariance matrices can lead to floating-point errors, I was not sure how to compare the results. As suggested on the mailing list, I compared most of them via the average bit distance, with a rather high maxUnitsOfLeastPrecision.
   Although the values of the inverse covariance matrices can differ a lot (SystemDS vs R), I am fairly sure that the computation is correct, since multiplying them with the covariance matrices themselves leads to the identity (which I tested during development).
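   The identity check described here can be sketched in numpy (assuming the data of a single class, with a smoothing constant mirroring the builtin's 1e-9 default):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                # samples of one class
cov = np.cov(X, rowvar=False)                    # 5x5 sample covariance
cov += 1e-9 * cov.diagonal().max() * np.eye(5)   # smooth the variances
inv_cov = np.linalg.inv(cov)
# Multiplying the covariance by its inverse should recover the identity,
# even when individual inverse entries differ between implementations.
assert np.allclose(cov @ inv_cov, np.eye(5), atol=1e-8)
```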


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [systemds] kpretterhofer commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-790654858


   > Hi,
   > Thank you for your contribution. Could you please add a test to predict some data using the classifier. We have iris dataset inside function/transform/input directory. You can write a test to predict the class labels for iris.
   
   Thanks for the feedback!
   Just one follow-up question: Should I only write a test for predictions, or should I also implement a builtin prediction function? (The current implementation only computes and returns the parameters needed for prediction; that's why I am asking.)





[GitHub] [systemds] kpretterhofer commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r561930893



##########
File path: src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGaussianClassifierTest.java
##########
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.sysds.test.functions.builtin;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+
+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;
+import org.apache.sysds.test.AutomatedTestBase;
+import org.apache.sysds.test.TestConfiguration;
+import org.apache.sysds.test.TestUtils;
+import org.junit.Test;
+
+public class BuiltinGaussianClassifierTest extends AutomatedTestBase
+{
+	private final static String TEST_NAME = "GaussianClassifier";
+	private final static String TEST_DIR = "functions/builtin/";
+	private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinGaussianClassifierTest.class.getSimpleName() + "/";
+
+
+	@Override
+	public void setUp() {
+		addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{"B"})); 
+	}
+
+
+	@Test
+	public void testSmallDenseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.9, 5);
+	}
+
+	@Test
+	public void testSmallDenseTenClasses() {
+		testGaussianClassifier(80, 30, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerDenseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.9, 5);
+	}
+
+	@Test
+	public void testBiggerDenseTenClasses() {
+		testGaussianClassifier(200, 50, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerSparseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.3, 5);
+	}
+
+	@Test
+	public void testBiggerSparseTenClasses() {
+		testGaussianClassifier(200, 50, 0.3, 10);
+	}
+
+	@Test
+	public void testSmallSparseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.3, 5);
+	}
+
+	@Test
+	public void testSmallSparseTenClasses() {
+		testGaussianClassifier(80, 30, 0.3, 10);
+	}
+
+	public void testGaussianClassifier(int rows, int cols, double sparsity, int classes)
+	{
+		loadTestConfiguration(getTestConfiguration(TEST_NAME));
+		String HOME = SCRIPT_DIR + TEST_DIR;
+		fullDMLScriptName = HOME + TEST_NAME + ".dml";
+		double varSmoothing = 1e-9;
+
+		List<String> proArgs = new ArrayList<>();
+		proArgs.add("-args");
+		proArgs.add(input("X"));
+		proArgs.add(input("Y"));
+		proArgs.add(String.valueOf(varSmoothing));
+		proArgs.add(output("priors"));
+		proArgs.add(output("means"));
+		proArgs.add(output("determinants"));
+		proArgs.add(output("invcovs"));
+
+		programArgs = proArgs.toArray(new String[proArgs.size()]);
+
+		rCmd = getRCmd(inputDir(), Double.toString(varSmoothing), expectedDir());
+		
+		double[][] X = getRandomMatrix(rows, cols, 0, 100, sparsity, -1);
+		double[][] Y = getRandomMatrix(rows, 1, 0, 1, 1, -1);
+		for(int i=0; i<rows; i++){
+			Y[i][0] = (int)(Y[i][0]*classes) + 1;
+			Y[i][0] = (Y[i][0] > classes) ? classes : Y[i][0];
+		}
+
+		writeInputMatrixWithMTD("X", X, true);
+		writeInputMatrixWithMTD("Y", Y, true);
+
+		runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);
+
+		runRScript(true);
+
+		HashMap<CellIndex, Double> priorR = readRMatrixFromExpectedDir("priors");
+		HashMap<CellIndex, Double> priorSYSTEMDS= readDMLMatrixFromOutputDir("priors");
+		HashMap<CellIndex, Double> meansRtemp = readRMatrixFromExpectedDir("means");
+		HashMap<CellIndex, Double> meansSYSTEMDStemp = readDMLMatrixFromOutputDir("means");
+		HashMap<CellIndex, Double> determinantsRtemp = readRMatrixFromExpectedDir("determinants");
+		HashMap<CellIndex, Double> determinantsSYSTEMDStemp = readDMLMatrixFromOutputDir("determinants");
+		HashMap<CellIndex, Double> invcovsRtemp = readRMatrixFromExpectedDir("invcovs");
+		HashMap<CellIndex, Double> invcovsSYSTEMDStemp = readDMLMatrixFromOutputDir("invcovs");
+
+		double[][] meansR = TestUtils.convertHashMapToDoubleArray(meansRtemp);
+		double[][] meansSYSTEMDS = TestUtils.convertHashMapToDoubleArray(meansSYSTEMDStemp);
+		double[][] determinantsR = TestUtils.convertHashMapToDoubleArray(determinantsRtemp);
+		double[][] determinantsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(determinantsSYSTEMDStemp);
+		double[][] invcovsR = TestUtils.convertHashMapToDoubleArray(invcovsRtemp);
+		double[][] invcovsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(invcovsSYSTEMDStemp);
+
+		TestUtils.compareMatrices(priorR, priorSYSTEMDS, Math.pow(10, -5.0), "priorR", "priorSYSTEMDS");
+		TestUtils.compareMatricesBitAvgDistance(meansR, meansSYSTEMDS, 5L,5L, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(determinantsR, determinantsSYSTEMDS, (long)2E+12,(long)2E+12, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(invcovsR, invcovsSYSTEMDS, (long)2E+20,(long)2E+20, this.toString());

Review comment:
       For now I have not added such a test, since the covariance matrix itself is not part of the output parameters. The only way to perform such a test is to recompute the covariance matrices inside the DML test file and then multiply them with their respective inverses. Of course I can implement it like this, but I wanted to make sure this is the correct approach, since it seems a bit odd to me to recalculate the covariance matrices inside the test files.







[GitHub] [systemds] Baunsgaard commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Baunsgaard commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-760750295


   
   > One thing where I was quite unsure was the unit tests. Since calculating determinants and the inverse of the covariance matrices can lead to floating point errors, I was not quite sure how to compare the results. I did compare most of them, as suggested in the mailing list, with the avg. bit distance, with a quite high maxUnitsOfLeastPrecision.
   
   That may be because the correction value is overwritten in DML, and therefore not the same parameter actually used by the algorithm. Still, as long as the difference is this small, I consider it fine.
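   As an aside, why the smoothing term matters at all can be shown with a toy numpy example (illustrative only, not the DML implementation):

```python
import numpy as np

cov = np.array([[1.0, 1.0], [1.0, 1.0]])  # rank-deficient, hence singular
eps = 1e-9
smoothed = cov + eps * cov.diagonal().max() * np.eye(2)

assert abs(np.linalg.det(cov)) < 1e-12    # singular: determinant is zero
assert np.linalg.det(smoothed) > 0.0      # tiny diagonal bump fixes that
inv = np.linalg.inv(smoothed)             # no LinAlgError anymore
```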
   
   > Although the values from the inverse covariance matrices can differ a lot (SystemDS vs R), I am pretty sure that the computation is correct, since multiplying it with the covariance matrix itself leads to the identity (which I tested during development).
   
   Please do add such a test back, since it reflects "a function of the algorithm" rather than "similarity to R"; in the end, what we want is the first, not necessarily the second.





[GitHub] [systemds] kpretterhofer commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r561928757



##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitly defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Set varSmoothing (to prevent numerical errors)
+  varSmoothing = 1e-9
+
+  #Compute means, variances and priors
+  classCounts = aggregate(target=C, groups=C, fn="count", ngroups=as.integer(nClasses));
+  classMeans = aggregate(target=D, groups=C, fn="mean", ngroups=as.integer(nClasses));
+  classVars = aggregate(target=D, groups=C, fn="variance", ngroups=as.integer(nClasses));
+  classPriors = classCounts / nSamples
+
+  smoothedVar = diag(matrix(1.0, rows=nFeats, cols=1)) * max(classVars) * varSmoothing
+
+  classInvCovariances = list()
+  determinants = matrix(0, rows=nClasses, cols=1)
+
+  #Compute determinants and inverseCovariances
+  for (class in 1:nClasses)

Review comment:
       I pushed a new commit. Unfortunately, parfor was not possible in this case, since I append the results to a list, which cannot be parallelized.








[GitHub] [systemds] Shafaq-Siddiqi closed pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Shafaq-Siddiqi closed pull request #1153:
URL: https://github.com/apache/systemds/pull/1153


   





[GitHub] [systemds] Shafaq-Siddiqi commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Shafaq-Siddiqi commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-790656683


   > > Hi,
   > > Thank you for your contribution. Could you please add a test to predict some data using the classifier. We have iris dataset inside function/transform/input directory. You can write a test to predict the class labels for iris.
   > 
   > Thanks for the feedback!
   > Just one follow-up question: Should I only write a test for predictions, or should I also implement a builtin prediction function? (The current implementation only computes and returns the parameters needed for prediction; that's why I am asking.)
   
   For now, I would suggest writing a test for prediction.





[GitHub] [systemds] Baunsgaard commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Baunsgaard commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r558124043



##########
File path: src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGaussianClassifierTest.java
##########

Review comment:
       yes covariance * invCovariance == Identity.
   
   and if there are other interesting properties. My initial understanding was that there was a way to reconstruct the input from the output, but i also might have missed a point, and looking a covariance there does not seem to be a way of doing that?







[GitHub] [systemds] kpretterhofer commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r561928757



##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitely defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Set varSmoothing (to prevent numerical errors)
+  varSmoothing = 1e-9
+
+  #Compute means, variances and priors
+  classCounts = aggregate(target=C, groups=C, fn="count", ngroups=as.integer(nClasses));
+  classMeans = aggregate(target=D, groups=C, fn="mean", ngroups=as.integer(nClasses));
+  classVars = aggregate(target=D, groups=C, fn="variance", ngroups=as.integer(nClasses));
+  classPriors = classCounts / nSamples
+
+  smoothedVar = diag(matrix(1.0, rows=nFeats, cols=1)) * max(classVars) * varSmoothing
+
+  classInvCovariances = list()
+  determinants = matrix(0, rows=nClasses, cols=1)
+
+  #Compute determinants and inverseCovariances
+  for (class in 1:nClasses)

Review comment:
       I pushed a new commit. Unfortunately, parfor was not possible in this case, since I append the results to a list, which cannot be parallelized.
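For reference, the per-class body of that loop (smoothed covariance, determinant, inverse) can be sketched in plain Python on a hypothetical two-feature toy dataset; the class data and helper names below are invented for illustration and are not part of the builtin:

```python
# Illustrative 2-feature sketch of the per-class loop in
# gaussianClassifier.dml: per class, compute a variance-smoothed
# covariance matrix, its determinant, and its inverse.

def cov2x2(samples):
    """Sample covariance (denominator n-1) of a list of (x1, x2) pairs."""
    n = len(samples)
    m1 = sum(s[0] for s in samples) / n
    m2 = sum(s[1] for s in samples) / n
    c11 = sum((s[0] - m1) ** 2 for s in samples) / (n - 1)
    c22 = sum((s[1] - m2) ** 2 for s in samples) / (n - 1)
    c12 = sum((s[0] - m1) * (s[1] - m2) for s in samples) / (n - 1)
    return [[c11, c12], [c12, c22]]

def inv2x2(a):
    """Determinant and analytic inverse of a 2x2 matrix."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return det, [[ a[1][1] / det, -a[0][1] / det],
                 [-a[1][0] / det,  a[0][0] / det]]

# Two made-up classes; varSmoothing is added along the diagonal,
# mirroring what the builtin does to avoid singular matrices.
classes = {
    1: [(1.0, 2.0), (1.2, 1.9), (0.8, 2.2)],
    2: [(5.0, 7.0), (5.5, 6.8), (4.8, 7.3)],
}
varSmoothing = 1e-9
determinants, inv_covs = {}, {}
for c, samples in classes.items():
    cov = cov2x2(samples)
    for i in range(2):  # smooth the variances
        cov[i][i] += varSmoothing
    det, inv = inv2x2(cov)
    determinants[c], inv_covs[c] = det, inv
```

Each iteration depends only on its own class's rows, so the work itself is independent; it is only the `append` to the shared result list that forces the sequential loop in DML.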




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [systemds] kpretterhofer commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-790845421


   > > > Hi,
   > > > Thank you for your contribution. Could you please add a test that predicts some data using the classifier? We have the iris dataset inside the function/transform/input directory. You can write a test to predict the class labels for iris.
   > > 
   > > 
   > > Thanks for the feedback!
   > > Probably just one follow-up question: Should I only write a test for predictions, or should I also implement a builtin prediction function? (The current implementation only computes and returns the parameters needed for prediction; that's why I am asking)
   > 
   > For now, I would suggest writing a test for prediction.
   
   Thanks! 
   I have added such a test, taking 45 samples per iris class to train the algorithm and 5 samples per class to predict. Since this test differs slightly from the others, I needed to create a second DML script for testing. I hope this approach is okay! 
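For readers following along, the prediction step such a test exercises is the usual Gaussian log-posterior rule: pick the class c maximizing log p(c) - 1/2 log((2*pi)^d * det(Sigma_c)) - 1/2 (x - mu_c)^T Sigma_c^-1 (x - mu_c), as in the R validation script further down the thread. A minimal one-feature Python sketch, where the priors, means, and variances are invented and not the iris values:

```python
import math

# One-feature sketch of the Gaussian prediction rule: choose the class
# maximizing  log p(c) - 1/2*log(2*pi*var_c) - (x - mean_c)^2 / (2*var_c).
# Parameters below are made up for illustration.

params = {  # class -> (prior, mean, variance)
    1: (0.5, 1.0, 0.2),
    2: (0.5, 4.0, 0.3),
}

def predict(x):
    def log_post(c):
        prior, mean, var = params[c]
        return (math.log(prior)
                - 0.5 * math.log(2.0 * math.pi * var)
                - (x - mean) ** 2 / (2.0 * var))
    return max(params, key=log_post)
```

With full covariance matrices the quadratic term becomes the matrix-vector form (x - mu)^T Sigma^-1 (x - mu), which is exactly what the returned inverse covariances and determinants are for.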





[GitHub] [systemds] Shafaq-Siddiqi commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Shafaq-Siddiqi commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-790572145


   Hi,
   Thank you for your contribution. Could you please add a test that predicts some data using the classifier? We have the iris dataset inside the function/transform/input directory. You can write a test to predict the class labels for iris.





[GitHub] [systemds] Shafaq-Siddiqi commented on pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Shafaq-Siddiqi commented on pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#issuecomment-796731770


   LGTM,
   Thank you @kpretterhofer for your contribution. During the merge, I fixed some minor formatting and added a TODO for future reference. 





[GitHub] [systemds] kpretterhofer commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r558095262



##########
File path: src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGaussianClassifierTest.java
##########
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.sysds.test.functions.builtin;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+
+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;
+import org.apache.sysds.test.AutomatedTestBase;
+import org.apache.sysds.test.TestConfiguration;
+import org.apache.sysds.test.TestUtils;
+import org.junit.Test;
+
+public class BuiltinGaussianClassifierTest extends AutomatedTestBase
+{
+	private final static String TEST_NAME = "GaussianClassifier";
+	private final static String TEST_DIR = "functions/builtin/";
+	private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinGaussianClassifierTest.class.getSimpleName() + "/";
+
+
+	@Override
+	public void setUp() {
+		addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{"B"})); 
+	}
+
+
+	@Test
+	public void testSmallDenseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.9, 5);
+	}
+
+	@Test
+	public void testSmallDenseTenClasses() {
+		testGaussianClassifier(80, 30, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerDenseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.9, 5);
+	}
+
+	@Test
+	public void testBiggerDenseTenClasses() {
+		testGaussianClassifier(200, 50, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerSparseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.3, 5);
+	}
+
+	@Test
+	public void testBiggerSparseTenClasses() {
+		testGaussianClassifier(200, 50, 0.3, 10);
+	}
+
+	@Test
+	public void testSmallSparseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.3, 5);
+	}
+
+	@Test
+	public void testSmallSparseTenClasses() {
+		testGaussianClassifier(80, 30, 0.3, 10);
+	}
+
+	public void testGaussianClassifier(int rows, int cols, double sparsity, int classes)
+	{
+		loadTestConfiguration(getTestConfiguration(TEST_NAME));
+		String HOME = SCRIPT_DIR + TEST_DIR;
+		fullDMLScriptName = HOME + TEST_NAME + ".dml";
+		;
+		double varSmoothing = 1e-9;
+
+		List<String> proArgs = new ArrayList<>();
+		proArgs.add("-args");
+		proArgs.add(input("X"));
+		proArgs.add(input("Y"));
+		proArgs.add(String.valueOf(varSmoothing));
+		proArgs.add(output("priors"));
+		proArgs.add(output("means"));
+		proArgs.add(output("determinants"));
+		proArgs.add(output("invcovs"));
+
+		programArgs = proArgs.toArray(new String[proArgs.size()]);
+
+		rCmd = getRCmd(inputDir(), Double.toString(varSmoothing), expectedDir());
+		
+		double[][] X = getRandomMatrix(rows, cols, 0, 100, sparsity, -1);
+		double[][] Y = getRandomMatrix(rows, 1, 0, 1, 1, -1);
+		for(int i=0; i<rows; i++){
+			Y[i][0] = (int)(Y[i][0]*classes) + 1;
+			Y[i][0] = (Y[i][0] > classes) ? classes : Y[i][0];
+		}
+
+		writeInputMatrixWithMTD("X", X, true);
+		writeInputMatrixWithMTD("Y", Y, true);
+
+		runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);
+
+		runRScript(true);
+
+		HashMap<CellIndex, Double> priorR = readRMatrixFromExpectedDir("priors");
+		HashMap<CellIndex, Double> priorSYSTEMDS= readDMLMatrixFromOutputDir("priors");
+		HashMap<CellIndex, Double> meansRtemp = readRMatrixFromExpectedDir("means");
+		HashMap<CellIndex, Double> meansSYSTEMDStemp = readDMLMatrixFromOutputDir("means");
+		HashMap<CellIndex, Double> determinantsRtemp = readRMatrixFromExpectedDir("determinants");
+		HashMap<CellIndex, Double> determinantsSYSTEMDStemp = readDMLMatrixFromOutputDir("determinants");
+		HashMap<CellIndex, Double> invcovsRtemp = readRMatrixFromExpectedDir("invcovs");
+		HashMap<CellIndex, Double> invcovsSYSTEMDStemp = readDMLMatrixFromOutputDir("invcovs");
+
+		double[][] meansR = TestUtils.convertHashMapToDoubleArray(meansRtemp);
+		double[][] meansSYSTEMDS = TestUtils.convertHashMapToDoubleArray(meansSYSTEMDStemp);
+		double[][] determinantsR = TestUtils.convertHashMapToDoubleArray(determinantsRtemp);
+		double[][] determinantsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(determinantsSYSTEMDStemp);
+		double[][] invcovsR = TestUtils.convertHashMapToDoubleArray(invcovsRtemp);
+		double[][] invcovsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(invcovsSYSTEMDStemp);
+
+		TestUtils.compareMatrices(priorR, priorSYSTEMDS, Math.pow(10, -5.0), "priorR", "priorSYSTEMDS");
+		TestUtils.compareMatricesBitAvgDistance(meansR, meansSYSTEMDS, 5L,5L, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(determinantsR, determinantsSYSTEMDS, (long)2E+12,(long)2E+12, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(invcovsR, invcovsSYSTEMDS, (long)2E+20,(long)2E+20, this.toString());

Review comment:
       thanks for the feedback. I will commit the suggested changes as soon as possible.
   Just one follow-up question on this specific suggestion:
   Do you mean I should check whether covariance * invCovariance is indeed the identity matrix?
   If so, since the covariance matrix is not returned (but just the inverse), should I compute the
   respective covariance matrices again in the test DML file and then return the product, to check
   whether it is indeed the identity?
   If this is not what you meant, it would be great if you could clarify.
   Thanks :)
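To frame the discussion: such a check avoids comparing raw inverse-covariance entries across implementations (which diverge under floating point) and instead tests that cov * invCov is numerically close to the identity. A small plain-Python sketch, with matrix values invented for illustration:

```python
# Sanity check for a computed inverse: cov * inv should be close to I,
# regardless of which implementation produced the inverse.

def matmul(a, b):
    """Naive dense matrix product of row-major nested lists."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def is_identity(a, tol=1e-6):
    """True if the square matrix a is elementwise within tol of I."""
    return all(abs(a[i][j] - (1.0 if i == j else 0.0)) <= tol
               for i in range(len(a)) for j in range(len(a)))

# invented smoothed covariance and its analytic 2x2 inverse
# (in practice the inverse would come from the builtin under test)
cov = [[0.04, -0.03], [-0.03, 0.0233]]
det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
inv = [[ cov[1][1] / det, -cov[0][1] / det],
       [-cov[1][0] / det,  cov[0][0] / det]]

ok = is_identity(matmul(cov, inv))  # the proposed check
```

The tolerance absorbs the floating-point differences between implementations that make entrywise comparison of the inverses themselves so brittle.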







[GitHub] [systemds] kpretterhofer commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
kpretterhofer commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r558131702



##########
File path: src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGaussianClassifierTest.java
##########
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.sysds.test.functions.builtin;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+
+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;
+import org.apache.sysds.test.AutomatedTestBase;
+import org.apache.sysds.test.TestConfiguration;
+import org.apache.sysds.test.TestUtils;
+import org.junit.Test;
+
+public class BuiltinGaussianClassifierTest extends AutomatedTestBase
+{
+	private final static String TEST_NAME = "GaussianClassifier";
+	private final static String TEST_DIR = "functions/builtin/";
+	private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinGaussianClassifierTest.class.getSimpleName() + "/";
+
+
+	@Override
+	public void setUp() {
+		addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{"B"})); 
+	}
+
+
+	@Test
+	public void testSmallDenseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.9, 5);
+	}
+
+	@Test
+	public void testSmallDenseTenClasses() {
+		testGaussianClassifier(80, 30, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerDenseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.9, 5);
+	}
+
+	@Test
+	public void testBiggerDenseTenClasses() {
+		testGaussianClassifier(200, 50, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerSparseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.3, 5);
+	}
+
+	@Test
+	public void testBiggerSparseTenClasses() {
+		testGaussianClassifier(200, 50, 0.3, 10);
+	}
+
+	@Test
+	public void testSmallSparseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.3, 5);
+	}
+
+	@Test
+	public void testSmallSparseTenClasses() {
+		testGaussianClassifier(80, 30, 0.3, 10);
+	}
+
+	public void testGaussianClassifier(int rows, int cols, double sparsity, int classes)
+	{
+		loadTestConfiguration(getTestConfiguration(TEST_NAME));
+		String HOME = SCRIPT_DIR + TEST_DIR;
+		fullDMLScriptName = HOME + TEST_NAME + ".dml";
+		;
+		double varSmoothing = 1e-9;
+
+		List<String> proArgs = new ArrayList<>();
+		proArgs.add("-args");
+		proArgs.add(input("X"));
+		proArgs.add(input("Y"));
+		proArgs.add(String.valueOf(varSmoothing));
+		proArgs.add(output("priors"));
+		proArgs.add(output("means"));
+		proArgs.add(output("determinants"));
+		proArgs.add(output("invcovs"));
+
+		programArgs = proArgs.toArray(new String[proArgs.size()]);
+
+		rCmd = getRCmd(inputDir(), Double.toString(varSmoothing), expectedDir());
+		
+		double[][] X = getRandomMatrix(rows, cols, 0, 100, sparsity, -1);
+		double[][] Y = getRandomMatrix(rows, 1, 0, 1, 1, -1);
+		for(int i=0; i<rows; i++){
+			Y[i][0] = (int)(Y[i][0]*classes) + 1;
+			Y[i][0] = (Y[i][0] > classes) ? classes : Y[i][0];
+		}
+
+		writeInputMatrixWithMTD("X", X, true);
+		writeInputMatrixWithMTD("Y", Y, true);
+
+		runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);
+
+		runRScript(true);
+
+		HashMap<CellIndex, Double> priorR = readRMatrixFromExpectedDir("priors");
+		HashMap<CellIndex, Double> priorSYSTEMDS= readDMLMatrixFromOutputDir("priors");
+		HashMap<CellIndex, Double> meansRtemp = readRMatrixFromExpectedDir("means");
+		HashMap<CellIndex, Double> meansSYSTEMDStemp = readDMLMatrixFromOutputDir("means");
+		HashMap<CellIndex, Double> determinantsRtemp = readRMatrixFromExpectedDir("determinants");
+		HashMap<CellIndex, Double> determinantsSYSTEMDStemp = readDMLMatrixFromOutputDir("determinants");
+		HashMap<CellIndex, Double> invcovsRtemp = readRMatrixFromExpectedDir("invcovs");
+		HashMap<CellIndex, Double> invcovsSYSTEMDStemp = readDMLMatrixFromOutputDir("invcovs");
+
+		double[][] meansR = TestUtils.convertHashMapToDoubleArray(meansRtemp);
+		double[][] meansSYSTEMDS = TestUtils.convertHashMapToDoubleArray(meansSYSTEMDStemp);
+		double[][] determinantsR = TestUtils.convertHashMapToDoubleArray(determinantsRtemp);
+		double[][] determinantsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(determinantsSYSTEMDStemp);
+		double[][] invcovsR = TestUtils.convertHashMapToDoubleArray(invcovsRtemp);
+		double[][] invcovsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(invcovsSYSTEMDStemp);
+
+		TestUtils.compareMatrices(priorR, priorSYSTEMDS, Math.pow(10, -5.0), "priorR", "priorSYSTEMDS");
+		TestUtils.compareMatricesBitAvgDistance(meansR, meansSYSTEMDS, 5L,5L, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(determinantsR, determinantsSYSTEMDS, (long)2E+12,(long)2E+12, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(invcovsR, invcovsSYSTEMDS, (long)2E+20,(long)2E+20, this.toString());

Review comment:
       reconstructing the input from the output is, afaik, not really possible, since you lose, for example, the number of input samples provided to the algorithm. However, computing the covariance matrix again and multiplying it with its inverse is definitely a way of proving that the inverse is correct, although its R equivalent seems to differ somewhat (but, as already mentioned, only because of floating point differences). 







[GitHub] [systemds] Baunsgaard commented on a change in pull request #1153: Gaussian Classifier

Posted by GitBox <gi...@apache.org>.
Baunsgaard commented on a change in pull request #1153:
URL: https://github.com/apache/systemds/pull/1153#discussion_r558011418



##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitly defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Set varSmoothing (to prevent numerical errors)
+  varSmoothing = 1e-9

Review comment:
       here you overwrite the input parameter varSmoothing with a hard-coded value

##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitly defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Set varSmoothing (to prevent numerical errors)
+  varSmoothing = 1e-9
+
+  #Compute means, variances and priors
+  classCounts = aggregate(target=C, groups=C, fn="count", ngroups=as.integer(nClasses));
+  classMeans = aggregate(target=D, groups=C, fn="mean", ngroups=as.integer(nClasses));
+  classVars = aggregate(target=D, groups=C, fn="variance", ngroups=as.integer(nClasses));
+  classPriors = classCounts / nSamples
+
+  smoothedVar = diag(matrix(1.0, rows=nFeats, cols=1)) * max(classVars) * varSmoothing
+
+  classInvCovariances = list()
+  determinants = matrix(0, rows=nClasses, cols=1)
+
+  #Compute determinants and inverseCovariances
+  for (class in 1:nClasses)

Review comment:
       consider whether it can be parallelized; if so, use a parfor

##########
File path: src/test/scripts/functions/builtin/GaussianClassifier.dml
##########
@@ -0,0 +1,37 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+X = read($1);
+y = read($2);
+
+[prior, means, covs, det] = gaussianClassifier(D=X, C=y, varSmoothing=$3);
+
+#Cbind the inverse covariance matrices, to make them comparable in the unit tests
+invcovs = as.matrix(covs[1])
+for (i in 2:max(y))
+{
+  invcovs = cbind(invcovs, as.matrix(covs[i]))
+}
+
+write(prior, $4);
+write(means, $5);
+write(det, $6);
+write(invcovs, $7);

Review comment:
       missing trailing newline

##########
File path: src/test/scripts/functions/builtin/GaussianClassifier.R
##########
@@ -0,0 +1,111 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+args <- commandArgs(TRUE)
+library("Matrix")
+
+D <- as.matrix(readMM(paste(args[1], "X.mtx", sep="")))
+c <- as.matrix(readMM(paste(args[1], "Y.mtx", sep="")))
+
+nClasses <- as.integer(max(c))
+varSmoothing <- as.double(args[2])
+
+nSamples <- nrow(D)
+nFeatures <- ncol(D)
+
+classInvCovariances <- list()
+
+classMeans <- aggregate(D, by=list(c), FUN= mean)
+classMeans <- classMeans[1:nFeatures+1]
+
+classVars <- aggregate(D, by=list(c), FUN=var)
+classVars[is.na(classVars)] <- 0
+smoothedVar <- varSmoothing * max(classVars) * diag(nFeatures)
+
+classCounts <- aggregate(c, by=list(c), FUN=length)
+classCounts <- classCounts[2]
+classPriors <- classCounts / nSamples
+
+determinants <- matrix(0, nrow=nClasses, ncol=1)
+
+for (i in 1:nClasses)
+{
+  classMatrix <- subset(D, c==i)
+  covMatrix <- cov(x=classMatrix, use="all.obs")
+  covMatrix[is.na(covMatrix)] <- 0
+  covMatrix <- covMatrix + smoothedVar
+  #determinant <- det(covMatrix)
+  #determinants[i] <- det(covMatrix)
+
+  ev <- eigen(covMatrix)
+  vecs <- ev$vectors
+  vals <- ev$values
+  lam <- diag(vals^(-1))
+  determinants[i] <- prod(vals)
+
+  invCovMatrix <- vecs %*% lam %*% t(vecs)
+  invCovMatrix[is.na(invCovMatrix)] <- 0
+  classInvCovariances[[i]] <- invCovMatrix
+}
+
+
+#Calc accuracy
+results <- matrix(0, nrow=nSamples, ncol=nClasses)
+for (class in 1:nClasses)
+{
+  for (i in 1:nSamples)
+  {
+    intermediate <- 0
+    meanDiff <- (D[i,] - classMeans[class,])
+    intermediate <- -1/2 * log((2*pi)^nFeatures * determinants[class,])
+    intermediate <- intermediate - 1/2 * (as.matrix(meanDiff) %*% as.matrix(classInvCovariances[[class]]) %*% t(as.matrix(meanDiff)))
+    intermediate <- log(classPriors[class,]) + intermediate
+    results[i, class] <- intermediate
+  }
+}
+
+pred <- max.col(results)
+acc <- sum(pred == c) / nSamples * 100
+print(paste("Training Accuracy (%): ", acc, sep=""))
+
+classPriors <- data.matrix(classPriors)
+classMeans <- data.matrix(classMeans)
+
+#Cbind the inverse covariance matrices, to make them comparable in the unit tests
+stackedInvCovs <- classInvCovariances[[1]]
+for (i in 2:nClasses)
+{
+  stackedInvCovs <- cbind(stackedInvCovs, classInvCovariances[[i]])
+}
+
+writeMM(as(classPriors, "CsparseMatrix"), paste(args[3], "priors", sep=""));
+writeMM(as(classMeans, "CsparseMatrix"), paste(args[3], "means", sep=""));
+writeMM(as(determinants, "CsparseMatrix"), paste(args[3], "determinants", sep=""));
+writeMM(as(stackedInvCovs, "CsparseMatrix"), paste(args[3], "invcovs", sep=""));
+
+
+
+

Review comment:
       too many new lines

##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitly defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Compute means, variances and priors
+  classCounts = aggregate(target=C, groups=C, fn="count", ngroups=as.integer(nClasses));
+  classMeans = aggregate(target=D, groups=C, fn="mean", ngroups=as.integer(nClasses));
+  classVars = aggregate(target=D, groups=C, fn="variance", ngroups=as.integer(nClasses));
+  classPriors = classCounts / nSamples
+
+  smoothedVar = diag(matrix(1.0, rows=nFeats, cols=1)) * max(classVars) * varSmoothing
+
+  classInvCovariances = list()
+  determinants = matrix(0, rows=nClasses, cols=1)
+
+  #Compute determinants and inverseCovariances
+  for (class in 1:nClasses)
+  {
+    covMatrix = matrix(0, rows=nFeats, cols=nFeats)
+    classMatrix = removeEmpty(target=D, margin="rows", select=(C==class))
+
+    for (i in 1:nFeats)
+    {
+      for (j in 1:nFeats)
+      {
+        if (j == i)
+          covMatrix[i,j] = classVars[class, j]
+        else if (j < i)
+          covMatrix[i,j] = covMatrix[j,i]
+        else
+          covMatrix[i,j] = cov(classMatrix[,i], classMatrix[,j])
+      }
+    }
+
+    #Apply smoothing of the variances, to avoid numerical errors
+    covMatrix = covMatrix + smoothedVar
+
+    #Compute inverse
+    [eVals, eVecs] = eigen(covMatrix)
+    lam = diag(eVals^(-1))
+    invCovMatrix = eVecs %*% lam %*% t(eVecs)
+
+    #Compute determinant
+    det = prod(eVals)
+
+    determinants[class, 1] = det
+    classInvCovariances = append(classInvCovariances, invCovMatrix)
+  }
+
+  #Compute accuracy on the training set
+  if (verbose)
+  {
+    results = matrix(0, rows=nSamples, cols=nClasses)
+    for (class in 1:nClasses)

Review comment:
       again, maybe parfor — the per-class iterations are independent
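
The parfor suggestion amounts to running the independent per-class iterations in parallel. A rough Python analogue (not DML; `score_class` is a hypothetical stand-in for the loop body) to illustrate why the transformation is safe when iterations don't depend on each other:

```python
from concurrent.futures import ThreadPoolExecutor

def score_class(c):
    """Stand-in for the per-class scoring body (illustrative placeholder work)."""
    return c * c

n_classes = 5
# Sequential loop over classes ...
sequential = [score_class(c) for c in range(1, n_classes + 1)]
# ... versus a parfor-style parallel map over the same independent iterations.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(score_class, range(1, n_classes + 1)))
```

Both orders yield identical results because no iteration reads another iteration's output.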

##########
File path: scripts/builtin/gaussianClassifier.dml
##########
@@ -0,0 +1,127 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+#
+# Computes the parameters needed for Gaussian Classification.
+# Thus it computes the following per class: the prior probability,
+# the inverse covariance matrix, the mean per feature and the determinant
+# of the covariance matrix. Furthermore (if not explicitly defined), it
+# adds some small smoothing value along the variances, to prevent
+# numerical errors / instabilities.
+#
+#
+# INPUT PARAMETERS:
+# -------------------------------------------------------------------------------------------------
+# NAME           TYPE               DEFAULT  MEANING
+# -------------------------------------------------------------------------------------------------
+# D              Matrix[Double]     ---      Input matrix (training set)
+# C              Matrix[Double]     ---      Target vector
+# varSmoothing   Double             1e-9     Smoothing factor for variances
+# verbose        Boolean            TRUE     Print accuracy of the training set
+# ---------------------------------------------------------------------------------------------
+# OUTPUT:
+# ---------------------------------------------------------------------------------------------
+# NAME                  TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# classPriors           Matrix[Double]   ---      Vector storing the class prior probabilities
+# classMeans            Matrix[Double]   ---      Matrix storing the means of the classes
+# classInvCovariances   List[Unknown]    ---      List of inverse covariance matrices
+# determinants          Matrix[Double]   ---      Vector storing the determinants of the classes
+# ---------------------------------------------------------------------------------------------
+#
+
+
+m_gaussianClassifier = function(Matrix[Double] D, Matrix[Double] C, Double varSmoothing=1e-9, Boolean verbose = TRUE)
+  return (Matrix[Double] classPriors, Matrix[Double] classMeans,
+  List[Unknown] classInvCovariances, Matrix[Double] determinants)
+{
+  #Retrieve number of samples, classes and features
+  nSamples = nrow(D)
+  nClasses = max(C)
+  nFeats = ncol(D)
+
+  #Compute means, variances and priors
+  classCounts = aggregate(target=C, groups=C, fn="count", ngroups=as.integer(nClasses));
+  classMeans = aggregate(target=D, groups=C, fn="mean", ngroups=as.integer(nClasses));
+  classVars = aggregate(target=D, groups=C, fn="variance", ngroups=as.integer(nClasses));
+  classPriors = classCounts / nSamples
+
+  smoothedVar = diag(matrix(1.0, rows=nFeats, cols=1)) * max(classVars) * varSmoothing
+
+  classInvCovariances = list()
+  determinants = matrix(0, rows=nClasses, cols=1)
+
+  #Compute determinants and inverseCovariances
+  for (class in 1:nClasses)
+  {
+    covMatrix = matrix(0, rows=nFeats, cols=nFeats)
+    classMatrix = removeEmpty(target=D, margin="rows", select=(C==class))
+
+    for (i in 1:nFeats)
+    {
+      for (j in 1:nFeats)
+      {
+        if (j == i)
+          covMatrix[i,j] = classVars[class, j]
+        else if (j < i)
+          covMatrix[i,j] = covMatrix[j,i]
+        else
+          covMatrix[i,j] = cov(classMatrix[,i], classMatrix[,j])
+      }
+    }
+
+    #Apply smoothing of the variances, to avoid numerical errors
+    covMatrix = covMatrix + smoothedVar
+
+    #Compute inverse
+    [eVals, eVecs] = eigen(covMatrix)
+    lam = diag(eVals^(-1))
+    invCovMatrix = eVecs %*% lam %*% t(eVecs)
+
+    #Compute determinant
+    det = prod(eVals)
+
+    determinants[class, 1] = det
+    classInvCovariances = append(classInvCovariances, invCovMatrix)
+  }
+
+  #Compute accuracy on the training set
+  if (verbose)
+  {
+    results = matrix(0, rows=nSamples, cols=nClasses)
+    for (class in 1:nClasses)
+    {
+      for (i in 1:nSamples)
+      {
+        intermediate = 0
+        meanDiff = (D[i,] - classMeans[class,])
+        intermediate = -1/2 * log((2*pi)^nFeats * determinants[class,])
+        intermediate = intermediate - 1/2 * (meanDiff %*% as.matrix(classInvCovariances[class]) %*% t(meanDiff))
+        intermediate = log(classPriors[class,]) + intermediate
+        results[i, class] = intermediate
+      }
+    }
+    acc = sum(rowIndexMax(results) == C) / nSamples * 100
+    print("Training Accuracy (%): " + acc)
+  }
+}

Review comment:
       add a newline at the end of the script to make GitHub happy.
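
The script's eigen-based inverse and determinant (invCovMatrix = eVecs %*% diag(eVals^(-1)) %*% t(eVecs), det = prod(eVals)) can be checked by hand on a tiny symmetric matrix. This standalone Python sketch hand-rolls the 2×2 symmetric eigendecomposition — purely illustrative, not the project's code:

```python
import math

def eig_sym_2x2(a, b, c):
    """Eigenvalues and normalized eigenvectors of [[a, b], [b, c]]."""
    half_trace = (a + c) / 2.0
    r = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam1, lam2 = half_trace + r, half_trace - r
    if b != 0.0:
        v1, v2 = (b, lam1 - a), (b, lam2 - a)
    else:  # matrix already diagonal
        v1, v2 = (1.0, 0.0), (0.0, 1.0)
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return (lam1, lam2), ((v1[0] / n1, v1[1] / n1), (v2[0] / n2, v2[1] / n2))

def inverse_and_det(a, b, c):
    """inv = sum_i v_i v_i^T / lam_i and det = lam1 * lam2, as in the DML."""
    (l1, l2), (v1, v2) = eig_sym_2x2(a, b, c)
    inv = [[v1[i] * v1[j] / l1 + v2[i] * v2[j] / l2
            for j in range(2)] for i in range(2)]
    return inv, l1 * l2

# Toy covariance matrix [[2, 1], [1, 2]]: eigenvalues 3 and 1,
# so det = 3 and inverse = [[2/3, -1/3], [-1/3, 2/3]].
inv, det = inverse_and_det(2.0, 1.0, 2.0)
```

The same identities (inverse from eigenvectors, determinant as the product of eigenvalues) are what the DML relies on after the variance smoothing keeps the eigenvalues away from zero.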

##########
File path: src/test/java/org/apache/sysds/test/functions/builtin/BuiltinGaussianClassifierTest.java
##########
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.sysds.test.functions.builtin;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+
+import org.apache.sysds.runtime.matrix.data.MatrixValue.CellIndex;
+import org.apache.sysds.test.AutomatedTestBase;
+import org.apache.sysds.test.TestConfiguration;
+import org.apache.sysds.test.TestUtils;
+import org.junit.Test;
+
+public class BuiltinGaussianClassifierTest extends AutomatedTestBase
+{
+	private final static String TEST_NAME = "GaussianClassifier";
+	private final static String TEST_DIR = "functions/builtin/";
+	private final static String TEST_CLASS_DIR = TEST_DIR + BuiltinGaussianClassifierTest.class.getSimpleName() + "/";
+
+
+	@Override
+	public void setUp() {
+		addTestConfiguration(TEST_NAME,new TestConfiguration(TEST_CLASS_DIR, TEST_NAME,new String[]{"B"})); 
+	}
+
+
+	@Test
+	public void testSmallDenseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.9, 5);
+	}
+
+	@Test
+	public void testSmallDenseTenClasses() {
+		testGaussianClassifier(80, 30, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerDenseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.9, 5);
+	}
+
+	@Test
+	public void testBiggerDenseTenClasses() {
+		testGaussianClassifier(200, 50, 0.9, 10);
+	}
+
+	@Test
+	public void testBiggerSparseFiveClasses() {
+		testGaussianClassifier(200, 50, 0.3, 5);
+	}
+
+	@Test
+	public void testBiggerSparseTenClasses() {
+		testGaussianClassifier(200, 50, 0.3, 10);
+	}
+
+	@Test
+	public void testSmallSparseFiveClasses() {
+		testGaussianClassifier(80, 30, 0.3, 5);
+	}
+
+	@Test
+	public void testSmallSparseTenClasses() {
+		testGaussianClassifier(80, 30, 0.3, 10);
+	}
+
+	public void testGaussianClassifier(int rows, int cols, double sparsity, int classes)
+	{
+		loadTestConfiguration(getTestConfiguration(TEST_NAME));
+		String HOME = SCRIPT_DIR + TEST_DIR;
+		fullDMLScriptName = HOME + TEST_NAME + ".dml";
+		double varSmoothing = 1e-9;
+
+		List<String> proArgs = new ArrayList<>();
+		proArgs.add("-args");
+		proArgs.add(input("X"));
+		proArgs.add(input("Y"));
+		proArgs.add(String.valueOf(varSmoothing));
+		proArgs.add(output("priors"));
+		proArgs.add(output("means"));
+		proArgs.add(output("determinants"));
+		proArgs.add(output("invcovs"));
+
+		programArgs = proArgs.toArray(new String[proArgs.size()]);
+
+		rCmd = getRCmd(inputDir(), Double.toString(varSmoothing), expectedDir());
+		
+		double[][] X = getRandomMatrix(rows, cols, 0, 100, sparsity, -1);
+		double[][] Y = getRandomMatrix(rows, 1, 0, 1, 1, -1);
+		for(int i=0; i<rows; i++){
+			Y[i][0] = (int)(Y[i][0]*classes) + 1;
+			Y[i][0] = (Y[i][0] > classes) ? classes : Y[i][0];
+		}
+
+		writeInputMatrixWithMTD("X", X, true);
+		writeInputMatrixWithMTD("Y", Y, true);
+
+		runTest(true, EXCEPTION_NOT_EXPECTED, null, -1);
+
+		runRScript(true);
+
+		HashMap<CellIndex, Double> priorR = readRMatrixFromExpectedDir("priors");
+		HashMap<CellIndex, Double> priorSYSTEMDS= readDMLMatrixFromOutputDir("priors");
+		HashMap<CellIndex, Double> meansRtemp = readRMatrixFromExpectedDir("means");
+		HashMap<CellIndex, Double> meansSYSTEMDStemp = readDMLMatrixFromOutputDir("means");
+		HashMap<CellIndex, Double> determinantsRtemp = readRMatrixFromExpectedDir("determinants");
+		HashMap<CellIndex, Double> determinantsSYSTEMDStemp = readDMLMatrixFromOutputDir("determinants");
+		HashMap<CellIndex, Double> invcovsRtemp = readRMatrixFromExpectedDir("invcovs");
+		HashMap<CellIndex, Double> invcovsSYSTEMDStemp = readDMLMatrixFromOutputDir("invcovs");
+
+		double[][] meansR = TestUtils.convertHashMapToDoubleArray(meansRtemp);
+		double[][] meansSYSTEMDS = TestUtils.convertHashMapToDoubleArray(meansSYSTEMDStemp);
+		double[][] determinantsR = TestUtils.convertHashMapToDoubleArray(determinantsRtemp);
+		double[][] determinantsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(determinantsSYSTEMDStemp);
+		double[][] invcovsR = TestUtils.convertHashMapToDoubleArray(invcovsRtemp);
+		double[][] invcovsSYSTEMDS = TestUtils.convertHashMapToDoubleArray(invcovsSYSTEMDStemp);
+
+		TestUtils.compareMatrices(priorR, priorSYSTEMDS, Math.pow(10, -5.0), "priorR", "priorSYSTEMDS");
+		TestUtils.compareMatricesBitAvgDistance(meansR, meansSYSTEMDS, 5L,5L, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(determinantsR, determinantsSYSTEMDS, (long)2E+12,(long)2E+12, this.toString());
+		TestUtils.compareMatricesBitAvgDistance(invcovsR, invcovsSYSTEMDS, (long)2E+20,(long)2E+20, this.toString());

Review comment:
       I would probably add a check that multiplies the computed inverse covariance
   with the covariance matrix itself — as you stated in the PR, that should give
   the identity.
   
   If you add this to the tests, would it still make sense to run the R script?
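
The suggested self-validating check — multiply the computed inverse covariance back with the covariance and assert the product is (numerically) the identity — could look like this in Python; the real test would of course stay in Java/DML, and the matrices below are illustrative toy values:

```python
def matmul(A, B):
    """Naive square matrix multiplication for small test matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_identity(M, tol=1e-6):
    """True if M is the identity matrix up to an absolute tolerance."""
    n = len(M)
    return all(abs(M[i][j] - (1.0 if i == j else 0.0)) <= tol
               for i in range(n) for j in range(n))

# Toy covariance and its inverse (illustrative numbers).
cov = [[2.0, 1.0], [1.0, 2.0]]
inv_cov = [[2.0 / 3.0, -1.0 / 3.0], [-1.0 / 3.0, 2.0 / 3.0]]
ok = is_identity(matmul(cov, inv_cov))
```

Such a check tests a mathematical invariant of the output directly, which sidesteps the floating-point comparison problem against R that the PR description mentions.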




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org