Posted to commits@opennlp.apache.org by ma...@apache.org on 2023/04/30 17:11:33 UTC

[opennlp-addons] branch adjust_opennlp-addons_to_be_compatible_with_latest_opennlp-tools_release created (now 355ea88)

This is an automated email from the ASF dual-hosted git repository.

mawiesne pushed a change to branch adjust_opennlp-addons_to_be_compatible_with_latest_opennlp-tools_release
in repository https://gitbox.apache.org/repos/asf/opennlp-addons.git


      at 355ea88  adjusts 'opennlp-addons' to be compatible with latest opennlp-tools release

This branch includes the following new commits:

     new 355ea88  adjusts 'opennlp-addons' to be compatible with latest opennlp-tools release

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  Revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[opennlp-addons] 01/01: adjusts 'opennlp-addons' to be compatible with latest opennlp-tools release

Posted by ma...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

mawiesne pushed a commit to branch adjust_opennlp-addons_to_be_compatible_with_latest_opennlp-tools_release
in repository https://gitbox.apache.org/repos/asf/opennlp-addons.git

commit 355ea880dfe3ff878f5e3c82473477402400d3f8
Author: Martin Wiesner <ma...@hs-heilbronn.de>
AuthorDate: Sun Apr 30 19:11:26 2023 +0200

    adjusts 'opennlp-addons' to be compatible with latest opennlp-tools release
    
    - adjusts opennlp-tools to 2.2.0
    - adjusts parent project (org.apache.apache) to version 29
    - adjusts Java language level to 11
    - fixes compile incompatibilities
    - transforms existing JUnit tests towards JUnit 5.x
    - improves resource handling in some spots
    - modernizes code style along the path
    - adds GH actions config
    - adjusts .gitignore to cover an additional IDE flavor
    - improves JavaDoc along the path
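
Among the modernizations listed above, the diff below switches logger initialization to `MethodHandles.lookup().lookupClass()`, a copy-paste-safe way to name a logger after its enclosing class. A minimal sketch of the idea, using `java.util.logging` from the JDK as a stand-in (the actual commit uses SLF4J, where only the factory call differs):

```java
import java.lang.invoke.MethodHandles;
import java.util.logging.Logger;

public class LoggerNamingSketch {

  // MethodHandles.lookup() is evaluated in the context of the enclosing
  // class, so lookupClass() resolves to LoggerNamingSketch here. If this
  // exact line is copied into another class, it resolves to that class
  // instead -- unlike Logger.getLogger(SomeClass.class.getName()), which
  // silently keeps the old name after a copy-paste.
  private static final Logger LOG =
      Logger.getLogger(MethodHandles.lookup().lookupClass().getName());

  public static String loggerName() {
    return LOG.getName();
  }

  public static void main(String[] args) {
    System.out.println(loggerName()); // prints "LoggerNamingSketch"
  }
}
```

The class name `LoggerNamingSketch` is illustrative; in the commit this pattern appears in classes such as `AdminBoundaryContextGenerator` and `GazetteerSearcher`.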
---
 .github/CONTRIBUTING.md                            |  11 +
 .github/PULL_REQUEST_TEMPLATE.md                   |  27 ++
 .github/workflows/maven.yml                        |  51 ++
 .gitignore                                         |   2 +
 NOTICE                                             |   2 +-
 geoentitylinker-addon/pom.xml                      |  77 ++-
 .../AdminBoundaryContextGenerator.java             |  60 +--
 .../addons/geoentitylinker/GazetteerEntry.java     |  37 +-
 .../addons/geoentitylinker/GazetteerSearcher.java  |  75 ++-
 .../addons/geoentitylinker/GeoEntityLinker.java    |  61 ++-
 .../geoentitylinker/indexing/USGSProcessor.java    |   2 +-
 .../scoring/FuzzyStringMatchScorer.java            |   1 +
 .../scoring/GeoHashBinningScorer.java              |   3 +-
 .../geoentitylinker/scoring/ModelBasedScorer.java  |  27 +-
 .../test/java/apache/opennlp/addons/AppTest.java   |  38 --
 japanese-addon/build.xml                           |   2 +-
 japanese-addon/pom.xml                             |  89 +++-
 .../tools/namefind/AuxiliaryInfoUtilTest.java      |  53 +-
 ...liaryInfoAwareDelegateFeatureGeneratorTest.java |  17 +-
 .../lang/jpn/BigramNameFeatureGeneratorTest.java   |  36 +-
 .../lang/jpn/FeatureGeneratorUtilTest.java         |  59 +--
 .../lang/jpn/TokenClassFeatureGeneratorTest.java   |  21 +-
 .../lang/jpn/TokenPatternFeatureGeneratorTest.java |  45 +-
 .../lang/jpn/TrigramNameFeatureGeneratorTest.java  |  41 +-
 jwnl-addon/pom.xml                                 |  86 ++--
 .../opennlp/jwnl/lemmatizer/JWNLLemmatizer.java    |  33 +-
 liblinear-addon/pom.xml                            |  41 +-
 .../src/main/java/LiblinearTrainer.java            |  84 +---
 modelbuilder-addon/pom.xml                         |  73 ++-
 .../modelbuilder/DefaultModelBuilderUtil.java      |  38 +-
 .../addons/modelbuilder/KnownEntityProvider.java   |  33 +-
 .../modelbuilder/ModelGenerationValidator.java     |   4 +-
 .../opennlp/addons/modelbuilder/Modelable.java     |   6 +-
 .../impls/FileKnownEntityProvider.java             |   2 +-
 .../modelbuilder/impls/FileModelValidatorImpl.java |  20 +-
 .../modelbuilder/impls/FileSentenceProvider.java   |  27 +-
 .../modelbuilder/impls/GenericModelableImpl.java   |  39 +-
 .../src/test/java/modelbuilder/AppTest.java        |  38 --
 morfologik-addon/pom.xml                           | 181 ++++---
 .../lemmatizer/MorfologikLemmatizer.java           |  36 +-
 .../tagdict/MorfologikPOSTaggerFactory.java        |  36 +-
 .../tagdict/MorfologikTagDictionary.java           |  20 +-
 .../builder/POSDictionayBuilderTest.java           |   8 +-
 .../lemmatizer/MorfologikLemmatizerTest.java       |  36 +-
 .../tagdict/MorfologikTagDictionaryTest.java       |  33 +-
 .../morfologik/tagdict/POSTaggerFactoryTest.java   |  57 ++-
 pom.xml                                            | 532 +++++++++++++++++++++
 47 files changed, 1476 insertions(+), 824 deletions(-)

diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
new file mode 100644
index 0000000..5e06386
--- /dev/null
+++ b/.github/CONTRIBUTING.md
@@ -0,0 +1,11 @@
+# How to contribute to Apache OpenNLP
+
+Thank you for your intention to contribute to the Apache OpenNLP project. As an open-source community, we highly appreciate external contributions to our project.
+
+To make the process smooth for the project *committers* (those who review and accept changes) and *contributors* (those who propose new changes via pull requests), there are a few rules to follow.
+
+## Contribution Guidelines
+
+Please check out the [How to get involved](https://opennlp.apache.org/get-involved.html) guide to understand how contributions are made. 
+Coding standards and further guidelines you should follow are documented in the [Apache OpenNLP Code Conventions](https://opennlp.apache.org/code-conventions.html).
+For pull requests, there is a [checklist](PULL_REQUEST_TEMPLATE.md) with criteria for acceptable contributions.
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
new file mode 100644
index 0000000..3f6f388
--- /dev/null
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,27 @@
+Thank you for contributing to Apache OpenNLP.
+
+In order to streamline the review of the contribution we ask you
+to ensure the following steps have been taken:
+
+### For all changes:
+- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
+     in the commit message?
+
+- [ ] Does your PR title start with OPENNLP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
+
+- [ ] Has your PR been rebased against the latest commit within the target branch (typically main)?
+
+- [ ] Is your initial contribution a single, squashed commit?
+
+### For code changes:
+- [ ] Have you ensured that the full suite of tests is executed via `mvn clean install` at the root opennlp-sandbox folder?
+- [ ] Have you written or updated unit tests to verify your changes?
+- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](https://www.apache.org/legal/resolved.html#category-a)? 
+- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file in opennlp-sandbox folder?
+- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found in opennlp-sandbox folder?
+
+### For documentation related changes:
+- [ ] Have you ensured that the format looks appropriate for the output in which it is rendered?
+
+### Note:
+Please ensure that once the PR is submitted, you check GitHub Actions for build issues and submit an update to your PR as soon as possible.
diff --git a/.github/workflows/maven.yml b/.github/workflows/maven.yml
new file mode 100644
index 0000000..a31237c
--- /dev/null
+++ b/.github/workflows/maven.yml
@@ -0,0 +1,51 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+name: Java CI
+
+on: [push, pull_request]
+
+jobs:
+  build:
+    runs-on: ${{ matrix.os }}
+    continue-on-error: ${{ matrix.experimental }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: [ubuntu-latest, windows-latest]
+        java: [ 11, 17, 19 ]
+        experimental: [false]
+#        include:
+#          - java: 18-ea
+#            os: ubuntu-latest
+#            experimental: true
+
+    steps:
+    - uses: actions/checkout@v3
+    - uses: actions/cache@v3
+      with:
+        path: ~/.m2/repository
+        key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
+        restore-keys: |
+          ${{ runner.os }}-maven-
+    - name: Set up JDK ${{ matrix.java }}
+      uses: actions/setup-java@v3
+      with:
+        distribution: adopt
+        java-version: ${{ matrix.java }}
+    - name: Build with Maven
+      run: mvn -V clean test install --no-transfer-progress -Pjacoco
+    - name: Jacoco
+      run: mvn jacoco:report
diff --git a/.gitignore b/.gitignore
index 1f7965a..a70e935 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,3 +1,5 @@
+*.iml
+.idea
 target
 .classpath
 .project
diff --git a/NOTICE b/NOTICE
index deb40c8..a892eb2 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache OpenNLP
-Copyright 2010, 2014 The Apache Software Foundation
+Copyright 2010, 2023 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).
\ No newline at end of file
diff --git a/geoentitylinker-addon/pom.xml b/geoentitylinker-addon/pom.xml
index 07d03cc..75be328 100644
--- a/geoentitylinker-addon/pom.xml
+++ b/geoentitylinker-addon/pom.xml
@@ -1,3 +1,5 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
 <!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
@@ -22,48 +24,21 @@
     <modelVersion>4.0.0</modelVersion>
     <parent>
         <groupId>org.apache.opennlp</groupId>
-        <artifactId>opennlp</artifactId>
-        <version>1.6.0</version>
-        <relativePath>../opennlp/pom.xml</relativePath>
+        <artifactId>opennlp-addons</artifactId>
+        <version>2.2.1-SNAPSHOT</version>
     </parent>
 
     <artifactId>geoentitylinker-addon</artifactId>
-    <version>1.0-SNAPSHOT</version>
+    <version>2.2.1-SNAPSHOT</version>
     <packaging>jar</packaging>
-    <name>geoentitylinker-addon</name>
-
-    <url>http://maven.apache.org</url>
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <version>2.3.2</version>
-                <configuration>
-                    <source>1.7</source>
-                    <target>1.7</target>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-    <properties>
-        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    </properties>
+    <name>Apache OpenNLP GeoentityLinker Addon</name>
 
     <dependencies>
         <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-            <version>3.8.1</version>
-            <scope>test</scope>
-        </dependency>
-        <dependency>
-            <groupId>log4j</groupId>
-            <artifactId>log4j</artifactId>
-            <version>1.2.16</version>
+            <groupId>org.apache.opennlp</groupId>
+            <artifactId>opennlp-tools</artifactId>
         </dependency>
-      
-            
+        
         <dependency>
             <groupId>org.apache.lucene</groupId>
             <artifactId>lucene-core</artifactId>
@@ -79,16 +54,40 @@
             <artifactId>lucene-queryparser</artifactId>
             <version>6.0.0</version>
         </dependency>
-        <dependency>
-            <groupId>org.apache.opennlp</groupId>
-            <artifactId>opennlp-tools</artifactId>
-            <version>1.6.0</version>
-        </dependency>
         <dependency>
             <groupId>com.spatial4j</groupId>
             <artifactId>spatial4j</artifactId>
             <version>0.4.1</version>
             <type>jar</type>
         </dependency>
+
+        <dependency>
+            <groupId>org.junit.jupiter</groupId>
+            <artifactId>junit-jupiter-api</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.junit.jupiter</groupId>
+            <artifactId>junit-jupiter-engine</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.junit.jupiter</groupId>
+            <artifactId>junit-jupiter-params</artifactId>
+        </dependency>
     </dependencies>
+
+    <build>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <configuration>
+                    <source>${maven.compiler.source}</source>
+                    <target>${maven.compiler.target}</target>
+                    <compilerArgument>-Xlint</compilerArgument>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
 </project>
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/AdminBoundaryContextGenerator.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/AdminBoundaryContextGenerator.java
index b645156..a6741fe 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/AdminBoundaryContextGenerator.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/AdminBoundaryContextGenerator.java
@@ -19,6 +19,7 @@ import java.io.BufferedReader;
 import java.io.File;
 import java.io.FileReader;
 import java.io.IOException;
+import java.lang.invoke.MethodHandles;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -30,7 +31,8 @@ import java.util.logging.Level;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 import opennlp.tools.entitylinker.EntityLinkerProperties;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * Finds instances of country mentions in a String, typically a document text.
@@ -39,19 +41,19 @@ import org.apache.log4j.Logger;
  */
 public class AdminBoundaryContextGenerator {
 
-  private static final Logger LOGGER = Logger.getLogger(AdminBoundaryContextGenerator.class);
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private List<CountryContextEntry> countrydata;
   private Map<String, Set<String>> nameCodesMap = new HashMap<>();
-  private Map<String, Set<Integer>> countryMentions = new HashMap<>();
+  private final Map<String, Set<Integer>> countryMentions = new HashMap<>();
 
   Map<String, String> countryRegexMap = new HashMap<>();
   Map<String, String> provinceRegexMap = new HashMap<>();
   Map<String, String> countyRegexMap = new HashMap<>();
 
-  private Set<CountryContextEntry> countryHits = new HashSet<>();
-  private EntityLinkerProperties properties;
-  private List<AdminBoundary> adminBoundaryData= new ArrayList<>();
-  private Set<AdminBoundary> adminBoundaryHits = new HashSet<>();
+  private final Set<CountryContextEntry> countryHits = new HashSet<>();
+  private final EntityLinkerProperties properties;
+  private final List<AdminBoundary> adminBoundaryData= new ArrayList<>();
+  private final Set<AdminBoundary> adminBoundaryHits = new HashSet<>();
   private AdminBoundaryContext context;
 
   public AdminBoundaryContext getContext(String text) {
@@ -62,16 +64,16 @@ public class AdminBoundaryContextGenerator {
     return context;
   }
 
-  private Set<String> countryHitSet = new HashSet<>();
-  private Map<String, String> countryMap = new HashMap<>();
-  private Map<String, Map<String, String>> provMap = new HashMap<>();
-  private Map<String, Map<String, String>> countyMap = new HashMap<>();
+  private final Set<String> countryHitSet = new HashSet<>();
+  private final Map<String, String> countryMap = new HashMap<>();
+  private final Map<String, Map<String, String>> provMap = new HashMap<>();
+  private final Map<String, Map<String, String>> countyMap = new HashMap<>();
 
   private Map<String, Set<Integer>> provMentions = new HashMap<>();
   private Map<String, Set<Integer>> countyMentions = new HashMap<>();
 
-  private Set<String> provHits = new HashSet<String>();
-  private Set<String> countyHits = new HashSet<String>();
+  private final Set<String> provHits = new HashSet<>();
+  private final Set<String> countyHits = new HashSet<>();
 
   public static void main(String[] args) {
     try {
@@ -109,18 +111,14 @@ public class AdminBoundaryContextGenerator {
   }
 
   /**
-   * returns the last set of hits after calling regexFind
-   *
-   * @return
+   * @return returns the last set of hits after calling regexFind
    */
   public Set<CountryContextEntry> getCountryHits() {
     return countryHits;
   }
 
   /**
-   * returns the last name to codes map after calling regexFind
-   *
-   * @return
+   * @return returns the last name to codes map after calling regexFind
    */
   public Map<String, Set<String>> getNameCodesMap() {
     return nameCodesMap;
@@ -148,11 +146,9 @@ public class AdminBoundaryContextGenerator {
    * downstream. The full text of a document should be passed in here.
    *
    * @param text the full text of the document (block of text).
-   * @return
    */
   private AdminBoundaryContext process(String text) {
     try {
-
       reset();
       Map<String, Set<Integer>> countryhitMap = regexfind(text, countryMap, countryHitSet, "country");
       if (!countryhitMap.isEmpty()) {
@@ -203,13 +199,10 @@ public class AdminBoundaryContextGenerator {
         }
       }
 
-      AdminBoundaryContext context
-          = new AdminBoundaryContext(countryhitMap, provMentions, countyMentions, countryHitSet, provHits, countyHits,
+      return new AdminBoundaryContext(countryhitMap, provMentions, countyMentions, countryHitSet, provHits, countyHits,
               countryRefMap, provMap, countyMap, nameCodesMap, countryRegexMap, provinceRegexMap, countyRegexMap);
-
-      return context;
     } catch (Exception e) {
-      e.printStackTrace();
+      LOG.error(e.getLocalizedMessage(), e);
     }
     return null;
   }
@@ -221,7 +214,6 @@ public class AdminBoundaryContextGenerator {
    * @param lookupMap a map to use to find names. the key=a location code, the
    * value is an actual name.
    * @param hitsRef a reference to a set that stores the hits by id
-   * @return
    */
   private Map<String, Set<Integer>> regexfind(String docText, Map<String, String> lookupMap, Set<String> hitsRef, String locationType) {
     Map<String, Set<Integer>> mentions = new HashMap<>();
@@ -268,7 +260,7 @@ public class AdminBoundaryContextGenerator {
           if (mentions.containsKey(code)) {
             mentions.get(code).add(start);
           } else {
-            Set<Integer> newset = new HashSet<Integer>();
+            Set<Integer> newset = new HashSet<>();
             newset.add(start);
             mentions.put(code, newset);
           }
@@ -276,7 +268,7 @@ public class AdminBoundaryContextGenerator {
             if (this.nameCodesMap.containsKey(hit)) {
               nameCodesMap.get(hit).add(code);
             } else {
-              HashSet<String> newset = new HashSet<String>();
+              HashSet<String> newset = new HashSet<>();
               newset.add(code);
               nameCodesMap.put(hit, newset);
             }
@@ -290,9 +282,7 @@ public class AdminBoundaryContextGenerator {
       }
 
     } catch (Exception ex) {
-      LOGGER.error(ex);
-      ex.printStackTrace();
-
+      LOG.error(ex.getLocalizedMessage(), ex);
     }
 
     return mentions;
@@ -306,7 +296,7 @@ public class AdminBoundaryContextGenerator {
     BufferedReader reader;
     try {
       reader = new BufferedReader(new FileReader(countryContextFile));
-      String line = "";
+      String line;
       int lineNum = 0;
       while ((line = reader.readLine()) != null) {
         String[] values = line.split("\t");
@@ -334,7 +324,7 @@ public class AdminBoundaryContextGenerator {
       }
       reader.close();
     } catch (IOException ex) {
-      LOGGER.error(ex);
+      LOG.error(ex.getLocalizedMessage(), ex);
     }
 
     loadMaps(this.adminBoundaryData);
@@ -365,7 +355,7 @@ public class AdminBoundaryContextGenerator {
           provMap.put(adm.getCountryCode(), provs);
           // }
 
-          if (!adm.getCountyCode().toLowerCase().equals("no_data_found") && !adm.getCountyName().toLowerCase().equals("no_data_found")) {
+          if (!adm.getCountyCode().equalsIgnoreCase("no_data_found") && !adm.getCountyName().equalsIgnoreCase("no_data_found")) {
             Map<String, String> counties = countyMap.get(adm.getCountryCode() + "." + adm.getProvCode());
             if (counties == null) {
               counties = new HashMap<>();
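
The hunk above replaces `toLowerCase().equals(...)` with `equalsIgnoreCase(...)`. Besides being shorter, `equalsIgnoreCase` allocates no intermediate strings and is not affected by the default locale, whereas `toLowerCase()` without an explicit `Locale` is (the well-known Turkish dotless-i case). A small sketch of the difference, with illustrative values:

```java
import java.util.Locale;

public class CaseCompareSketch {
  public static void main(String[] args) {
    // equalsIgnoreCase compares character by character,
    // locale-independently, with no temporary lowered copies.
    System.out.println("NO_DATA_FOUND".equalsIgnoreCase("no_data_found")); // true

    // toLowerCase(Locale) shows why relying on the default locale is
    // risky: under a Turkish locale, 'I' lowercases to the dotless 'ı',
    // so a lowered-then-compared check can fail unexpectedly.
    System.out.println("TITLE".toLowerCase(new Locale("tr")).equals("title")); // false
  }
}
```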
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerEntry.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerEntry.java
index 6f3ac87..56e6eef 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerEntry.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerEntry.java
@@ -21,8 +21,7 @@ import java.util.Objects;
 import opennlp.tools.entitylinker.BaseLink;
 
 /**
- *
- * Stores a minimal amount of information from a geographic placenames gazateer
+ * Stores a minimal amount of information from a geographic placenames gazetteer.
  */
 public class GazetteerEntry extends BaseLink {
 
@@ -35,43 +34,39 @@ public class GazetteerEntry extends BaseLink {
   private String provinceCode;
   private String hierarchy;
 
+  public GazetteerEntry(String parentID, String itemID, String itemName, String itemType) {
+    super(parentID, itemID, itemName, itemType);
+  }
+
   /**
-   * returns the id from the lucene document
-   *
-   * @return
+   * @return returns the id from the lucene document
    */
   public String getIndexID() {
     return indexID;
   }
-  /*
+  /**
    * sets the id from the lucene document
    */
-
   public void setIndexID(String indexID) {
     this.indexID = indexID;
   }
 
   /**
-   * returns the latitude from the gazateer
-   *
-   * @return
+   * @return Retrieves the latitude from the gazetteer
    */
   public Double getLatitude() {
     return latitude;
   }
 
   /**
-   * sets the latitude from the gazateer
-   *
+   * sets the latitude from the gazetteer
    */
   public void setLatitude(Double latitude) {
     this.latitude = latitude;
   }
 
   /**
-   * returns the longitude from the gaz
-   *
-   * @return
+   * @return Retrieves the longitude from the gaz
    */
   public Double getLongitude() {
     return longitude;
@@ -87,16 +82,14 @@ public class GazetteerEntry extends BaseLink {
   }
 
   /**
-   * returns the source of the gazateer data
-   *
-   * @return
+   * @return Retrieves the source of the gazetteer data
    */
   public String getSource() {
     return source;
   }
 
   /**
-   * sets the source (the source of the gazateer data)
+   * sets the source (the source of the gazetteer data)
    *
    * @param source
    */
@@ -105,16 +98,14 @@ public class GazetteerEntry extends BaseLink {
   }
 
   /**
-   * Returns all the other fields in the gazateer in the form of a map
-   *
-   * @return
+   * @return Retrieves all the other fields in the gazetteer in the form of a map
    */
   public Map<String, String> getIndexData() {
     return indexData;
   }
 
   /**
-   * sets the other fields in the gazeteer in the form of a map
+   * sets the other fields in the gazetteer in the form of a map
    *
    * @param indexData stores all fields in the index as fieldname:value
    */
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerSearcher.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerSearcher.java
index e18253d..60997e2 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerSearcher.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GazetteerSearcher.java
@@ -17,12 +17,12 @@ package opennlp.addons.geoentitylinker;
 
 import java.io.File;
 import java.io.IOException;
+import java.lang.invoke.MethodHandles;
 import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.logging.Level;
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.standard.StandardAnalyzer;
 import org.apache.lucene.document.Document;
@@ -37,12 +37,12 @@ import org.apache.lucene.search.Query;
 import org.apache.lucene.search.TopDocs;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.MMapDirectory;
-import org.apache.lucene.util.Version;
 import opennlp.tools.entitylinker.EntityLinkerProperties;
-import org.apache.log4j.Logger;
 import org.apache.lucene.analysis.core.KeywordAnalyzer;
 import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
 import org.apache.lucene.analysis.util.CharArraySet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  *
@@ -53,12 +53,12 @@ import org.apache.lucene.analysis.util.CharArraySet;
 public class GazetteerSearcher {
 
   private final String REGEX_CLEAN = "[^\\p{L}\\p{Nd}]";
-  private static final Logger LOGGER = Logger.getLogger(GazetteerSearcher.class);
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
   private double scoreCutoff = .70;
-  private boolean doubleQuoteAllSearchTerms = false;
+  private final boolean doubleQuoteAllSearchTerms = false;
   private boolean useHierarchyField = false;
 
-  private EntityLinkerProperties properties;
+  private final EntityLinkerProperties properties;
 
   private Directory opennlpIndex;//= new MMapDirectory(new File(indexloc));
   private IndexReader opennlpReader;// = DirectoryReader.open(geonamesIndex);
@@ -67,13 +67,10 @@ public class GazetteerSearcher {
 
   public static void main(String[] args) {
     try {
-      boolean b = Boolean.valueOf("true");
-
+      boolean b = true;
       new GazetteerSearcher(new EntityLinkerProperties(new File("c:\\temp\\entitylinker.properties"))).find("alabama", 5, " countrycode:us AND gazsource:usgs");
     } catch (IOException ex) {
-      java.util.logging.Logger.getLogger(GazetteerSearcher.class.getName()).log(Level.SEVERE, null, ex);
-    } catch (Exception ex) {
-      java.util.logging.Logger.getLogger(GazetteerSearcher.class.getName()).log(Level.SEVERE, null, ex);
+      LOG.error(ex.getLocalizedMessage(), ex);
     }
   }
 
@@ -98,7 +95,7 @@ public class GazetteerSearcher {
       return linkedData;
     }
     try {
-      /**
+      /*
        * build the search string Sometimes no country context is found. In this
        * case the code variables will be empty strings
        */
@@ -108,7 +105,7 @@ public class GazetteerSearcher {
             + " AND " + whereClause;
       }
 
-      /**
+      /*
        * check the cache and go no further if the records already exist
        */
       ArrayList<GazetteerEntry> get = GazetteerSearchCache.get(placeNameQueryString);
@@ -116,7 +113,7 @@ public class GazetteerSearcher {
 
         return get;
       }
-      /**
+      /*
        * search the placename
        */
       QueryParser parser = new QueryParser(placeNameQueryString, opennlpAnalyzer);
@@ -126,17 +123,12 @@ public class GazetteerSearcher {
       TopDocs bestDocs = opennlpSearcher.search(q, rowsReturned);
       Double maxscore = 0d;
       for (int i = 0; i < bestDocs.scoreDocs.length; ++i) {
-        GazetteerEntry entry = new GazetteerEntry();
         int docId = bestDocs.scoreDocs[i].doc;
         double sc = bestDocs.scoreDocs[i].score;
         if (maxscore.compareTo(sc) < 0) {
           maxscore = sc;
         }
-        entry.getScoreMap().put("lucene", sc);
-        entry.setIndexID(docId + "");
-
         Document d = opennlpSearcher.doc(docId);
-
         List<IndexableField> fields = d.getFields();
 
         String lat = d.get("latitude");
@@ -147,41 +139,39 @@ public class GazetteerSearcher {
         String itemtype = d.get("loctype");
         String source = d.get("gazsource");
         String hier = d.get("hierarchy");
-        entry.setSource(source);
 
-        entry.setItemID(docId + "");
-        entry.setLatitude(Double.valueOf(lat));
-        entry.setLongitude(Double.valueOf(lon));
-        entry.setItemType(itemtype);
-        entry.setItemParentID(parentid);
-        entry.setProvinceCode(provid);
-        entry.setCountryCode(parentid);
-        entry.setItemName(placename);
-        entry.setHierarchy(hier);
+        GazetteerEntry ge = new GazetteerEntry(parentid, String.valueOf(docId), placename, itemtype);
+        ge.getScoreMap().put("lucene", sc);
+        ge.setIndexID(String.valueOf(docId));
+        ge.setSource(source);
+        ge.setLatitude(Double.valueOf(lat));
+        ge.setLongitude(Double.valueOf(lon));
+        ge.setProvinceCode(provid);
+        ge.setCountryCode(parentid);
+        ge.setHierarchy(hier);
         for (int idx = 0; idx < fields.size(); idx++) {
-          entry.getIndexData().put(fields.get(idx).name(), d.get(fields.get(idx).name()));
+          ge.getIndexData().put(fields.get(idx).name(), d.get(fields.get(idx).name()));
         }
 
-        /**
-         * only want hits above the levenstein thresh. This should be a low
+        /*
+         * only want hits above the levenshtein thresh. This should be a low
          * thresh due to the use of the hierarchy field in the index
          */
         // if (normLev > scoreCutoff) {
-        if (entry.getItemParentID().toLowerCase().equals(parentid.toLowerCase()) || parentid.toLowerCase().equals("")) {
+        if (ge.getItemParentID().equalsIgnoreCase(parentid) || parentid.equalsIgnoreCase("")) {
           //make sure we don't produce a duplicate
-          if (!linkedData.contains(entry)) {
-            linkedData.add(entry);
-            /**
+          if (!linkedData.contains(ge)) {
+            linkedData.add(ge);
+            /*
              * add the records to the cache for this query
              */
             GazetteerSearchCache.put(placeNameQueryString, linkedData);
           }
         }
-        //}
       }
 
     } catch (IOException | ParseException ex) {
-      LOGGER.error(ex);
+      LOG.error(ex.getLocalizedMessage(), ex);
     }
 
     return linkedData;
@@ -210,8 +200,7 @@ public class GazetteerSearcher {
     if (opennlpIndex == null) {
       String indexloc = properties.getProperty("opennlp.geoentitylinker.gaz", "");
       if (indexloc.equals("")) {
-        LOGGER.error(new Exception("Opennlp combined Gaz directory location not found"));
-
+        LOG.error("Opennlp combined Gaz directory location not found!");
       }
 
       opennlpIndex = new MMapDirectory(Paths.get(indexloc));
@@ -219,7 +208,7 @@ public class GazetteerSearcher {
       opennlpSearcher = new IndexSearcher(opennlpReader);
       opennlpAnalyzer
           = //new StandardAnalyzer(Version.LUCENE_48, new CharArraySet(Version.LUCENE_48, new ArrayList(), true));
-          new StandardAnalyzer(new CharArraySet(new ArrayList(), true));
+          new StandardAnalyzer(new CharArraySet(new ArrayList<>(), true));
       Map<String, Analyzer> analyMap = new HashMap<>();
 
       analyMap.put("countrycode", new KeywordAnalyzer());
@@ -234,10 +223,10 @@ public class GazetteerSearcher {
       String cutoff = properties.getProperty("opennlp.geoentitylinker.gaz.lucenescore.min", String.valueOf(scoreCutoff));
       String usehierarchy = properties.getProperty("opennlp.geoentitylinker.gaz.hierarchyfield", String.valueOf("0"));
       if (cutoff != null && !cutoff.isEmpty()) {
-        scoreCutoff = Double.valueOf(cutoff);
+        scoreCutoff = Double.parseDouble(cutoff);
       }
       if (usehierarchy != null && !usehierarchy.isEmpty()) {
-        useHierarchyField = Boolean.valueOf(usehierarchy);
+        useHierarchyField = Boolean.parseBoolean(usehierarchy);
       }
       //  opennlp.geoentitylinker.gaz.doublequote=false
       //opennlp.geoentitylinker.gaz.hierarchyfield=false
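[Editor's note: the hunk above swaps `Double.valueOf`/`Boolean.valueOf` for `Double.parseDouble`/`Boolean.parseBoolean` when assigning to primitives, avoiding a needless box/unbox round trip. A minimal standalone sketch of the difference (illustrative only, not OpenNLP code):

```java
public class ParseVsValueOf {
    public static void main(String[] args) {
        // parseDouble returns a primitive double directly ...
        double cutoff = Double.parseDouble("0.5");
        // ... while valueOf returns a boxed Double that is then auto-unboxed.
        double boxed = Double.valueOf("0.5");
        // parseBoolean: any string other than "true" (case-insensitive) is false.
        boolean useHierarchy = Boolean.parseBoolean("0");
        System.out.println(cutoff + " " + boxed + " " + useHierarchy);
    }
}
```

Both calls accept the same strings; the `parseX` variants simply skip the wrapper allocation.]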
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GeoEntityLinker.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GeoEntityLinker.java
index 43be5d5..7b77b47 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GeoEntityLinker.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/GeoEntityLinker.java
@@ -23,10 +23,10 @@ import opennlp.addons.geoentitylinker.scoring.GeoHashBinningScorer;
 import opennlp.addons.geoentitylinker.scoring.FuzzyStringMatchScorer;
 import java.util.ArrayList;
 import java.util.Collections;
-import java.util.Comparator;
-import java.util.HashMap;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
+
 import opennlp.addons.geoentitylinker.scoring.PlacetypeScorer;
 import opennlp.addons.geoentitylinker.scoring.ProvinceProximityScorer;
 import opennlp.tools.entitylinker.BaseLink;
@@ -47,11 +47,11 @@ public class GeoEntityLinker implements EntityLinker<LinkedSpan> {
   private AdminBoundaryContextGenerator countryContext;
   private EntityLinkerProperties linkerProperties;
   private GazetteerSearcher gazateerSearcher;
-  private List<LinkedEntityScorer<AdminBoundaryContext>> scorers = new ArrayList<>();
+  private final List<LinkedEntityScorer<AdminBoundaryContext>> scorers = new ArrayList<>();
 
   @Override
   public List<LinkedSpan> find(String doctext, Span[] sentences, Span[][] tokensBySentence, Span[][] namesBySentence) {
-    ArrayList<LinkedSpan> spans = new ArrayList<LinkedSpan>();
+    ArrayList<LinkedSpan> spans = new ArrayList<>();
 
     if (linkerProperties == null) {
       throw new IllegalArgumentException("EntityLinkerProperties cannot be null");
@@ -90,7 +90,7 @@ public class GeoEntityLinker implements EntityLinker<LinkedSpan> {
         if (geoNamesEntries.isEmpty()) {
           continue;
         }
-        /**
+        /*
          * Normalize the returned scores for this name... this will assist the
          * sort
          */
@@ -109,7 +109,7 @@ public class GeoEntityLinker implements EntityLinker<LinkedSpan> {
             gazetteerEntry.getScoreMap().put("normlucene", normalize);
           }
         }
-        LinkedSpan newspan = new LinkedSpan(geoNamesEntries, names[i], 0);
+        LinkedSpan<BaseLink> newspan = new LinkedSpan<>(geoNamesEntries, names[i], 0);
         newspan.setSearchTerm(matches[i]);
         newspan.setLinkedEntries(geoNamesEntries);
         newspan.setSentenceid(s);
@@ -123,40 +123,37 @@ public class GeoEntityLinker implements EntityLinker<LinkedSpan> {
         scorer.score(spans, doctext, sentences, linkerProperties, context);
       }
     }
-    /**
+    /*
      * sort the data with the best score on top based on the sum of the scores
      * below from the score map for each baselink object
      */
     for (LinkedSpan<BaseLink> s : spans) {
       ArrayList<BaseLink> linkedData = s.getLinkedEntries();
-      Collections.sort(linkedData, Collections.reverseOrder(new Comparator<BaseLink>() {
-        @Override
-        public int compare(BaseLink o1, BaseLink o2) {
-          HashMap<String, Double> o1scoreMap = o1.getScoreMap();
-          HashMap<String, Double> o2scoreMap = o2.getScoreMap();
-          if (o1scoreMap.size() != o2scoreMap.size()) {
-            return 0;
-          }
-          double sumo1 = 0d;
-          double sumo2 = 0d;
-          for (String object : o1scoreMap.keySet()) {
-            if (object.equals("typescore")
-                || object.equals("countrycontext")
-                || object.equals("placenamedicecoef")
-                || object.equals("provincecontext")
-                || object.equals("geohashbin")
-                || object.equals("normlucene")) {
-              sumo1 += o1scoreMap.get(object);
-              sumo2 += o2scoreMap.get(object);
-            }
+      linkedData.sort(Collections.reverseOrder((o1, o2) -> {
+        Map<String, Double> o1scoreMap = o1.getScoreMap();
+        Map<String, Double> o2scoreMap = o2.getScoreMap();
+        if (o1scoreMap.size() != o2scoreMap.size()) {
+          return 0;
+        }
+        double sumo1 = 0d;
+        double sumo2 = 0d;
+        for (String object : o1scoreMap.keySet()) {
+          if (object.equals("typescore")
+                  || object.equals("countrycontext")
+                  || object.equals("placenamedicecoef")
+                  || object.equals("provincecontext")
+                  || object.equals("geohashbin")
+                  || object.equals("normlucene")) {
+            sumo1 += o1scoreMap.get(object);
+            sumo2 += o2scoreMap.get(object);
           }
-
-          return Double.compare(sumo1,
-              sumo2);
         }
+
+        return Double.compare(sumo1,
+                sumo2);
       }));
       //prune the list to topN
-      Iterator iterator = linkedData.iterator();
+      Iterator<BaseLink> iterator = linkedData.iterator();
       int n = 0;
       while (iterator.hasNext()) {
         if (n >= topN) {
@@ -203,7 +200,7 @@ public class GeoEntityLinker implements EntityLinker<LinkedSpan> {
     countryContext = new AdminBoundaryContextGenerator(this.linkerProperties);
     gazateerSearcher = new GazetteerSearcher(this.linkerProperties);
     String rowsRetStr = this.linkerProperties.getProperty("opennlp.geoentitylinker.gaz.rowsreturned", "2");
-    Integer rws = 2;
+    int rws;
     try {
       rws = Integer.valueOf(rowsRetStr);
     } catch (NumberFormatException e) {
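[Editor's note: the comparator rewrite in this file replaces an anonymous `Comparator<BaseLink>` inner class with a lambda handed to `Collections.reverseOrder`, sorting best summed score first. A standalone sketch of the same pattern, with a simplified stand-in type instead of the actual `BaseLink` API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class ScoreSortSketch {

  // Stand-in for BaseLink: a name plus a score map (hypothetical, for illustration).
  static class Link {
    final String name;
    final Map<String, Double> scores;
    Link(String name, Map<String, Double> scores) {
      this.name = name;
      this.scores = scores;
    }
    double total() {
      return scores.values().stream().mapToDouble(Double::doubleValue).sum();
    }
  }

  public static void main(String[] args) {
    List<Link> links = new ArrayList<>(List.of(
        new Link("low", Map.of("typescore", 0.1, "normlucene", 0.2)),
        new Link("high", Map.of("typescore", 0.8, "normlucene", 0.9))));

    // Lambda comparator wrapped in reverseOrder: highest summed score sorts first.
    links.sort(Collections.reverseOrder(
        (o1, o2) -> Double.compare(o1.total(), o2.total())));

    System.out.println(links.get(0).name()); // prints "high"
  }
}
```

`List.sort` plus a lambda is behaviorally identical to the old `Collections.sort` with an anonymous class; only the syntax is modernized.]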
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/indexing/USGSProcessor.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/indexing/USGSProcessor.java
index 59c94b7..804c2db 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/indexing/USGSProcessor.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/indexing/USGSProcessor.java
@@ -241,7 +241,7 @@ public class USGSProcessor {
          */
         String line = adm.getCountryCode() + "\t" + adm.getProvCode() + "\t" + adm.getCountyCode() + "\t" + country + "\t" + province + "\t" + adm.getCountyName() + "\t"
             + "(U\\.S\\.[ $]|U\\.S\\.A\\.[ $]|United States|the US[ $]|a us[ $])" + "\t" + adm.getProvinceName() + "\t" + adm.getCountyName() + "\n";
-        bw.write(line);i
+        bw.write(line);
         ///  System.out.println(line);
 
       }
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/FuzzyStringMatchScorer.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/FuzzyStringMatchScorer.java
index e9634d9..34c58fb 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/FuzzyStringMatchScorer.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/FuzzyStringMatchScorer.java
@@ -34,6 +34,7 @@ public class FuzzyStringMatchScorer implements LinkedEntityScorer<AdminBoundaryC
 
   @Override
   public void score(List<LinkedSpan> linkedSpans, String docText, Span[] sentenceSpans, EntityLinkerProperties properties, AdminBoundaryContext additionalContext) {
+
     for (LinkedSpan<BaseLink> linkedSpan : linkedSpans) {
       for (BaseLink link : linkedSpan.getLinkedEntries()) {
         if (link instanceof GazetteerEntry) {
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/GeoHashBinningScorer.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/GeoHashBinningScorer.java
index d3494e0..acbc19f 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/GeoHashBinningScorer.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/GeoHashBinningScorer.java
@@ -38,10 +38,9 @@ public class GeoHashBinningScorer implements LinkedEntityScorer<AdminBoundaryCon
 
   @Override
   public void score(List<LinkedSpan> linkedSpans, String docText, Span[] sentenceSpans, EntityLinkerProperties properties,  AdminBoundaryContext additionalContext) {
-     //Map<Double, Double> latLongs = new HashMap<Double, Double>();
     List<GazetteerEntry> allGazEntries = new ArrayList<>();
 
-    /**
+    /*
      * collect all the gaz entry references
      */
     for (LinkedSpan<BaseLink> ls : linkedSpans) {
diff --git a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/ModelBasedScorer.java b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/ModelBasedScorer.java
index 01b3269..1ec9fea 100644
--- a/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/ModelBasedScorer.java
+++ b/geoentitylinker-addon/src/main/java/opennlp/addons/geoentitylinker/scoring/ModelBasedScorer.java
@@ -16,8 +16,7 @@
 package opennlp.addons.geoentitylinker.scoring;
 
 import java.io.File;
-import java.io.FileNotFoundException;
-import java.io.IOException;
+import java.lang.invoke.MethodHandles;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -28,7 +27,8 @@ import opennlp.tools.entitylinker.EntityLinkerProperties;
 import opennlp.tools.entitylinker.BaseLink;
 import opennlp.tools.entitylinker.LinkedSpan;
 import opennlp.tools.util.Span;
-import org.apache.log4j.Logger;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  *
@@ -36,7 +36,8 @@ import org.apache.log4j.Logger;
  */
 public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext> {
 
-  private static final Logger LOGGER = Logger.getLogger(ModelBasedScorer.class);
+  private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+  
   DocumentCategorizerME documentCategorizerME;
   DoccatModel doccatModel;
   public static final int RADIUS = 200;
@@ -66,12 +67,8 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
         }
       }
 
-    } catch (FileNotFoundException ex) {
-      LOGGER.error(ex);
-    } catch (IOException ex) {
-      LOGGER.error(ex);
     } catch (Exception ex) {
-      LOGGER.error(ex);
+      LOG.error(ex.getLocalizedMessage(), ex);
     }
   }
 
@@ -89,7 +86,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
   public Map<Integer, String> generateProximalFeatures(List<LinkedSpan> linkedSpans, Span[] sentenceSpans, String docText, int radius) {
     Map<Integer, String> featureBags = new HashMap<>();
     Map<Integer, Integer> nameMentionMap = new HashMap<>();
-    /**
+    /*
      * iterator over the map that contains a mapping of every country code to
      * all of its mentions in the document
      */
@@ -99,7 +96,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
         //don't care about spans that did not get linked to anything at all; nothing to work with
         continue;
       }
-      /**
+      /*
        * get the sentence the name span was found in, the beginning of the
        * sentence will suffice as a centroid for feature generation around the
        * named entity
@@ -107,7 +104,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
       Integer mentionIdx = sentenceSpans[span.getSentenceid()].getStart();
       nameMentionMap.put(i, mentionIdx);
     }
-    /**
+    /*
      * now associate each span to a string that will be used for categorization
      * against the model.
      */
@@ -127,7 +124,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
     if (right <= left) {
       chunk = "";
     } else {
-      /**
+      /*
        * don't want to chop any words in half, so take fron the first space to
        * the last space in the chunk string
        */
@@ -136,7 +133,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
         left = chunk.indexOf(" ");
       }
       right = chunk.lastIndexOf(" ");
-      /**
+      /*
        * now get the substring again with only whole words
        */
       if (left < right) {
@@ -149,7 +146,7 @@ public class ModelBasedScorer implements LinkedEntityScorer<AdminBoundaryContext
 
   private Map<String, Double> getScore(String text) throws Exception {
     Map<String, Double> scoreMap = new HashMap<>();
-    double[] categorize = documentCategorizerME.categorize(text);
+    double[] categorize = documentCategorizerME.categorize(List.of(text).toArray(new String[0]));
     int catSize = documentCategorizerME.getNumberOfCategories();
     for (int i = 0; i < catSize; i++) {
       String category = documentCategorizerME.getCategory(i);
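[Editor's note: the logger migration above (log4j 1.x to SLF4J) declares the logger via `MethodHandles.lookup().lookupClass()` rather than a hard-coded class literal, so the declaration survives copy-paste between classes unchanged. A plain-JDK illustration of that idiom, without the SLF4J dependency:

```java
import java.lang.invoke.MethodHandles;

public class LookupClassSketch {

  // Resolves the enclosing class at runtime; no class name is repeated,
  // so copy-pasting this line into another class needs no edit.
  static final Class<?> SELF = MethodHandles.lookup().lookupClass();

  public static void main(String[] args) {
    // With SLF4J on the classpath, the equivalent logger declaration is:
    //   LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
    System.out.println(SELF.getSimpleName());
  }
}
```

Because `MethodHandles.lookup()` is caller-sensitive, `lookupClass()` always returns the class containing the call, which is exactly what a per-class logger needs.]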
diff --git a/geoentitylinker-addon/src/test/java/apache/opennlp/addons/AppTest.java b/geoentitylinker-addon/src/test/java/apache/opennlp/addons/AppTest.java
deleted file mode 100644
index 60ea0f2..0000000
--- a/geoentitylinker-addon/src/test/java/apache/opennlp/addons/AppTest.java
+++ /dev/null
@@ -1,38 +0,0 @@
-package apache.opennlp.addons;
-
-import junit.framework.Test;
-import junit.framework.TestCase;
-import junit.framework.TestSuite;
-
-/**
- * Unit test for simple App.
- */
-public class AppTest 
-    extends TestCase
-{
-    /**
-     * Create the test case
-     *
-     * @param testName name of the test case
-     */
-    public AppTest( String testName )
-    {
-        super( testName );
-    }
-
-    /**
-     * @return the suite of tests being tested
-     */
-    public static Test suite()
-    {
-        return new TestSuite( AppTest.class );
-    }
-
-    /**
-     * Rigourous Test :-)
-     */
-    public void testApp()
-    {
-        assertTrue( true );
-    }
-}
diff --git a/japanese-addon/build.xml b/japanese-addon/build.xml
index 925f888..1bb60b5 100644
--- a/japanese-addon/build.xml
+++ b/japanese-addon/build.xml
@@ -23,7 +23,7 @@
     <property name="cls.dir" value="classes"/>
     <property name="lib.dir" value="lib"/>
     <property name="test.result.dir" value="test-result"/>
-    <property name="product.jar" value="opennlp-japanese-addon-1.0-SNAPSHOT.jar"/>
+    <property name="product.jar" value="opennlp-japanese-addon-2.2.1-SNAPSHOT.jar"/>
 
     <target name="compile" description="compile source and test code">
         <mkdir dir="${cls.dir}"/>
diff --git a/japanese-addon/pom.xml b/japanese-addon/pom.xml
index 36ac3d1..d10c307 100644
--- a/japanese-addon/pom.xml
+++ b/japanese-addon/pom.xml
@@ -1,31 +1,86 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+-->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
   <modelVersion>4.0.0</modelVersion>
-  <groupId>org.apache.opennlp</groupId>
+  <parent>
+    <groupId>org.apache.opennlp</groupId>
+    <artifactId>opennlp-addons</artifactId>
+    <version>2.2.1-SNAPSHOT</version>
+  </parent>
+
   <artifactId>japanese-addon</artifactId>
   <packaging>jar</packaging>
-  <version>1.0-SNAPSHOT</version>
-  <name>japanese-addon</name>
-  <url>http://maven.apache.org</url>
-
-  <properties>
-    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-    <maven.compiler.source>1.8</maven.compiler.source>
-    <maven.compiler.target>1.8</maven.compiler.target>
-  </properties>
+  <version>2.2.1-SNAPSHOT</version>
+  <name>Apache OpenNLP Japanese Addon</name>
 
   <dependencies>
     <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <version>4.13.1</version>
-      <scope>test</scope>
+      <groupId>org.apache.opennlp</groupId>
+      <artifactId>opennlp-tools</artifactId>
     </dependency>
 
     <dependency>
-      <groupId>org.apache.opennlp</groupId>
-      <artifactId>opennlp-tools</artifactId>
-      <version>1.9.1-SNAPSHOT</version>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-api</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-engine</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-params</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.logging.log4j</groupId>
+      <artifactId>log4j-api</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.logging.log4j</groupId>
+      <artifactId>log4j-core</artifactId>
+    </dependency>
+    
+    <dependency>
+      <groupId>org.apache.logging.log4j</groupId>
+      <artifactId>log4j-slf4j-impl</artifactId>
     </dependency>
   </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <configuration>
+          <source>${maven.compiler.source}</source>
+          <target>${maven.compiler.target}</target>
+          <compilerArgument>-Xlint</compilerArgument>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 </project>
diff --git a/japanese-addon/src/test/java/opennlp/tools/namefind/AuxiliaryInfoUtilTest.java b/japanese-addon/src/test/java/opennlp/tools/namefind/AuxiliaryInfoUtilTest.java
index 91bd756..4c0bcc1 100644
--- a/japanese-addon/src/test/java/opennlp/tools/namefind/AuxiliaryInfoUtilTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/namefind/AuxiliaryInfoUtilTest.java
@@ -17,56 +17,59 @@
 
 package opennlp.tools.namefind;
 
-import org.junit.Assert;
-import org.junit.Test;
+import org.junit.jupiter.api.Assertions;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
 
 public class AuxiliaryInfoUtilTest {
 
   @Test
   public void testGetSeparatorIndex() throws Exception {
-    Assert.assertEquals(0, AuxiliaryInfoUtil.getSeparatorIndex("/POStag"));
-    Assert.assertEquals(1, AuxiliaryInfoUtil.getSeparatorIndex("1/POStag"));
-    Assert.assertEquals(10, AuxiliaryInfoUtil.getSeparatorIndex("word/stuff/POStag"));
+    assertEquals(0, AuxiliaryInfoUtil.getSeparatorIndex("/POStag"));
+    assertEquals(1, AuxiliaryInfoUtil.getSeparatorIndex("1/POStag"));
+    assertEquals(10, AuxiliaryInfoUtil.getSeparatorIndex("word/stuff/POStag"));
   }
 
-  @Test(expected = RuntimeException.class)
+  @Test
   public void testGetSeparatorIndexNoPos() throws Exception {
-    AuxiliaryInfoUtil.getSeparatorIndex("NOPOStags");
+    Assertions.assertThrows(RuntimeException.class, () ->
+            AuxiliaryInfoUtil.getSeparatorIndex("NOPOStags"));
   }
 
   @Test
   public void testGetWordPart() throws Exception {
-    Assert.assertEquals(" ", AuxiliaryInfoUtil.getWordPart("/POStag"));
-    Assert.assertEquals("1", AuxiliaryInfoUtil.getWordPart("1/POStag"));
-    Assert.assertEquals("word", AuxiliaryInfoUtil.getWordPart("word/POStag"));
-    Assert.assertEquals("word/stuff", AuxiliaryInfoUtil.getWordPart("word/stuff/POStag"));
+    assertEquals(" ", AuxiliaryInfoUtil.getWordPart("/POStag"));
+    assertEquals("1", AuxiliaryInfoUtil.getWordPart("1/POStag"));
+    assertEquals("word", AuxiliaryInfoUtil.getWordPart("word/POStag"));
+    assertEquals("word/stuff", AuxiliaryInfoUtil.getWordPart("word/stuff/POStag"));
   }
 
   @Test
   public void testGetWordParts() throws Exception {
     String[] results = AuxiliaryInfoUtil.getWordParts(new String[]{"1/A", "234/B", "3456/C", "/D"});
-    Assert.assertEquals(4, results.length);
-    Assert.assertEquals("1", results[0]);
-    Assert.assertEquals("234", results[1]);
-    Assert.assertEquals("3456", results[2]);
-    Assert.assertEquals(" ", results[3]);
+    assertEquals(4, results.length);
+    assertEquals("1", results[0]);
+    assertEquals("234", results[1]);
+    assertEquals("3456", results[2]);
+    assertEquals(" ", results[3]);
   }
 
   @Test
   public void testGetAuxPart() throws Exception {
-    Assert.assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("/POStag"));
-    Assert.assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("1/POStag"));
-    Assert.assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("word/POStag"));
-    Assert.assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("word/stuff/POStag"));
+    assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("/POStag"));
+    assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("1/POStag"));
+    assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("word/POStag"));
+    assertEquals("POStag", AuxiliaryInfoUtil.getAuxPart("word/stuff/POStag"));
   }
 
   @Test
   public void testGetAuxParts() throws Exception {
     String[] results = AuxiliaryInfoUtil.getAuxParts(new String[] {"1/ABC", "234/B", "3456/CD", "/DEFGH"});
-    Assert.assertEquals(4, results.length);
-    Assert.assertEquals("ABC", results[0]);
-    Assert.assertEquals("B", results[1]);
-    Assert.assertEquals("CD", results[2]);
-    Assert.assertEquals("DEFGH", results[3]);
+    assertEquals(4, results.length);
+    assertEquals("ABC", results[0]);
+    assertEquals("B", results[1]);
+    assertEquals("CD", results[2]);
+    assertEquals("DEFGH", results[3]);
   }
 }
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/AuxiliaryInfoAwareDelegateFeatureGeneratorTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/AuxiliaryInfoAwareDelegateFeatureGeneratorTest.java
index aa03110..8c8f44d 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/AuxiliaryInfoAwareDelegateFeatureGeneratorTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/AuxiliaryInfoAwareDelegateFeatureGeneratorTest.java
@@ -20,9 +20,10 @@ package opennlp.tools.util.featuregen;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
 
 public class AuxiliaryInfoAwareDelegateFeatureGeneratorTest {
 
@@ -30,7 +31,7 @@ public class AuxiliaryInfoAwareDelegateFeatureGeneratorTest {
 
   private List<String> features;
 
-  @Before
+  @BeforeEach
   public void setUp() throws Exception {
     features = new ArrayList<>();
   }
@@ -41,8 +42,8 @@ public class AuxiliaryInfoAwareDelegateFeatureGeneratorTest {
         new IdentityFeatureGenerator(), false);
 
     featureGenerator.createFeatures(features, testSentence, 2, null);
-    Assert.assertEquals(1, features.size());
-    Assert.assertEquals("w3", features.get(0));
+    assertEquals(1, features.size());
+    assertEquals("w3", features.get(0));
   }
 
   @Test
@@ -51,8 +52,8 @@ public class AuxiliaryInfoAwareDelegateFeatureGeneratorTest {
         new IdentityFeatureGenerator(), true);
 
     featureGenerator.createFeatures(features, testSentence, 3, null);
-    Assert.assertEquals(1, features.size());
-    Assert.assertEquals("pos4", features.get(0));
+    assertEquals(1, features.size());
+    assertEquals("pos4", features.get(0));
   }
 
   static class IdentityFeatureGenerator implements AdaptiveFeatureGenerator {
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/BigramNameFeatureGeneratorTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/BigramNameFeatureGeneratorTest.java
index ddfa024..46d952e 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/BigramNameFeatureGeneratorTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/BigramNameFeatureGeneratorTest.java
@@ -20,18 +20,19 @@ package opennlp.tools.util.featuregen.lang.jpn;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
 import opennlp.tools.util.featuregen.AdaptiveFeatureGenerator;
 
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
 public class BigramNameFeatureGeneratorTest {
 
   private List<String> features;
   static String[] testSentence = new String[] {"This", "is", "an", "example", "sentence"};
 
-  @Before
+  @BeforeEach
   public void setUp() throws Exception {
     features = new ArrayList<>();
   }
@@ -45,9 +46,9 @@ public class BigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("w,nw=This,is", features.get(0));
-    Assert.assertEquals("wc,nc=alpha,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("w,nw=This,is", features.get(0));
+    assertEquals("wc,nc=alpha,alpha", features.get(1));
   }
 
   @Test
@@ -59,11 +60,11 @@ public class BigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(4, features.size());
-    Assert.assertEquals("pw,w=is,an", features.get(0));
-    Assert.assertEquals("pwc,wc=alpha,alpha", features.get(1));
-    Assert.assertEquals("w,nw=an,example", features.get(2));
-    Assert.assertEquals("wc,nc=alpha,alpha", features.get(3));
+    assertEquals(4, features.size());
+    assertEquals("pw,w=is,an", features.get(0));
+    assertEquals("pwc,wc=alpha,alpha", features.get(1));
+    assertEquals("w,nw=an,example", features.get(2));
+    assertEquals("wc,nc=alpha,alpha", features.get(3));
   }
 
   @Test
@@ -75,9 +76,9 @@ public class BigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("pw,w=example,sentence", features.get(0));
-    Assert.assertEquals("pwc,wc=alpha,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("pw,w=example,sentence", features.get(0));
+    assertEquals("pwc,wc=alpha,alpha", features.get(1));
   }
 
   @Test
@@ -88,9 +89,8 @@ public class BigramNameFeatureGeneratorTest {
     final int testTokenIndex = 0;
 
     AdaptiveFeatureGenerator generator = new BigramNameFeatureGenerator();
-
     generator.createFeatures(features, shortSentence, testTokenIndex, null);
 
-    Assert.assertEquals(0, features.size());
+    assertEquals(0, features.size());
   }
 }
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/FeatureGeneratorUtilTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/FeatureGeneratorUtilTest.java
index ce5816f..1b52161 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/FeatureGeneratorUtilTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/FeatureGeneratorUtilTest.java
@@ -17,51 +17,52 @@
 
 package opennlp.tools.util.featuregen.lang.jpn;
 
-import org.junit.Assert;
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
 
 public class FeatureGeneratorUtilTest {
 
   @Test
   public void test() {
     // digits
-    Assert.assertEquals("digit", FeatureGeneratorUtil.tokenFeature("12"));
-    Assert.assertEquals("digit", FeatureGeneratorUtil.tokenFeature("1234"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("abcd234"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("1234-56"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("4/6/2017"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("1,234,567"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("12.34567"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("123(456)7890"));
+    assertEquals("digit", FeatureGeneratorUtil.tokenFeature("12"));
+    assertEquals("digit", FeatureGeneratorUtil.tokenFeature("1234"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("abcd234"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("1234-56"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("4/6/2017"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("1,234,567"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("12.34567"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("123(456)7890"));
 
     // letters
-    Assert.assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("opennlp"));
-    Assert.assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("O"));
-    Assert.assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("OPENNLP"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("A."));
-    Assert.assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("Mike"));
-    Assert.assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("somethingStupid"));
+    assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("opennlp"));
+    assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("O"));
+    assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("OPENNLP"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("A."));
+    assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("Mike"));
+    assertEquals("alpha", FeatureGeneratorUtil.tokenFeature("somethingStupid"));
 
     // symbols
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature(","));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("."));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("?"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("!"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("#"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("%"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("&"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature(","));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("."));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("?"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("!"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("#"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("%"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("&"));
   }
 
   @Test
   public void testJapanese() {
     // Hiragana
-    Assert.assertEquals("hira", FeatureGeneratorUtil.tokenFeature("そういえば"));
-    Assert.assertEquals("hira", FeatureGeneratorUtil.tokenFeature("おーぷん・そ〜す・そふとうぇあ"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("あぱっち・そふとうぇあ財団"));
+    assertEquals("hira", FeatureGeneratorUtil.tokenFeature("そういえば"));
+    assertEquals("hira", FeatureGeneratorUtil.tokenFeature("おーぷん・そ〜す・そふとうぇあ"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("あぱっち・そふとうぇあ財団"));
 
     // Katakana
-    Assert.assertEquals("kata", FeatureGeneratorUtil.tokenFeature("ジャパン"));
-    Assert.assertEquals("kata", FeatureGeneratorUtil.tokenFeature("オープン・ソ〜ス・ソフトウェア"));
-    Assert.assertEquals("other", FeatureGeneratorUtil.tokenFeature("アパッチ・ソフトウェア財団"));
+    assertEquals("kata", FeatureGeneratorUtil.tokenFeature("ジャパン"));
+    assertEquals("kata", FeatureGeneratorUtil.tokenFeature("オープン・ソ〜ス・ソフトウェア"));
+    assertEquals("other", FeatureGeneratorUtil.tokenFeature("アパッチ・ソフトウェア財団"));
   }
 }
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenClassFeatureGeneratorTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenClassFeatureGeneratorTest.java
index be9359f..dc6962d 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenClassFeatureGeneratorTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenClassFeatureGeneratorTest.java
@@ -20,18 +20,19 @@ package opennlp.tools.util.featuregen.lang.jpn;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
 import opennlp.tools.util.featuregen.AdaptiveFeatureGenerator;
 
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
 public class TokenClassFeatureGeneratorTest {
 
   private List<String> features;
   static String[] testSentence = new String[] {"This", "is", "an", "Example", "sentence"};
 
-  @Before
+  @BeforeEach
   public void setUp() throws Exception {
     features = new ArrayList<>();
   }
@@ -45,9 +46,9 @@ public class TokenClassFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("wc=alpha", features.get(0));
-    Assert.assertEquals("w&c=example,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("wc=alpha", features.get(0));
+    assertEquals("w&c=example,alpha", features.get(1));
   }
 
   @Test
@@ -59,7 +60,7 @@ public class TokenClassFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(1, features.size());
-    Assert.assertEquals("wc=alpha", features.get(0));
+    assertEquals(1, features.size());
+    assertEquals("wc=alpha", features.get(0));
   }
 }
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenPatternFeatureGeneratorTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenPatternFeatureGeneratorTest.java
index d74051e..24509ef 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenPatternFeatureGeneratorTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TokenPatternFeatureGeneratorTest.java
@@ -20,17 +20,18 @@ package opennlp.tools.util.featuregen.lang.jpn;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
 import opennlp.tools.util.featuregen.AdaptiveFeatureGenerator;
 
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
 public class TokenPatternFeatureGeneratorTest {
 
   private List<String> features;
 
-  @Before
+  @BeforeEach
   public void setUp() throws Exception {
     features = new ArrayList<>();
   }
@@ -44,8 +45,8 @@ public class TokenPatternFeatureGeneratorTest {
     AdaptiveFeatureGenerator generator = new TokenPatternFeatureGenerator();
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
-    Assert.assertEquals(1, features.size());
-    Assert.assertEquals("st=example", features.get(0));
+    assertEquals(1, features.size());
+    assertEquals("st=example", features.get(0));
   }
 
   @Test
@@ -57,20 +58,20 @@ public class TokenPatternFeatureGeneratorTest {
     AdaptiveFeatureGenerator generator = new TokenPatternFeatureGenerator();
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
-    Assert.assertEquals(14, features.size());
-    Assert.assertEquals("stn=5", features.get(0));
-    Assert.assertEquals("pt2=alphaalpha", features.get(1));
-    Assert.assertEquals("pt3=alphaalphaalpha", features.get(2));
-    Assert.assertEquals("st=this", features.get(3));
-    Assert.assertEquals("pt2=alphaalpha", features.get(4));
-    Assert.assertEquals("pt3=alphaalphaalpha", features.get(5));
-    Assert.assertEquals("st=is", features.get(6));
-    Assert.assertEquals("pt2=alphaalpha", features.get(7));
-    Assert.assertEquals("pt3=alphaalphaalpha", features.get(8));
-    Assert.assertEquals("st=an", features.get(9));
-    Assert.assertEquals("pt2=alphaalpha", features.get(10));
-    Assert.assertEquals("st=example", features.get(11));
-    Assert.assertEquals("st=sentence", features.get(12));
-    Assert.assertEquals("pta=alphaalphaalphaalphaalpha", features.get(13));
+    assertEquals(14, features.size());
+    assertEquals("stn=5", features.get(0));
+    assertEquals("pt2=alphaalpha", features.get(1));
+    assertEquals("pt3=alphaalphaalpha", features.get(2));
+    assertEquals("st=this", features.get(3));
+    assertEquals("pt2=alphaalpha", features.get(4));
+    assertEquals("pt3=alphaalphaalpha", features.get(5));
+    assertEquals("st=is", features.get(6));
+    assertEquals("pt2=alphaalpha", features.get(7));
+    assertEquals("pt3=alphaalphaalpha", features.get(8));
+    assertEquals("st=an", features.get(9));
+    assertEquals("pt2=alphaalpha", features.get(10));
+    assertEquals("st=example", features.get(11));
+    assertEquals("st=sentence", features.get(12));
+    assertEquals("pta=alphaalphaalphaalphaalpha", features.get(13));
   }
 }
diff --git a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TrigramNameFeatureGeneratorTest.java b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TrigramNameFeatureGeneratorTest.java
index 546a0bd..789c508 100644
--- a/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TrigramNameFeatureGeneratorTest.java
+++ b/japanese-addon/src/test/java/opennlp/tools/util/featuregen/lang/jpn/TrigramNameFeatureGeneratorTest.java
@@ -20,18 +20,19 @@ package opennlp.tools.util.featuregen.lang.jpn;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.Test;
-
 import opennlp.tools.util.featuregen.AdaptiveFeatureGenerator;
 
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+
 public class TrigramNameFeatureGeneratorTest {
 
   private List<String> features;
   static String[] testSentence = new String[] {"This", "is", "an", "example", "sentence"};
 
-  @Before
+  @BeforeEach
   public void setUp() throws Exception {
     features = new ArrayList<>();
   }
@@ -45,9 +46,9 @@ public class TrigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("w,nw,nnw=This,is,an", features.get(0));
-    Assert.assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("w,nw,nnw=This,is,an", features.get(0));
+    assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(1));
   }
 
   @Test
@@ -59,9 +60,9 @@ public class TrigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("w,nw,nnw=is,an,example", features.get(0));
-    Assert.assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("w,nw,nnw=is,an,example", features.get(0));
+    assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(1));
   }
 
   @Test
@@ -73,11 +74,11 @@ public class TrigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(4, features.size());
-    Assert.assertEquals("ppw,pw,w=This,is,an", features.get(0));
-    Assert.assertEquals("ppwc,pwc,wc=alpha,alpha,alpha", features.get(1));
-    Assert.assertEquals("w,nw,nnw=an,example,sentence", features.get(2));
-    Assert.assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(3));
+    assertEquals(4, features.size());
+    assertEquals("ppw,pw,w=This,is,an", features.get(0));
+    assertEquals("ppwc,pwc,wc=alpha,alpha,alpha", features.get(1));
+    assertEquals("w,nw,nnw=an,example,sentence", features.get(2));
+    assertEquals("wc,nwc,nnwc=alpha,alpha,alpha", features.get(3));
   }
 
   @Test
@@ -89,9 +90,9 @@ public class TrigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, testSentence, testTokenIndex, null);
 
-    Assert.assertEquals(2, features.size());
-    Assert.assertEquals("ppw,pw,w=an,example,sentence", features.get(0));
-    Assert.assertEquals("ppwc,pwc,wc=alpha,alpha,alpha", features.get(1));
+    assertEquals(2, features.size());
+    assertEquals("ppw,pw,w=an,example,sentence", features.get(0));
+    assertEquals("ppwc,pwc,wc=alpha,alpha,alpha", features.get(1));
   }
 
   @Test
@@ -105,6 +106,6 @@ public class TrigramNameFeatureGeneratorTest {
 
     generator.createFeatures(features, shortSentence, testTokenIndex, null);
 
-    Assert.assertEquals(0, features.size());
+    assertEquals(0, features.size());
   }
 }
diff --git a/jwnl-addon/pom.xml b/jwnl-addon/pom.xml
index be38764..9b0a8ed 100644
--- a/jwnl-addon/pom.xml
+++ b/jwnl-addon/pom.xml
@@ -1,50 +1,78 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+-->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
-
-  <groupId>org.apache.opennlp</groupId>
+  <parent>
+    <groupId>org.apache.opennlp</groupId>
+    <artifactId>opennlp-addons</artifactId>
+    <version>2.2.1-SNAPSHOT</version>
+  </parent>
+  
   <artifactId>jwnl-addon</artifactId>
-  <version>1.0-SNAPSHOT</version>
+  <version>2.2.1-SNAPSHOT</version>
   <packaging>jar</packaging>
-  <name>JWNL Addon</name>
-
-  <url>http://maven.apache.org</url>
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <version>2.3.2</version>
-                <configuration>
-                    <source>1.7</source>
-                    <target>1.7</target>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-    <properties>
-    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-  </properties>
+  <name>Apache OpenNLP JWNL Addon</name>
 
   <dependencies>
+    <dependency>
+      <groupId>org.apache.opennlp</groupId>
+      <artifactId>opennlp-tools</artifactId>
+    </dependency>
+
     <dependency>
       <groupId>net.sf.jwordnet</groupId>
       <artifactId>jwnl</artifactId>
       <version>1.3.3</version>
       <scope>compile</scope>
     </dependency>
+    
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-api</artifactId>
+    </dependency>
 
     <dependency>
-      <groupId>org.apache.opennlp</groupId>
-      <artifactId>opennlp-tools</artifactId>
-      <version>1.6.0-SNAPSHOT</version>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-engine</artifactId>
     </dependency>
 
     <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <version>3.8.1</version>
-      <scope>test</scope>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-params</artifactId>
     </dependency>
   </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <configuration>
+          <source>${maven.compiler.source}</source>
+          <target>${maven.compiler.target}</target>
+          <compilerArgument>-Xlint</compilerArgument>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 </project>
diff --git a/jwnl-addon/src/main/java/opennlp/jwnl/lemmatizer/JWNLLemmatizer.java b/jwnl-addon/src/main/java/opennlp/jwnl/lemmatizer/JWNLLemmatizer.java
index f84530a..d8f12ac 100644
--- a/jwnl-addon/src/main/java/opennlp/jwnl/lemmatizer/JWNLLemmatizer.java
+++ b/jwnl-addon/src/main/java/opennlp/jwnl/lemmatizer/JWNLLemmatizer.java
@@ -18,11 +18,11 @@
 package opennlp.jwnl.lemmatizer;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.HashMap;
+import java.util.List;
 import java.util.Map;
 
-import opennlp.tools.lemmatizer.DictionaryLemmatizer;
-
 import net.didion.jwnl.JWNLException;
 import net.didion.jwnl.data.Adjective;
 import net.didion.jwnl.data.FileDictionaryElementFactory;
@@ -42,8 +42,9 @@ import net.didion.jwnl.dictionary.morph.Operation;
 import net.didion.jwnl.dictionary.morph.TokenizerOperation;
 import net.didion.jwnl.princeton.data.PrincetonWN17FileDictionaryElementFactory;
 import net.didion.jwnl.princeton.file.PrincetonRandomAccessDictionaryFile;
+import opennlp.tools.lemmatizer.Lemmatizer;
 
-public class JWNLLemmatizer implements DictionaryLemmatizer {
+public class JWNLLemmatizer implements Lemmatizer {
 
   private net.didion.jwnl.dictionary.Dictionary dict;
   private MorphologicalProcessor morphy;
@@ -52,18 +53,18 @@ public class JWNLLemmatizer implements DictionaryLemmatizer {
    * Creates JWNL dictionary and morphological processor objects in
    * JWNLemmatizer constructor. It also loads the JWNL configuration into the
    * constructor. 
-   * 
+   * <p>
    * Constructor code based on Apache OpenNLP JWNLDictionary class. 
    * 
    * @param wnDirectory
    * @throws IOException
-   * @throws JWNLException
    */
-  public JWNLLemmatizer(String wnDirectory) throws IOException, JWNLException {
+  public JWNLLemmatizer(String wnDirectory) throws IOException {
+    super();
     PointerType.initialize();
     Adjective.initialize();
     VerbFrame.initialize();
-    Map<POS, String[][]> suffixMap = new HashMap<POS, String[][]>();
+    Map<POS, String[][]> suffixMap = new HashMap<>();
     suffixMap.put(POS.NOUN, new String[][] { { "s", "" }, { "ses", "s" },
         { "xes", "x" }, { "zes", "z" }, { "ches", "ch" }, { "shes", "sh" },
         { "men", "man" }, { "ies", "y" } });
@@ -91,8 +92,7 @@ public class JWNLLemmatizer implements DictionaryLemmatizer {
     dict = net.didion.jwnl.dictionary.Dictionary.getInstance();
     morphy = dict.getMorphologicalProcessor();
   }
-  
-  
+
   /**
    * It takes a word and a POS tag and obtains a word's lemma from WordNet.
    * 
@@ -121,7 +121,7 @@ public class JWNLLemmatizer implements DictionaryLemmatizer {
       if (baseForm != null) {
         lemma = baseForm.getLemma().toString();
       }
-      else if (baseForm == null && postag.startsWith(String.valueOf(constantTag))) {
+      else if (baseForm == null && postag.startsWith(constantTag)) {
           lemma = word;
         }
         else {
@@ -134,5 +134,18 @@ public class JWNLLemmatizer implements DictionaryLemmatizer {
     return lemma;
   }
 
+  @Override
+  public String[] lemmatize(final String[] tokens, final String[] postags) {
+    List<String> lemmas = new ArrayList<>();
+    for (int i = 0; i < tokens.length; i++) {
+      lemmas.add(this.lemmatize(tokens[i], postags[i]));
+    }
+    return lemmas.toArray(new String[0]);
+  }
+
+  @Override
+  public List<List<String>> lemmatize(final List<String> tokens, final List<String> posTags) {
+    throw new UnsupportedOperationException("Method not implemented here!");
+  }
 }
 
diff --git a/liblinear-addon/pom.xml b/liblinear-addon/pom.xml
index 7fa3e51..52f67a3 100644
--- a/liblinear-addon/pom.xml
+++ b/liblinear-addon/pom.xml
@@ -21,32 +21,20 @@
 
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 	<modelVersion>4.0.0</modelVersion>
-	
 	<parent>
-	    <groupId>org.apache.opennlp</groupId>
-	    <artifactId>opennlp</artifactId>
-	    <version>1.6.0-SNAPSHOT</version>
-	    <relativePath>../opennlp/pom.xml</relativePath>
-    </parent>
-    
-	<artifactId>opennlp-liblinear-addon</artifactId>
+		<groupId>org.apache.opennlp</groupId>
+		<artifactId>opennlp-addons</artifactId>
+		<version>2.2.1-SNAPSHOT</version>
+	</parent>
+	
+	<artifactId>liblinear-addon</artifactId>
 	<packaging>jar</packaging>
-	<name>Apache OpenNLP Liblinear Addon</name>
-
-	<repositories>
-		<repository>
-			<id>ApacheIncubatorRepository</id>
-			<url>
-				http://people.apache.org/repo/m2-incubating-repository/
-			</url>
-		</repository>
-	</repositories>
+	<name>Apache OpenNLP LibLinear Addon</name>
 
 	<dependencies>
 		<dependency>
 			<groupId>org.apache.opennlp</groupId>
 			<artifactId>opennlp-tools</artifactId>
-			<version>1.6.0-SNAPSHOT</version>
 		</dependency>
 
 		<dependency>
@@ -56,9 +44,18 @@
 		</dependency>
 
 		<dependency>
-			<groupId>junit</groupId>
-			<artifactId>junit</artifactId>
-			<scope>test</scope>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-api</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-engine</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-params</artifactId>
 		</dependency>
 	</dependencies>
 
diff --git a/liblinear-addon/src/main/java/LiblinearTrainer.java b/liblinear-addon/src/main/java/LiblinearTrainer.java
index e3bc0fd..ba0e4a4 100644
--- a/liblinear-addon/src/main/java/LiblinearTrainer.java
+++ b/liblinear-addon/src/main/java/LiblinearTrainer.java
@@ -23,9 +23,6 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 
-import opennlp.tools.ml.AbstractEventTrainer;
-import opennlp.tools.ml.model.DataIndexer;
-import opennlp.tools.ml.model.MaxentModel;
 import de.bwaldvogel.liblinear.Feature;
 import de.bwaldvogel.liblinear.FeatureNode;
 import de.bwaldvogel.liblinear.Linear;
@@ -34,26 +31,25 @@ import de.bwaldvogel.liblinear.Parameter;
 import de.bwaldvogel.liblinear.Problem;
 import de.bwaldvogel.liblinear.SolverType;
 
+import opennlp.tools.ml.AbstractEventTrainer;
+import opennlp.tools.ml.model.DataIndexer;
+import opennlp.tools.ml.model.MaxentModel;
+import opennlp.tools.util.TrainingParameters;
+
 public class LiblinearTrainer extends AbstractEventTrainer {
 
-  private SolverType solverType;
-  private Double c;
-  private Double eps;
-  private Double p;
-  
-  private int bias;
+  private final SolverType solverType;
+  private final double c;
+  private final double eps;
+  private final double p;
+  private final int bias;
   
-  public LiblinearTrainer() {
-  }
-
-  @Override
-  public void init(Map<String, String> trainParams,
-      Map<String, String> reportMap) {
-    String solverTypeName = trainParams.get("solverType");
+  public LiblinearTrainer(TrainingParameters trainParams) {
+    String solverTypeName = trainParams.getStringParameter("solverType", "");
     
     if (solverTypeName != null) {
       try {
-        solverType = SolverType.valueOf(trainParams.get("solverType"));
+        solverType = SolverType.valueOf(trainParams.getStringParameter("solverType", ""));
       }
       catch (IllegalArgumentException e) {
         throw new IllegalArgumentException("solverType [" + solverTypeName + "] is not available!");
@@ -63,42 +59,10 @@ public class LiblinearTrainer extends AbstractEventTrainer {
       throw new IllegalArgumentException("solverType needs to be specified!");
     }
     
-    String cValueString = trainParams.get("c");
-    
-    if (cValueString != null) {
-      c = Double.valueOf(cValueString);
-    }
-    else {
-      throw new IllegalArgumentException("c must be specified");
-    }
-    
-    // eps
-    String epsValueString = trainParams.get("eps");
-
-    if (epsValueString != null) {
-      eps = Double.valueOf(epsValueString);
-    }
-    else {
-      throw new IllegalArgumentException("eps must be specified");
-    }
-
-    String pValueString = trainParams.get("p");
-
-    if (pValueString != null) {
-      p = Double.valueOf(pValueString);
-    }
-    else {
-      throw new IllegalArgumentException("p must be specified");
-    }
-    
-    String biasValueString = trainParams.get("bias");
-    
-    if (biasValueString != null) {
-      bias = Integer.valueOf(biasValueString);
-    }
-    else {
-      throw new IllegalArgumentException("eps must be specified");
-    }    
+    c = trainParams.getDoubleParameter("c", 0);
+    eps = trainParams.getDoubleParameter("eps", 0);
+    p = trainParams.getDoubleParameter("p", 0);
+    bias = trainParams.getIntParameter("bias", 0);
   }
   
   private static Problem constructProblem(List<Double> vy, List<Feature[]> vx, int maxIndex, double bias) {
@@ -135,8 +99,8 @@ public class LiblinearTrainer extends AbstractEventTrainer {
   @Override
   public MaxentModel doTrain(DataIndexer indexer) throws IOException {
 
-    List<Double> vy = new ArrayList<Double>();
-    List<Feature[]> vx = new ArrayList<Feature[]>();
+    List<Double> vy = new ArrayList<>();
+    List<Feature[]> vx = new ArrayList<>();
 
     // outcomes
     int outcomes[] = indexer.getOutcomeList();
@@ -147,9 +111,9 @@ public class LiblinearTrainer extends AbstractEventTrainer {
     for (int i = 0; i < indexer.getContexts().length; i++) {
 
       int outcome = outcomes[i];
-      vy.add(Double.valueOf(outcome));
+      vy.add((double) outcome);
 
-      int features[] = indexer.getContexts()[i];
+      int[] features = indexer.getContexts()[i];
 
       Feature[] x;
       if (bias >= 0) {
@@ -160,7 +124,7 @@ public class LiblinearTrainer extends AbstractEventTrainer {
 
       // for each feature ...
       for (int fi = 0; fi < features.length; fi++) {
-        // TODO: SHOUDL BE indexer.getNumTimesEventsSeen()[i] and not fi !!!
+        // TODO: SHOULD BE indexer.getNumTimesEventsSeen()[i] and not fi !!!
         x[fi] = new FeatureNode(features[fi] + 1, indexer.getNumTimesEventsSeen()[i]);
       } 
 
@@ -176,9 +140,9 @@ public class LiblinearTrainer extends AbstractEventTrainer {
     
     Model liblinearModel = Linear.train(problem, parameter);
 
-    Map<String, Integer> predMap = new HashMap<String, Integer>();
+    Map<String, Integer> predMap = new HashMap<>();
     
-    String predLabels[] = indexer.getPredLabels();
+    String[] predLabels = indexer.getPredLabels();
     for (int i = 0; i < predLabels.length; i++) {
       predMap.put(predLabels[i], i);
     }
diff --git a/modelbuilder-addon/pom.xml b/modelbuilder-addon/pom.xml
index 4a9c886..b139d3b 100644
--- a/modelbuilder-addon/pom.xml
+++ b/modelbuilder-addon/pom.xml
@@ -1,35 +1,72 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+-->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
- <parent>
+  <parent>
     <groupId>org.apache.opennlp</groupId>
-    <artifactId>opennlp</artifactId>
-    <version>1.6.0-SNAPSHOT</version>
-    <relativePath>../opennlp/pom.xml</relativePath>
+    <artifactId>opennlp-addons</artifactId>
+    <version>2.2.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>modelbuilder-addon</artifactId>
-  <version>1.0-SNAPSHOT</version>
+  <version>2.2.1-SNAPSHOT</version>
   <packaging>jar</packaging>
 
-  <name>modelbuilder-addon</name>
-  <url>http://maven.apache.org</url>
-
-  <properties>
-    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-  </properties>
+  <name>Apache OpenNLP ModelBuilder Addon</name>
 
   <dependencies>
     <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <version>3.8.1</version>
-      <scope>test</scope>
-    </dependency>
-      <dependency>
       <groupId>org.apache.opennlp</groupId>
       <artifactId>opennlp-tools</artifactId>
-      <version>1.6.0-SNAPSHOT</version>
+    </dependency>
+
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-api</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-engine</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.junit.jupiter</groupId>
+      <artifactId>junit-jupiter-params</artifactId>
     </dependency>
   </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-compiler-plugin</artifactId>
+        <configuration>
+          <source>${maven.compiler.source}</source>
+          <target>${maven.compiler.target}</target>
+          <compilerArgument>-Xlint</compilerArgument>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 </project>
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/DefaultModelBuilderUtil.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/DefaultModelBuilderUtil.java
index 81ff9fd..b613877 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/DefaultModelBuilderUtil.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/DefaultModelBuilderUtil.java
@@ -24,17 +24,17 @@ import opennlp.addons.modelbuilder.impls.GenericModelGenerator;
 import opennlp.addons.modelbuilder.impls.GenericModelableImpl;
 
 /**
- *
- * Utilizes the filebased implementations to produce an NER model from user
- * The basic processing is such
- * read in the list of known entities
- * annotate the sentences based on the list of known entities
- * create a model from the annotations
- * perform NER with the model on the sentences
- * add the NER results to the annotations
- * rebuild the model
- * loop
- * defined data
+ * Utilizes the file-based implementations to produce an NER model from user-defined data.
+ * <p>
+ * The basic processing is as follows: read in the list of known entities, then
+ * <ul>
+ *   <li>annotate the sentences based on the list of known entities,</li>
+ *   <li>create a model from the annotations,</li>
+ *   <li>perform NER with the model on the sentences,</li>
+ *   <li>add the NER results to the annotations,</li>
+ *   <li>rebuild the model,</li>
+ *   <li>loop</li>
+ * </ul>
  */
 public class DefaultModelBuilderUtil {
 
@@ -74,20 +74,23 @@ public class DefaultModelBuilderUtil {
     params.setKnownEntitiesFile(knownEntities);
     params.setModelFile(modelOutFile);
     params.setKnownEntityBlacklist(knownEntitiesBlacklist);
-    /**
+
+    /*
      * sentence providers feed this process with user data derived sentences
      * this impl just reads line by line through a file
      */
     SentenceProvider sentenceProvider = new FileSentenceProvider();
     sentenceProvider.setParameters(params);
-    /**
+
+    /*
      * KnownEntityProviders provide a seed list of known entities... such as
      * Barack Obama for person, or Germany for location obviously these would
-     * want to be prolific, non ambiguous names
+     * want to be prolific, non-ambiguous names
      */
     KnownEntityProvider knownEntityProvider = new FileKnownEntityProvider();
     knownEntityProvider.setParameters(params);
-    /**
+
+    /*
      * ModelGenerationValidators try to weed out bad hits by the iterations of
      * the name finder. Since this is a recursive process, with each iteration
      * the namefinder will get more and more greedy if bad entities are allowed
@@ -98,14 +101,15 @@ public class DefaultModelBuilderUtil {
      */
     ModelGenerationValidator validator = new FileModelValidatorImpl();
     validator.setParameters(params);
-    /**
+
+    /*
      * Modelable's write and read the annotated sentences, as well as create and
      * write the NER models
      */
     Modelable modelable = new GenericModelableImpl();
     modelable.setParameters(params);
 
-    /**
+    /*
      * the modelGenerator actually runs the process with a set number of
      * iterations... could be better by actually calculating the diff between
      * runs and stopping based on a thresh, but for extrememly large sentence
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/KnownEntityProvider.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/KnownEntityProvider.java
index 694250e..5135c2b 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/KnownEntityProvider.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/KnownEntityProvider.java
@@ -15,6 +15,8 @@
  */
 package opennlp.addons.modelbuilder;
 
+import opennlp.addons.modelbuilder.impls.BaseModelBuilderParams;
+
 import java.util.Set;
 
 
@@ -23,23 +25,24 @@ import java.util.Set;
  *
 Supplies a list of known entities (a list of names or locations)
  */
-public interface KnownEntityProvider extends ModelParameter{
+public interface KnownEntityProvider extends ModelParameter<BaseModelBuilderParams> {
+
   /**
- * returns a list of known non ambiguous entities.
- * @return a set of entities
- */
+   * Returns a list of known, non-ambiguous entities.
+   * @return a set of entities
+   */
   Set<String> getKnownEntities();
-/**
- * adds to the set of known entities. Overriding classes should hold this list in a class level set.
- * @param unambiguousEntity 
- */
+
+  /**
+   * Adds to the set of known entities. Overriding classes should hold this list in a class-level set.
+   * @param unambiguousEntity the unambiguous entity name to add
+   */
   void addKnownEntity(String unambiguousEntity);
-/**
- * defines the type of entity that the set contains, ie person, location, organization.
- * @return 
- */
-  String getKnownEntitiesType();
-  
-  
   
+  /**
+   * Defines the type of entity that the set contains, i.e. person, location, or organization.
+   * @return the type of the known entities
+   */
+  String getKnownEntitiesType();
+
 }
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/ModelGenerationValidator.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/ModelGenerationValidator.java
index 4bd5fe2..d05a1d5 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/ModelGenerationValidator.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/ModelGenerationValidator.java
@@ -15,13 +15,15 @@
  */
 package opennlp.addons.modelbuilder;
 
+import opennlp.addons.modelbuilder.impls.BaseModelBuilderParams;
+
 import java.util.Collection;
 
 /**
  *
 Validates results from the iterative namefinding
  */
-public interface ModelGenerationValidator extends ModelParameter {
+public interface ModelGenerationValidator extends ModelParameter<BaseModelBuilderParams> {
 
   Boolean validSentence(String sentence);
 
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/Modelable.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/Modelable.java
index 80b0170..8d2e06c 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/Modelable.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/Modelable.java
@@ -16,13 +16,14 @@
 package opennlp.addons.modelbuilder;
 
 import java.util.Set;
+
+import opennlp.addons.modelbuilder.impls.BaseModelBuilderParams;
 import opennlp.tools.namefind.TokenNameFinderModel;
 
 /**
  *
  */
-public interface Modelable extends ModelParameter{
-
+public interface Modelable extends ModelParameter<BaseModelBuilderParams> {
 
 
   String annotate(String sentence, String namedEntity, String entityType);
@@ -40,6 +41,5 @@ public interface Modelable extends ModelParameter{
   TokenNameFinderModel getModel();
 
   String[] tokenizeSentenceToWords(String sentence);
-  
 
 }
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileKnownEntityProvider.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileKnownEntityProvider.java
index 0de043c..b8adfee 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileKnownEntityProvider.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileKnownEntityProvider.java
@@ -33,7 +33,7 @@ import opennlp.addons.modelbuilder.KnownEntityProvider;
  */
 public class FileKnownEntityProvider implements KnownEntityProvider {
  
-  Set<String> knownEntities = new HashSet<String>();
+  Set<String> knownEntities = new HashSet<>();
   BaseModelBuilderParams params;
   @Override
   public Set<String> getKnownEntities() {
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileModelValidatorImpl.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileModelValidatorImpl.java
index ea4bb05..e40779a 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileModelValidatorImpl.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileModelValidatorImpl.java
@@ -17,16 +17,15 @@ package opennlp.addons.modelbuilder.impls;
 
 import java.io.BufferedReader;
 import java.io.FileInputStream;
-import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.io.InputStream;
 import java.io.InputStreamReader;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.Set;
 import java.util.logging.Level;
 import java.util.logging.Logger;
+
 import opennlp.addons.modelbuilder.ModelGenerationValidator;
 
 /**
@@ -34,7 +33,7 @@ import opennlp.addons.modelbuilder.ModelGenerationValidator;
  */
 public class FileModelValidatorImpl implements ModelGenerationValidator {
 
-  private Set<String> badentities = new HashSet<String>();
+  private final Set<String> badentities = new HashSet<>();
   BaseModelBuilderParams params;
 
   @Override
@@ -72,21 +71,12 @@ public class FileModelValidatorImpl implements ModelGenerationValidator {
       return badentities;
     }
     if (!badentities.isEmpty()) {
-      try {
-        InputStream fis;
-        BufferedReader br;
+      try (BufferedReader br = new BufferedReader(new InputStreamReader(
+              new FileInputStream(params.getKnownEntityBlacklist()), StandardCharsets.UTF_8))){
         String line;
-
-        fis = new FileInputStream(params.getKnownEntityBlacklist());
-        br = new BufferedReader(new InputStreamReader(fis, Charset.forName("UTF-8")));
         while ((line = br.readLine()) != null) {
           badentities.add(line);
         }
-        br.close();
-        br = null;
-        fis = null;
-      } catch (FileNotFoundException ex) {
-        Logger.getLogger(FileKnownEntityProvider.class.getName()).log(Level.SEVERE, null, ex);
       } catch (IOException ex) {
         Logger.getLogger(FileKnownEntityProvider.class.getName()).log(Level.SEVERE, null, ex);
       }
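The hunk above replaces manual `close()` calls (and the now-redundant `FileNotFoundException` catch) with a try-with-resources block, which closes the reader even when `readLine()` throws. A minimal, self-contained sketch of the same pattern (class and file names here are illustrative, not part of the repository):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public class ReadLinesDemo {

  // Reads every line of a UTF-8 file into a set. The BufferedReader is
  // closed automatically by try-with-resources, even on an I/O error,
  // so no explicit close() or finally block is needed.
  static Set<String> readLines(Path file) throws IOException {
    Set<String> lines = new HashSet<>();
    try (BufferedReader br = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
      String line;
      while ((line = br.readLine()) != null) {
        lines.add(line);
      }
    }
    return lines;
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("blacklist", ".txt");
    Files.write(tmp, "Foo\nBar\n".getBytes(StandardCharsets.UTF_8));
    System.out.println(readLines(tmp));
  }
}
```

Note that `FileNotFoundException` is a subclass of `IOException`, which is why the dedicated catch block could be dropped without changing behavior.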
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileSentenceProvider.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileSentenceProvider.java
index bea55f5..9f1f5e1 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileSentenceProvider.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/FileSentenceProvider.java
@@ -17,15 +17,14 @@ package opennlp.addons.modelbuilder.impls;
 
 import java.io.BufferedReader;
 import java.io.FileInputStream;
-import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.io.InputStream;
 import java.io.InputStreamReader;
-import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 import java.util.HashSet;
 import java.util.Set;
 import java.util.logging.Level;
 import java.util.logging.Logger;
+
 import opennlp.addons.modelbuilder.SentenceProvider;
 
 /**
@@ -33,30 +32,18 @@ import opennlp.addons.modelbuilder.SentenceProvider;
  */
 public class FileSentenceProvider implements SentenceProvider {
 
-  BaseModelBuilderParams params ;
-  Set<String> sentences = new HashSet<String>();
+  BaseModelBuilderParams params;
+  private final Set<String> sentences = new HashSet<>();
 
+  @Override
   public Set<String> getSentences() {
      if (sentences.isEmpty()) {
-      try {
-        InputStream fis;
-        BufferedReader br;
+      try (BufferedReader br = new BufferedReader(new InputStreamReader(
+              new FileInputStream(params.getSentenceFile()), StandardCharsets.UTF_8))){
         String line;
-
-        fis = new FileInputStream(params.getSentenceFile());
-        br = new BufferedReader(new InputStreamReader(fis, Charset.forName("UTF-8")));
-        int i=0;
         while ((line = br.readLine()) != null) {
-         
           sentences.add(line);
         }
-
-        // Done with the file
-        br.close();
-        br = null;
-        fis = null;
-      } catch (FileNotFoundException ex) {
-        Logger.getLogger(FileKnownEntityProvider.class.getName()).log(Level.SEVERE, null, ex);
       } catch (IOException ex) {
         Logger.getLogger(FileKnownEntityProvider.class.getName()).log(Level.SEVERE, null, ex);
       }
diff --git a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/GenericModelableImpl.java b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/GenericModelableImpl.java
index ccfddcb..8c1f754 100644
--- a/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/GenericModelableImpl.java
+++ b/modelbuilder-addon/src/main/java/opennlp/addons/modelbuilder/impls/GenericModelableImpl.java
@@ -22,12 +22,14 @@ import java.io.FileWriter;
 import java.io.BufferedWriter;
 import java.io.IOException;
 import java.io.OutputStream;
-import java.nio.charset.Charset;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
 import java.util.HashSet;
 import java.util.Set;
 import java.util.logging.Level;
 import java.util.logging.Logger;
 import opennlp.addons.modelbuilder.Modelable;
+import opennlp.tools.namefind.TokenNameFinderFactory;
 import opennlp.tools.util.MarkableFileInputStreamFactory;
 
 import opennlp.tools.namefind.NameFinderME;
@@ -37,13 +39,14 @@ import opennlp.tools.namefind.TokenNameFinderModel;
 
 import opennlp.tools.util.ObjectStream;
 import opennlp.tools.util.PlainTextByLineStream;
+import opennlp.tools.util.TrainingParameters;
 
 /**
  * Creates annotations, writes annotations to file, and creates a model and writes to a file
  */
 public class GenericModelableImpl implements Modelable {
 
-  private Set<String> annotatedSentences = new HashSet<String>();
+  private Set<String> annotatedSentences = new HashSet<>();
   BaseModelBuilderParams params;
 
   @Override
@@ -53,21 +56,15 @@ public class GenericModelableImpl implements Modelable {
 
   @Override
   public String annotate(String sentence, String namedEntity, String entityType) {
-    String annotation = sentence.replace(namedEntity, " <START:" + entityType + "> " + namedEntity + " <END> ");
-    return annotation;
+    return sentence.replace(namedEntity, " <START:" + entityType + "> " + namedEntity + " <END> ");
   }
 
   @Override
   public void writeAnnotatedSentences() {
-    try {
-
-      FileWriter writer = new FileWriter(params.getAnnotatedTrainingDataFile(), false);
-      BufferedWriter bw = new BufferedWriter(writer);
-
+    try (Writer bw = new BufferedWriter(new FileWriter(params.getAnnotatedTrainingDataFile(), false))) {
       for (String s : annotatedSentences) {
         bw.write(s.replace("\n", " ").trim() + "\n");
       }
-      bw.close();
     } catch (IOException ex) {
       ex.printStackTrace();
     }
@@ -90,31 +87,24 @@ public class GenericModelableImpl implements Modelable {
 
   @Override
   public void buildModel(String entityType) {
-    try {
+    try (ObjectStream<NameSample> sampleStream = new NameSampleDataStream(new PlainTextByLineStream(
+            new MarkableFileInputStreamFactory(params.getAnnotatedTrainingDataFile()), StandardCharsets.UTF_8));
+         OutputStream modelOut = new BufferedOutputStream(new FileOutputStream(params.getModelFile()))) {
       System.out.println("\tBuilding Model using " + annotatedSentences.size() + " annotations");
       System.out.println("\t\treading training data...");
-      Charset charset = Charset.forName("UTF-8");
-      ObjectStream<String> lineStream =
-              new PlainTextByLineStream(new MarkableFileInputStreamFactory(params.getAnnotatedTrainingDataFile()), charset);
-      ObjectStream<NameSample> sampleStream = new NameSampleDataStream(lineStream);
-
       TokenNameFinderModel model;
-      model = NameFinderME.train("en", entityType, sampleStream, null);
+      model = NameFinderME.train("en", entityType, sampleStream,
+                TrainingParameters.defaultParams(), new TokenNameFinderFactory());
       sampleStream.close();
-      OutputStream modelOut = new BufferedOutputStream(new FileOutputStream(params.getModelFile()));
       model.serialize(modelOut);
-      if (modelOut != null) {
-        modelOut.close();
-      }
       System.out.println("\tmodel generated");
     } catch (Exception e) {
+      e.printStackTrace();
     }
   }
 
   @Override
   public TokenNameFinderModel getModel() {
-
-
     TokenNameFinderModel nerModel = null;
     try {
       nerModel = new TokenNameFinderModel(new FileInputStream(params.getModelFile()));
@@ -126,7 +116,6 @@ public class GenericModelableImpl implements Modelable {
 
   @Override
   public String[] tokenizeSentenceToWords(String sentence) {
-    return sentence.split(" ");
-
+    return sentence.split("\\s+");
   }
 }
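Two of the smaller changes in `GenericModelableImpl` are easy to miss: `annotate` now returns the replacement expression directly, and `tokenizeSentenceToWords` splits on the regex `\s+` instead of a single space, so runs of whitespace no longer produce empty tokens. A self-contained sketch of both behaviors (the class name is illustrative):

```java
public class AnnotateDemo {

  // Wraps each occurrence of the entity in OpenNLP-style name markup,
  // mirroring GenericModelableImpl#annotate.
  static String annotate(String sentence, String namedEntity, String entityType) {
    return sentence.replace(namedEntity, " <START:" + entityType + "> " + namedEntity + " <END> ");
  }

  // Splits on runs of whitespace. The previous split(" ") yielded empty
  // tokens for consecutive spaces; split("\\s+") does not.
  static String[] tokenize(String sentence) {
    return sentence.split("\\s+");
  }

  public static void main(String[] args) {
    System.out.println(annotate("Barack Obama spoke.", "Barack Obama", "person"));
    System.out.println(tokenize("two  spaces here").length); // 3, not 4
  }
}
```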
diff --git a/modelbuilder-addon/src/test/java/modelbuilder/AppTest.java b/modelbuilder-addon/src/test/java/modelbuilder/AppTest.java
deleted file mode 100644
index 2b04731..0000000
--- a/modelbuilder-addon/src/test/java/modelbuilder/AppTest.java
+++ /dev/null
@@ -1,38 +0,0 @@
-package modelbuilder;
-
-import junit.framework.Test;
-import junit.framework.TestCase;
-import junit.framework.TestSuite;
-
-/**
- * Unit test for simple App.
- */
-public class AppTest 
-    extends TestCase
-{
-    /**
-     * Create the test case
-     *
-     * @param testName name of the test case
-     */
-    public AppTest( String testName )
-    {
-        super( testName );
-    }
-
-    /**
-     * @return the suite of tests being tested
-     */
-    public static Test suite()
-    {
-        return new TestSuite( AppTest.class );
-    }
-
-    /**
-     * Rigourous Test :-)
-     */
-    public void testApp()
-    {
-        assertTrue( true );
-    }
-}
diff --git a/morfologik-addon/pom.xml b/morfologik-addon/pom.xml
index f6eef6a..632d0a7 100644
--- a/morfologik-addon/pom.xml
+++ b/morfologik-addon/pom.xml
@@ -1,23 +1,98 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+-->
+
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 	<modelVersion>4.0.0</modelVersion>
+	<parent>
+		<groupId>org.apache.opennlp</groupId>
+		<artifactId>opennlp-addons</artifactId>
+		<version>2.2.1-SNAPSHOT</version>
+	</parent>
 
-	<groupId>org.apache.opennlp</groupId>
 	<artifactId>morfologik-addon</artifactId>
-	<version>1.0-SNAPSHOT</version>
+	<version>2.2.1-SNAPSHOT</version>
 	<packaging>jar</packaging>
-	<name>Morfologik Addon</name>
+	<name>Apache OpenNLP Morfologik Addon</name>
+
+	<dependencies>
+		<dependency>
+			<groupId>org.apache.opennlp</groupId>
+			<artifactId>opennlp-tools</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.carrot2</groupId>
+			<artifactId>morfologik-stemming</artifactId>
+			<version>2.1.0</version>
+			<scope>compile</scope>
+		</dependency>
+		
+		<dependency>
+			<groupId>org.carrot2</groupId>
+			<artifactId>morfologik-tools</artifactId>
+			<version>2.1.0</version>
+			<scope>compile</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-api</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-engine</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.junit.jupiter</groupId>
+			<artifactId>junit-jupiter-params</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.logging.log4j</groupId>
+			<artifactId>log4j-api</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.logging.log4j</groupId>
+			<artifactId>log4j-core</artifactId>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.logging.log4j</groupId>
+			<artifactId>log4j-slf4j-impl</artifactId>
+		</dependency>
+	</dependencies>
 
-	<url>http://maven.apache.org</url>
 	<build>
 		<plugins>
 			<plugin>
 				<groupId>org.apache.maven.plugins</groupId>
 				<artifactId>maven-compiler-plugin</artifactId>
-				<version>2.3.2</version>
 				<configuration>
-					<source>1.7</source>
-					<target>1.7</target>
+					<source>${maven.compiler.source}</source>
+					<target>${maven.compiler.target}</target>
+					<compilerArgument>-Xlint</compilerArgument>
 				</configuration>
 			</plugin>
 			<plugin>
@@ -38,72 +113,40 @@
 							     many file have more than 100 chars.
 							     Right now only javadoc files are too long.
 							 -->
-							 <tarLongFileMode>gnu</tarLongFileMode>
-							 
-							 <finalName>apache-opennlp-morfologik-addon-${project.version}</finalName>
+							<tarLongFileMode>gnu</tarLongFileMode>
+
+							<finalName>apache-opennlp-morfologik-addon-${project.version}</finalName>
+						</configuration>
+					</execution>
+				</executions>
+			</plugin>
+			<plugin>
+				<artifactId>maven-antrun-plugin</artifactId>
+				<version>1.6</version>
+				<executions>
+					<execution>
+						<id>generate checksums for binary artifacts</id>
+						<goals><goal>run</goal></goals>
+						<phase>verify</phase>
+						<configuration>
+							<target>
+								<checksum algorithm="sha1" format="MD5SUM">
+									<fileset dir="${project.build.directory}">
+										<include name="*.zip" />
+										<include name="*.gz" />
+									</fileset>
+								</checksum>
+								<checksum algorithm="md5" format="MD5SUM">
+									<fileset dir="${project.build.directory}">
+										<include name="*.zip" />
+										<include name="*.gz" />
+									</fileset>
+								</checksum>
+							</target>
 						</configuration>
 					</execution>
 				</executions>
 			</plugin>
-			<plugin> 
-	        <artifactId>maven-antrun-plugin</artifactId> 
-	        <version>1.6</version> 
-	        <executions> 
-	          <execution> 
-	            <id>generate checksums for binary artifacts</id> 
-	            <goals><goal>run</goal></goals> 
-	            <phase>verify</phase> 
-	            <configuration> 
-	              <target> 
-	                <checksum algorithm="sha1" format="MD5SUM"> 
-	                  <fileset dir="${project.build.directory}"> 
-	                    <include name="*.zip" /> 
-	                    <include name="*.gz" /> 
-	                  </fileset> 
-	                </checksum> 
-	                <checksum algorithm="md5" format="MD5SUM"> 
-	                  <fileset dir="${project.build.directory}"> 
-	                    <include name="*.zip" /> 
-	                    <include name="*.gz" /> 
-	                  </fileset> 
-	                </checksum> 
-	              </target> 
-	            </configuration> 
-	          </execution> 
-	        </executions> 
-	      </plugin>
 		</plugins>
 	</build>
-	<properties>
-		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-	</properties>
-
-	<dependencies>
-		<dependency>
-			<groupId>org.carrot2</groupId>
-			<artifactId>morfologik-stemming</artifactId>
-			<version>2.1.0</version>
-			<scope>compile</scope>
-		</dependency>
-		<dependency>
-			<groupId>org.carrot2</groupId>
-			<artifactId>morfologik-tools</artifactId>
-			<version>2.1.0</version>
-			<scope>compile</scope>
-		</dependency>
-
-		<dependency>
-			<groupId>org.apache.opennlp</groupId>
-			<artifactId>opennlp-tools</artifactId>
-			<version>1.6.0</version>
-		</dependency>
-
-		<dependency>
-			<groupId>junit</groupId>
-			<artifactId>junit</artifactId>
-			<version>4.13.1</version>
-			<scope>test</scope>
-		</dependency>
-
-	</dependencies>
 </project>
diff --git a/morfologik-addon/src/main/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizer.java b/morfologik-addon/src/main/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizer.java
index 2090ce5..650f7a6 100644
--- a/morfologik-addon/src/main/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizer.java
+++ b/morfologik-addon/src/main/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizer.java
@@ -30,24 +30,22 @@ import morfologik.stemming.Dictionary;
 import morfologik.stemming.DictionaryLookup;
 import morfologik.stemming.IStemmer;
 import morfologik.stemming.WordData;
-import opennlp.tools.lemmatizer.DictionaryLemmatizer;
+import opennlp.tools.lemmatizer.Lemmatizer;
 
-public class MorfologikLemmatizer implements DictionaryLemmatizer {
+public class MorfologikLemmatizer implements Lemmatizer {
 
   private IStemmer dictLookup;
-  public final Set<String> constantTags = new HashSet<String>(Arrays.asList(
-      "NNP", "NP00000"));
+  public final Set<String> constantTags = new HashSet<>(Arrays.asList("NNP", "NP00000"));
 
-  public MorfologikLemmatizer(Path dictionaryPath) throws IllegalArgumentException,
-      IOException {
+  public MorfologikLemmatizer(Path dictionaryPath) throws IllegalArgumentException, IOException {
     dictLookup = new DictionaryLookup(Dictionary.read(dictionaryPath));
   }
 
   private HashMap<List<String>, String> getLemmaTagsDict(String word) {
     List<WordData> wdList = dictLookup.lookup(word);
-    HashMap<List<String>, String> dictMap = new HashMap<List<String>, String>();
+    HashMap<List<String>, String> dictMap = new HashMap<>();
     for (WordData wd : wdList) {
-      List<String> wordLemmaTags = new ArrayList<String>();
+      List<String> wordLemmaTags = new ArrayList<>();
       wordLemmaTags.add(word);
       wordLemmaTags.add(wd.getTag().toString());
       dictMap.put(wordLemmaTags, wd.getStem().toString());
@@ -56,7 +54,7 @@ public class MorfologikLemmatizer implements DictionaryLemmatizer {
   }
 
   private List<String> getDictKeys(String word, String postag) {
-    List<String> keys = new ArrayList<String>();
+    List<String> keys = new ArrayList<>();
     if (constantTags.contains(postag)) {
       keys.addAll(Arrays.asList(word, postag));
     } else {
@@ -66,7 +64,7 @@ public class MorfologikLemmatizer implements DictionaryLemmatizer {
   }
 
   private HashMap<List<String>, String> getDictMap(String word, String postag) {
-    HashMap<List<String>, String> dictMap = new HashMap<List<String>, String>();
+    HashMap<List<String>, String> dictMap;
 
     if (constantTags.contains(postag)) {
       dictMap = this.getLemmaTagsDict(word);
@@ -77,7 +75,7 @@ public class MorfologikLemmatizer implements DictionaryLemmatizer {
   }
 
   public String lemmatize(String word, String postag) {
-    String lemma = null;
+    String lemma;
     List<String> keys = this.getDictKeys(word, postag);
     HashMap<List<String>, String> dictMap = this.getDictMap(word, postag);
     // lookup lemma as value of the map
@@ -86,11 +84,25 @@ public class MorfologikLemmatizer implements DictionaryLemmatizer {
       lemma = keyValue;
     } else if (keyValue == null && constantTags.contains(postag)) {
       lemma = word;
-    } else if (keyValue == null && word.toUpperCase() == word) {
+    } else if (keyValue == null && word.toUpperCase().equals(word)) {
       lemma = word;
     } else {
       lemma = word.toLowerCase();
     }
     return lemma;
   }
+
+  @Override
+  public String[] lemmatize(final String[] tokens, final String[] postags) {
+    List<String> lemmas = new ArrayList<>();
+    for (int i = 0; i < tokens.length; i++) {
+      lemmas.add(this.lemmatize(tokens[i], postags[i]));
+    }
+    return lemmas.toArray(new String[0]);
+  }
+
+  @Override
+  public List<List<String>> lemmatize(final List<String> tokens, final List<String> posTags) {
+    throw new UnsupportedOperationException("Method not implemented here!");
+  }
 }
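The change from `word.toUpperCase() == word` to `word.toUpperCase().equals(word)` fixes a classic Java pitfall: `==` compares object references, and `toUpperCase()` is not guaranteed to return the same `String` instance, so the reference check is unreliable for detecting all-caps tokens. A minimal sketch of the corrected check (the class name is illustrative):

```java
public class CaseCheckDemo {

  // Content comparison, as in the fixed lemmatize(): true exactly when
  // upper-casing the word leaves it unchanged, i.e. it has no lowercase letters.
  static boolean isAllUpperCase(String word) {
    return word.toUpperCase().equals(word);
  }

  public static void main(String[] args) {
    System.out.println(isAllUpperCase("USA"));  // true
    System.out.println(isAllUpperCase("casa")); // false
  }
}
```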
diff --git a/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikPOSTaggerFactory.java b/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikPOSTaggerFactory.java
index 93d6c61..c3501cf 100644
--- a/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikPOSTaggerFactory.java
+++ b/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikPOSTaggerFactory.java
@@ -28,10 +28,8 @@ import java.nio.file.Path;
 import java.util.Map;
 
 import morfologik.stemming.DictionaryMetadata;
-import opennlp.tools.dictionary.Dictionary;
 import opennlp.tools.postag.POSTaggerFactory;
 import opennlp.tools.postag.TagDictionary;
-import opennlp.tools.util.InvalidFormatException;
 import opennlp.tools.util.model.ArtifactSerializer;
 import opennlp.tools.util.model.ModelUtil;
 
@@ -52,9 +50,9 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
 
   public MorfologikPOSTaggerFactory() {
   }
-  
-  public TagDictionary createTagDictionary(File dictionary)
-      throws InvalidFormatException, FileNotFoundException, IOException {
+
+  @Override
+  public TagDictionary createTagDictionary(File dictionary) throws IOException {
     
     if(!dictionary.canRead()) {
       throw new FileNotFoundException("Could not read dictionary: " + dictionary.getAbsolutePath());
@@ -72,13 +70,6 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
     return createMorfologikDictionary(dictData, dictInfo);
     
   }
-  
-
-  @Override
-  protected void init(Dictionary ngramDictionary, TagDictionary posDictionary) {
-    super.init(ngramDictionary, null);
-    this.dict = posDictionary;
-  }
 
   @Override
   public TagDictionary getTagDictionary() {
@@ -87,10 +78,8 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
       if (artifactProvider != null) {
         Object obj = artifactProvider.getArtifact(MORFOLOGIK_POSDICT);
         if (obj != null) {
-          byte[] data = (byte[]) artifactProvider
-              .getArtifact(MORFOLOGIK_POSDICT);
-          byte[] info = (byte[]) artifactProvider
-              .getArtifact(MORFOLOGIK_DICT_INFO);
+          byte[] data = artifactProvider.getArtifact(MORFOLOGIK_POSDICT);
+          byte[] info = artifactProvider.getArtifact(MORFOLOGIK_DICT_INFO);
 
           try {
             this.dict = createMorfologikDictionary(data, info);
@@ -120,8 +109,7 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
   }
 
   @Override
-  public TagDictionary createTagDictionary(InputStream in)
-      throws InvalidFormatException, IOException {
+  public TagDictionary createTagDictionary(InputStream in) throws IOException {
     throw new UnsupportedOperationException(
         "Morfologik POS Tagger factory does not support this operation");
   }
@@ -129,8 +117,7 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
   @Override
   @SuppressWarnings("rawtypes")
   public Map<String, ArtifactSerializer> createArtifactSerializersMap() {
-    Map<String, ArtifactSerializer> serializers = super
-        .createArtifactSerializersMap();
+    Map<String, ArtifactSerializer> serializers = super.createArtifactSerializersMap();
 
     serializers.put(MORFOLOGIK_POSDICT_SUF, new ByteArraySerializer());
     serializers.put(MORFOLOGIK_DICT_INFO_SUF, new ByteArraySerializer());
@@ -149,19 +136,18 @@ public class MorfologikPOSTaggerFactory extends POSTaggerFactory {
   private TagDictionary createMorfologikDictionary(byte[] data, byte[] info)
       throws IOException {
     morfologik.stemming.Dictionary dict = morfologik.stemming.Dictionary
-        .read(new ByteArrayInputStream(data), new ByteArrayInputStream(
-            info));
+        .read(new ByteArrayInputStream(data), new ByteArrayInputStream(info));
     return new MorfologikTagDictionary(dict);
   }
 
   static class ByteArraySerializer implements ArtifactSerializer<byte[]> {
 
-    public byte[] create(InputStream in) throws IOException,
-        InvalidFormatException {
-
+    @Override
+    public byte[] create(InputStream in) throws IOException {
       return ModelUtil.read(in);
     }
 
+    @Override
     public void serialize(byte[] artifact, OutputStream out) throws IOException {
       out.write(artifact);
     }
diff --git a/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikTagDictionary.java b/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikTagDictionary.java
index b34ca2b..2c723ab 100644
--- a/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikTagDictionary.java
+++ b/morfologik-addon/src/main/java/opennlp/morfologik/tagdict/MorfologikTagDictionary.java
@@ -28,13 +28,12 @@ import morfologik.stemming.WordData;
 import opennlp.tools.postag.TagDictionary;
 
 /**
- * A POS Tagger dictionary implementation based on Morfologik binary
- * dictionaries
+ * A POS Tagger dictionary implementation based on Morfologik binary dictionaries.
  */
 public class MorfologikTagDictionary implements TagDictionary {
 
-  private IStemmer dictLookup;
-  private boolean isCaseSensitive;
+  private final IStemmer dictLookup;
+  private final boolean isCaseSensitive;
 
   /**
    * Creates a case sensitive {@link MorfologikTagDictionary}
@@ -46,8 +45,7 @@ public class MorfologikTagDictionary implements TagDictionary {
    * @throws IOException
    *           could not read dictionary from dictURL
    */
-  public MorfologikTagDictionary(Dictionary dict)
-      throws IllegalArgumentException, IOException {
+  public MorfologikTagDictionary(Dictionary dict) throws IllegalArgumentException {
     this(dict, true);
   }
 
@@ -63,8 +61,7 @@ public class MorfologikTagDictionary implements TagDictionary {
    * @throws IOException
    *           could not read dictionary from dictURL
    */
-  public MorfologikTagDictionary(Dictionary dict, boolean caseSensitive)
-      throws IllegalArgumentException, IOException {
+  public MorfologikTagDictionary(Dictionary dict, boolean caseSensitive) throws IllegalArgumentException {
     this.dictLookup = new DictionaryLookup(dict);
     this.isCaseSensitive = caseSensitive;
   }
@@ -77,7 +74,7 @@ public class MorfologikTagDictionary implements TagDictionary {
 
     List<WordData> data = dictLookup.lookup(word);
     if (data != null && data.size() > 0) {
-      List<String> tags = new ArrayList<String>(data.size());
+      List<String> tags = new ArrayList<>(data.size());
       for (int i = 0; i < data.size(); i++) {
         tags.add(data.get(i).getTag().toString());
       }
@@ -87,4 +84,9 @@ public class MorfologikTagDictionary implements TagDictionary {
     }
     return null;
   }
+
+  @Override
+  public boolean isCaseSensitive() {
+    return isCaseSensitive;
+  }
 }
diff --git a/morfologik-addon/src/test/java/opennlp/morfologik/builder/POSDictionayBuilderTest.java b/morfologik-addon/src/test/java/opennlp/morfologik/builder/POSDictionayBuilderTest.java
index 0a7ba48..ae2bb46 100644
--- a/morfologik-addon/src/test/java/opennlp/morfologik/builder/POSDictionayBuilderTest.java
+++ b/morfologik-addon/src/test/java/opennlp/morfologik/builder/POSDictionayBuilderTest.java
@@ -22,19 +22,19 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.StandardCopyOption;
 
-import junit.framework.TestCase;
 import morfologik.stemming.DictionaryMetadata;
 import opennlp.morfologik.lemmatizer.MorfologikLemmatizer;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
 
-public class POSDictionayBuilderTest extends TestCase {
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+
+public class POSDictionayBuilderTest {
 
   @Test
   public void testBuildDictionary() throws Exception {
     
     Path output = createMorfologikDictionary();
-
     MorfologikLemmatizer ml = new MorfologikLemmatizer(output);
 
     assertNotNull(ml);
diff --git a/morfologik-addon/src/test/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizerTest.java b/morfologik-addon/src/test/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizerTest.java
index 6b7525e..f1212cc 100644
--- a/morfologik-addon/src/test/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizerTest.java
+++ b/morfologik-addon/src/test/java/opennlp/morfologik/lemmatizer/MorfologikLemmatizerTest.java
@@ -1,35 +1,45 @@
-package opennlp.morfologik.lemmatizer;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 
-import static org.junit.Assert.assertEquals;
+package opennlp.morfologik.lemmatizer;
 
 import java.nio.file.Path;
 
 import opennlp.morfologik.builder.POSDictionayBuilderTest;
-import opennlp.tools.lemmatizer.DictionaryLemmatizer;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
 
 public class MorfologikLemmatizerTest {
 
   @Test
   public void testLemmatizeInsensitive() throws Exception {
-    DictionaryLemmatizer dict = createDictionary(false);
+    MorfologikLemmatizer dict = createDictionary(false);
 
     assertEquals("casar", dict.lemmatize("casa", "V"));
     assertEquals("casa", dict.lemmatize("casa", "NOUN"));
-
     assertEquals("casa", dict.lemmatize("Casa", "PROP"));
 
   }
 
-  private MorfologikLemmatizer createDictionary(boolean caseSensitive)
-      throws Exception {
-
+  private MorfologikLemmatizer createDictionary(boolean caseSensitive) throws Exception {
     Path output = POSDictionayBuilderTest.createMorfologikDictionary();
-
-    MorfologikLemmatizer ml = new MorfologikLemmatizer(output);
-
-    return ml;
+    return new MorfologikLemmatizer(output);
   }
 
 }
diff --git a/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/MorfologikTagDictionaryTest.java b/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/MorfologikTagDictionaryTest.java
index c6c9e04..d6bc2fe 100644
--- a/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/MorfologikTagDictionaryTest.java
+++ b/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/MorfologikTagDictionaryTest.java
@@ -1,7 +1,21 @@
-package opennlp.morfologik.tagdict;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
+package opennlp.morfologik.tagdict;
 
 import java.util.Arrays;
 import java.util.List;
@@ -10,7 +24,10 @@ import morfologik.stemming.Dictionary;
 import opennlp.morfologik.builder.POSDictionayBuilderTest;
 import opennlp.tools.postag.TagDictionary;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertTrue;
 
 public class MorfologikTagDictionaryTest {
 
@@ -21,7 +38,6 @@ public class MorfologikTagDictionaryTest {
     List<String> tags = Arrays.asList(dict.getTags("carro"));
     assertEquals(1, tags.size());
     assertTrue(tags.contains("NOUN"));
-
   }
 
   @Test
@@ -66,13 +82,10 @@ public class MorfologikTagDictionaryTest {
     return this.createDictionary(caseSensitive, null);
   }
 
-  private MorfologikTagDictionary createDictionary(boolean caseSensitive,
-      List<String> constant) throws Exception {
+  private MorfologikTagDictionary createDictionary(boolean caseSensitive, List<String> constant) throws Exception {
 
     Dictionary dic = Dictionary.read(POSDictionayBuilderTest.createMorfologikDictionary());
-    MorfologikTagDictionary ml = new MorfologikTagDictionary(dic, caseSensitive);
-
-    return ml;
+    return new MorfologikTagDictionary(dic, caseSensitive);
   }
 
 }
diff --git a/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/POSTaggerFactoryTest.java b/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/POSTaggerFactoryTest.java
index 7341a02..602ffc6 100644
--- a/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/POSTaggerFactoryTest.java
+++ b/morfologik-addon/src/test/java/opennlp/morfologik/tagdict/POSTaggerFactoryTest.java
@@ -17,13 +17,11 @@
 
 package opennlp.morfologik.tagdict;
 
-import static org.junit.Assert.*;
-
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
+import java.io.File;
 import java.io.IOException;
-import java.io.InputStream;
-import java.io.InputStreamReader;
+import java.nio.charset.StandardCharsets;
 import java.nio.file.Path;
 
 import opennlp.morfologik.builder.POSDictionayBuilderTest;
@@ -33,31 +31,22 @@ import opennlp.tools.postag.POSTaggerFactory;
 import opennlp.tools.postag.POSTaggerME;
 import opennlp.tools.postag.TagDictionary;
 import opennlp.tools.postag.WordTagSampleStream;
+import opennlp.tools.util.MarkableFileInputStreamFactory;
 import opennlp.tools.util.ObjectStream;
+import opennlp.tools.util.PlainTextByLineStream;
 import opennlp.tools.util.TrainingParameters;
 import opennlp.tools.util.model.ModelType;
 
-import org.junit.Test;
+import org.junit.jupiter.api.Test;
+
+import static org.junit.jupiter.api.Assertions.assertTrue;
+import static org.junit.jupiter.api.Assertions.assertEquals;
 
 /**
  * Tests for the {@link POSTaggerFactory} class.
  */
 public class POSTaggerFactoryTest {
 
-  private static ObjectStream<POSSample> createSampleStream()
-      throws IOException {
-    InputStream in = POSTaggerFactoryTest.class.getClassLoader()
-        .getResourceAsStream("AnnotatedSentences.txt");
-
-    return new WordTagSampleStream((new InputStreamReader(in)));
-  }
-
-  static POSModel trainPOSModel(ModelType type, POSTaggerFactory factory)
-      throws IOException {
-    return POSTaggerME.train("en", createSampleStream(),
-        TrainingParameters.defaultParams(), factory);
-  }
-
   @Test
   public void testPOSTaggerWithCustomFactory() throws Exception {
 
@@ -71,18 +60,28 @@ public class POSTaggerFactoryTest {
     POSTaggerFactory factory = posModel.getFactory();
     assertTrue(factory.getTagDictionary() instanceof MorfologikTagDictionary);
 
-    factory = null;
-    
-    ByteArrayOutputStream out = new ByteArrayOutputStream();
-    posModel.serialize(out);
-    ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());
+    try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
 
-    POSModel fromSerialized = new POSModel(in);
+      posModel.serialize(out);
+      POSModel fromSerialized = new POSModel(new ByteArrayInputStream(out.toByteArray()));
 
-    factory = fromSerialized.getFactory();
-    assertTrue(factory.getTagDictionary() instanceof MorfologikTagDictionary);
-    
-    assertEquals(2, factory.getTagDictionary().getTags("casa").length);
+      factory = fromSerialized.getFactory();
+      assertTrue(factory.getTagDictionary() instanceof MorfologikTagDictionary);
+
+      assertEquals(2, factory.getTagDictionary().getTags("casa").length);
+    }
+  }
+
+  private static ObjectStream<POSSample> createSampleStream() throws IOException {
+    File data = new File("target/test-classes/AnnotatedSentences.txt");
+    return new WordTagSampleStream(new PlainTextByLineStream(
+            new MarkableFileInputStreamFactory(data), StandardCharsets.UTF_8));
+  }
+
+  static POSModel trainPOSModel(ModelType type, POSTaggerFactory factory)
+      throws IOException {
+    return POSTaggerME.train("en", createSampleStream(),
+        TrainingParameters.defaultParams(), factory);
   }
 
 }
\ No newline at end of file
diff --git a/pom.xml b/pom.xml
new file mode 100644
index 0000000..045d0cf
--- /dev/null
+++ b/pom.xml
@@ -0,0 +1,532 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache</groupId>
+        <artifactId>apache</artifactId>
+        <version>29</version>
+        <relativePath />
+    </parent>
+
+    <groupId>org.apache.opennlp</groupId>
+    <artifactId>opennlp-addons</artifactId>
+    <version>2.2.1-SNAPSHOT</version>
+    <packaging>pom</packaging>
+
+    <name>Apache OpenNLP Addons</name>
+
+    <scm>
+        <connection>scm:git:https://github.com/apache/opennlp-addons.git</connection>
+        <developerConnection>scm:git:git@github.com:apache/opennlp-addons.git</developerConnection>
+        <url>https://github.com/apache/opennlp-addons.git</url>
+        <tag>HEAD</tag>
+    </scm>
+
+    <repositories>
+        <repository>
+            <id>apache.snapshots</id>
+            <name>Apache Snapshot Repository</name>
+            <url>https://repository.apache.org/snapshots</url>
+            <snapshots>
+                <enabled>true</enabled>
+            </snapshots>
+        </repository>
+    </repositories>
+
+    <mailingLists>
+        <mailingList>
+            <name>Apache OpenNLP Users</name>
+            <subscribe>users-subscribe@opennlp.apache.org</subscribe>
+            <unsubscribe>users-unsubscribe@opennlp.apache.org</unsubscribe>
+            <post>users@opennlp.apache.org</post>
+            <archive>http://mail-archives.apache.org/mod_mbox/opennlp-users/</archive>
+        </mailingList>
+
+        <mailingList>
+            <name>Apache OpenNLP Developers</name>
+            <subscribe>dev-subscribe@opennlp.apache.org</subscribe>
+            <unsubscribe>dev-unsubscribe@opennlp.apache.org</unsubscribe>
+            <post>dev@opennlp.apache.org</post>
+            <archive>http://mail-archives.apache.org/mod_mbox/opennlp-dev/</archive>
+        </mailingList>
+
+        <mailingList>
+            <name>Apache OpenNLP Commits</name>
+            <subscribe>commits-subscribe@opennlp.apache.org</subscribe>
+            <unsubscribe>commits-unsubscribe@opennlp.apache.org</unsubscribe>
+            <archive>http://mail-archives.apache.org/mod_mbox/opennlp-commits/</archive>
+        </mailingList>
+
+        <mailingList>
+            <name>Apache OpenNLP Issues</name>
+            <subscribe>issues-subscribe@opennlp.apache.org</subscribe>
+            <unsubscribe>issues-unsubscribe@opennlp.apache.org</unsubscribe>
+            <archive>http://mail-archives.apache.org/mod_mbox/opennlp-issues/</archive>
+        </mailingList>
+    </mailingLists>
+
+    <issueManagement>
+        <system>jira</system>
+        <url>https://issues.apache.org/jira/browse/OPENNLP</url>
+    </issueManagement>
+
+    <modules>
+        <module>geoentitylinker-addon</module>
+        <module>japanese-addon</module>
+        <module>jwnl-addon</module>
+        <module>liblinear-addon</module>
+        <module>modelbuilder-addon</module>
+        <module>morfologik-addon</module>
+    </modules>
+
+    <properties>
+        <!-- Build Properties -->
+        <java.version>11</java.version>
+        <maven.version>3.3.9</maven.version>
+        <maven.compiler.source>${java.version}</maven.compiler.source>
+        <maven.compiler.target>${java.version}</maven.compiler.target>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+
+        <opennlp.tools.version>2.2.0</opennlp.tools.version>
+        <opennlp.forkCount>1.0C</opennlp.forkCount>
+
+        <slf4j.version>1.7.36</slf4j.version>
+        <log4j2.version>2.20.0</log4j2.version>
+
+        <junit.version>5.9.2</junit.version>
+
+        <enforcer.plugin.version>3.0.0-M3</enforcer.plugin.version>
+        <checkstyle.plugin.version>3.2.0</checkstyle.plugin.version>
+        <coveralls.maven.plugin>4.3.0</coveralls.maven.plugin>
+        <jacoco.maven.plugin>0.7.9</jacoco.maven.plugin>
+        <maven.surefire.plugin>3.0.0-M5</maven.surefire.plugin>
+        <maven.failsafe.plugin>3.0.0-M5</maven.failsafe.plugin>
+        <mockito.version>3.9.0</mockito.version>
+    </properties>
+
+    <dependencyManagement>
+        <dependencies>
+            <dependency>
+                <artifactId>opennlp-tools</artifactId>
+                <groupId>${project.groupId}</groupId>
+                <version>${opennlp.tools.version}</version>
+            </dependency>
+
+            <dependency>
+                <artifactId>opennlp-tools</artifactId>
+                <groupId>${project.groupId}</groupId>
+                <version>${project.version}</version>
+                <type>test-jar</type>
+            </dependency>
+
+            <dependency>
+                <groupId>org.slf4j</groupId>
+                <artifactId>slf4j-api</artifactId>
+                <version>${slf4j.version}</version>
+            </dependency>
+
+            <dependency>
+                <groupId>commons-lang</groupId>
+                <artifactId>commons-lang</artifactId>
+                <version>2.6</version>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.commons</groupId>
+                <artifactId>commons-lang3</artifactId>
+                <version>3.12.0</version>
+            </dependency>
+            <dependency>
+                <groupId>commons-codec</groupId>
+                <artifactId>commons-codec</artifactId>
+                <version>1.15</version>
+            </dependency>
+            <dependency>
+                <groupId>commons-logging</groupId>
+                <artifactId>commons-logging</artifactId>
+                <version>1.1.1</version>
+            </dependency>
+            <dependency>
+                <groupId>commons-collections</groupId>
+                <artifactId>commons-collections</artifactId>
+                <version>3.2.2</version>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.commons</groupId>
+                <artifactId>commons-math3</artifactId>
+                <version>3.6.1</version>
+            </dependency>
+            <dependency>
+                <groupId>commons-beanutils</groupId>
+                <artifactId>commons-beanutils</artifactId>
+                <version>1.9.4</version>
+            </dependency>
+
+            <dependency>
+                <groupId>org.apache.logging.log4j</groupId>
+                <artifactId>log4j-api</artifactId>
+                <version>${log4j2.version}</version>
+                <scope>test</scope>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.logging.log4j</groupId>
+                <artifactId>log4j-core</artifactId>
+                <version>${log4j2.version}</version>
+                <scope>test</scope>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.logging.log4j</groupId>
+                <artifactId>log4j-slf4j-impl</artifactId>
+                <version>${log4j2.version}</version>
+                <scope>test</scope>
+            </dependency>
+
+            <dependency>
+                <groupId>org.junit.jupiter</groupId>
+                <artifactId>junit-jupiter-api</artifactId>
+                <version>${junit.version}</version>
+                <scope>test</scope>
+            </dependency>
+
+            <dependency>
+                <groupId>org.junit.jupiter</groupId>
+                <artifactId>junit-jupiter-engine</artifactId>
+                <version>${junit.version}</version>
+                <scope>test</scope>
+            </dependency>
+
+            <dependency>
+                <groupId>org.junit.jupiter</groupId>
+                <artifactId>junit-jupiter-params</artifactId>
+                <version>${junit.version}</version>
+                <scope>test</scope>
+            </dependency>
+
+        </dependencies>
+    </dependencyManagement>
+
+    <build>
+        <pluginManagement>
+            <plugins>
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-release-plugin</artifactId>
+                    <configuration>
+                        <useReleaseProfile>false</useReleaseProfile>
+                        <goals>deploy</goals>
+                        <arguments>-Papache-release</arguments>
+                        <mavenExecutorId>forked-path</mavenExecutorId>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-assembly-plugin</artifactId>
+                    <version>3.2.0</version>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.apache.felix</groupId>
+                    <artifactId>maven-bundle-plugin</artifactId>
+                    <version>5.1.4</version>
+                </plugin>
+                <!--
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-checkstyle-plugin</artifactId>
+                    <version>${checkstyle.plugin.version}</version>
+                    <dependencies>
+                        <dependency>
+                            <groupId>com.puppycrawl.tools</groupId>
+                            <artifactId>checkstyle</artifactId>
+                            <version>10.6.0</version>
+                        </dependency>
+                    </dependencies>
+                    <executions>
+                        <execution>
+                            <id>validate</id>
+                            <phase>validate</phase>
+                            <configuration>
+                                <configLocation>checkstyle.xml</configLocation>
+                                <consoleOutput>true</consoleOutput>
+                                <includeTestSourceDirectory>true</includeTestSourceDirectory>
+                                <testSourceDirectories>${project.basedir}/src/test/java</testSourceDirectories>
+                                <violationSeverity>error</violationSeverity>
+                                <failOnViolation>true</failOnViolation>
+                            </configuration>
+                            <goals>
+                                <goal>check</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                </plugin>
+                -->
+                
+                <!-- Coverage analysis for tests -->
+                <plugin>
+                    <groupId>org.jacoco</groupId>
+                    <artifactId>jacoco-maven-plugin</artifactId>
+                    <version>${jacoco.maven.plugin}</version>
+                    <configuration>
+                        <excludes>
+                            <exclude>**/stemmer/*</exclude>
+                            <exclude>**/stemmer/snowball/*</exclude>
+                        </excludes>
+                    </configuration>
+                    <executions>
+                        <execution>
+                            <id>jacoco-prepare-agent</id>
+                            <goals>
+                                <goal>prepare-agent</goal>
+                            </goals>
+                        </execution>
+                        <execution>
+                            <id>jacoco-prepare-agent-integration</id>
+                            <goals>
+                                <goal>prepare-agent-integration</goal>
+                            </goals>
+                        </execution>
+                        <execution>
+                            <id>jacoco-report</id>
+                            <phase>verify</phase>
+                            <goals>
+                                <goal>report</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-surefire-plugin</artifactId>
+                    <version>${maven.surefire.plugin}</version>
+                    <configuration>
+                        <argLine>-Xmx2048m -Dfile.encoding=UTF-8</argLine>
+                        <forkCount>${opennlp.forkCount}</forkCount>
+                        <failIfNoSpecifiedTests>false</failIfNoSpecifiedTests>
+                        <excludes>
+                            <exclude>**/*IT.java</exclude>
+                        </excludes>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>org.apache.maven.plugins</groupId>
+                    <artifactId>maven-failsafe-plugin</artifactId>
+                    <version>${maven.failsafe.plugin}</version>
+                    <executions>
+                        <execution>
+                            <id>integration-test</id>
+                            <goals>
+                                <goal>integration-test</goal>
+                                <goal>verify</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                    <configuration>
+                        <excludes>
+                            <exclude>**/*Test.java</exclude>
+                        </excludes>
+                        <includes>
+                            <include>**/*IT.java</include>
+                        </includes>
+                    </configuration>
+                </plugin>
+
+                <plugin>
+                    <groupId>de.thetaphi</groupId>
+                    <artifactId>forbiddenapis</artifactId>
+                    <version>3.5.1</version>
+                    <configuration>
+                        <failOnUnsupportedJava>false</failOnUnsupportedJava>
+                        <bundledSignatures>
+                            <bundledSignature>jdk-deprecated</bundledSignature>
+                            <bundledSignature>jdk-non-portable</bundledSignature>
+                            <bundledSignature>jdk-internal</bundledSignature>
+                            <!-- don't allow unsafe reflective access: -->
+                            <bundledSignature>jdk-reflection</bundledSignature>
+                        </bundledSignatures>
+                    </configuration>
+                    <executions>
+                        <execution>
+                            <phase>validate</phase>
+                            <goals>
+                                <goal>check</goal>
+                                <goal>testCheck</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                </plugin>
+            </plugins>
+        </pluginManagement>
+
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-compiler-plugin</artifactId>
+                <version>3.10.1</version>
+                <configuration>
+                    <release>${java.version}</release>
+                    <compilerArgument>-Xlint</compilerArgument>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.rat</groupId>
+                <artifactId>apache-rat-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>default-cli</id>
+                        <goals>
+                            <goal>check</goal>
+                        </goals>
+                        <phase>verify</phase>
+                        <configuration>
+                            <excludes>
+                                <exclude>release.properties</exclude>
+                                <exclude>.gitattributes</exclude>
+                                <exclude>.github/*</exclude>
+                                <exclude>**/*.md</exclude>
+                                <!-- We do not ship files from test/resources, so ok to exclude -->
+                                <exclude>**/src/test/resources/**/*.txt</exclude>
+                                <exclude>**/src/test/resources/**/*.csv</exclude>
+                                <exclude>**/src/test/resources/**/*.cxt</exclude>
+                                <exclude>**/src/test/resources/**/*.xml</exclude>
+                                <exclude>**/src/test/resources/**/*.info</exclude>
+                                <!-- These files do not allow a license header -->
+                                <exclude>**/src/main/readme/MORFOLOGIK-LICENSE</exclude>
+                            </excludes>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <artifactId>maven-javadoc-plugin</artifactId>
+                <version>3.5.0</version>
+                <configuration>
+                    <doclint>none</doclint>
+                    <source>${java.version}</source>
+                    <sourcepath>src/main/java</sourcepath>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>create-javadoc-jar</id>
+                        <goals>
+                            <goal>jar</goal>
+                        </goals>
+                        <phase>package</phase>
+                        <configuration>
+                            <show>public</show>
+                            <quiet>false</quiet>
+                            <use>false</use> <!-- Speeds up the build of the javadocs -->
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <artifactId>maven-source-plugin</artifactId>
+                <executions>
+                    <execution>
+                        <id>create-source-jar</id>
+                        <goals>
+                            <goal>jar</goal>
+                        </goals>
+                        <phase>package</phase>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-eclipse-plugin</artifactId>
+                <version>2.10</version>
+                <configuration>
+                    <workspace>../</workspace>
+                    <workspaceCodeStylesURL>http://opennlp.apache.org/code-formatter/OpenNLP-Eclipse-Formatter.xml</workspaceCodeStylesURL>
+                </configuration>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-enforcer-plugin</artifactId>
+                <version>${enforcer.plugin.version}</version>
+                <executions>
+                    <execution>
+                        <id>enforce-java</id>
+                        <phase>validate</phase>
+                        <goals>
+                            <goal>enforce</goal>
+                        </goals>
+                        <configuration>
+                            <rules>
+                                <requireJavaVersion>
+                                    <message>Java 11 or higher is required to compile this module</message>
+                                    <version>[${java.version},)</version>
+                                </requireJavaVersion>
+                                <requireMavenVersion>
+                                    <message>Maven 3.3.9 or higher is required to compile this module</message>
+                                    <version>[${maven.version},)</version>
+                                </requireMavenVersion>
+                            </rules>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+            </plugin>
+
+            <plugin>
+                <groupId>de.thetaphi</groupId>
+                <artifactId>forbiddenapis</artifactId>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-checkstyle-plugin</artifactId>
+                <version>${checkstyle.plugin.version}</version>
+            </plugin>
+        </plugins>
+    </build>
+
+    <profiles>
+        <profile>
+            <id>jacoco</id>
+            <properties>
+                <opennlp.forkCount>1</opennlp.forkCount>
+            </properties>
+            <build>
+                <plugins>
+                    <plugin>
+                        <groupId>org.jacoco</groupId>
+                        <artifactId>jacoco-maven-plugin</artifactId>
+                    </plugin>
+                </plugins>
+            </build>
+        </profile>
+
+    </profiles>
+
+</project>
\ No newline at end of file