Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/13 02:36:19 UTC

[GitHub] sandeep-krishnamurthy closed pull request #9030: Fix Gan

sandeep-krishnamurthy closed pull request #9030: Fix Gan
URL: https://github.com/apache/incubator-mxnet/pull/9030
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index d20a821193..d691ecc427 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -5,7 +5,7 @@
 Gluon is the high-level interface for MXNet. It is more intuitive and easier to use than the lower level interface.
 Gluon supports dynamic (define-by-run) graphs with JIT-compilation to achieve both flexibility and efficiency.
 
-This is a selected subset of Gluon tutorials that explains basic usage of Gluon and fundamental concepts in deep learning. For the comprehensive tutorial on Gluon that covers topics from basic statistics and probability theory to reinforcement learning and recommender systems, please see [gluon.mxnet.io](http://gluon.mxnet.io). 
+This is a selected subset of Gluon tutorials that explain basic usage of Gluon and fundamental concepts in deep learning. For a comprehensive tutorial on Gluon that covers topics from basic statistics and probability theory to reinforcement learning and recommender systems, please see [gluon.mxnet.io](http://gluon.mxnet.io).
 
 ### Basics
 
@@ -67,6 +67,15 @@ These tutorials introduce a few fundamental concepts in deep learning and how to
    sparse/train
 ```
 
+### Advanced Neural Networks
+
+```eval_rst
+.. toctree::
+   :maxdepth: 1
+
+   unsupervised_learning/gan
+```
+
 <br>
 More tutorials and examples are available in the GitHub [repository](https://github.com/dmlc/mxnet/tree/master/example).
 
diff --git a/docs/tutorials/unsupervised_learning/gan.md b/docs/tutorials/unsupervised_learning/gan.md
index 709e1323c6..71774bc989 100644
--- a/docs/tutorials/unsupervised_learning/gan.md
+++ b/docs/tutorials/unsupervised_learning/gan.md
@@ -1,43 +1,43 @@
-# Generative Adversarial Networks
+# Generative Adversarial Network (GAN)
 
-GANs are an application of unsupervised learning - you don't need labels for your dataset in order to train a GAN.
- 
-The GAN framework composes of two neural networks: a generator network and a discriminator network.
+Generative Adversarial Networks (GANs) are a class of algorithms used in unsupervised learning - you don't need labels for your dataset in order to train a GAN.
 
-The generator's job is to take a set of random numbers and produce data (such as images or text).
+The GAN framework is composed of two neural networks: a Generator network and a Discriminator network.
 
-The discriminator then takes in that data as well as samples of that data from a dataset and tries to determine if is "fake" (created by the generator network) or "real" (from the original dataset).
+The Generator's job is to take a set of random numbers and produce the data (such as images or text).
 
-During training, the two networks play a game against each other. The generator tries to create realistic data, so that it can fool the discriminator into thinking that the data it generated is from the original dataset. At the same time, the discriminator tries to not be fooled - it learns to become better at determining if data is real or fake.
+The Discriminator then takes in that generated data, as well as samples of real data from a dataset, and tries to determine whether each sample is "fake" (created by the Generator network) or "real" (from the original dataset).
 
-Since the two networks are fighting in this game, they can be seen as as adversaries, which is where the term "Generative Adverserial Network" comes from.
+During training, the two networks play a game against each other. The Generator tries to create realistic data, so that it can fool the Discriminator into thinking that the data it generated is from the original dataset. At the same time, the Discriminator tries to not be fooled - it learns to become better at determining if data is real or fake.
+
+Since the two networks are fighting in this game, they can be seen as adversaries, which is where the term "Generative Adversarial Network" comes from.
 
 ## Deep Convolutional Generative Adversarial Networks
 
 This tutorial takes a look at Deep Convolutional Generative Adversarial Networks (DCGAN), which combines Convolutional Neural Networks (CNNs) and GANs.
 
-We will create a DCGAN that is able to create images of handwritten digits from random numbers.The tutorial uses the neural net architecture and guidelines outlined in [this paper](https://arxiv.org/abs/1511.06434), and the MNIST dataset.
+We will create a DCGAN that is able to create images of handwritten digits from random numbers. The tutorial uses the neural net architecture and guidelines outlined in [this paper](https://arxiv.org/abs/1511.06434), and the MNIST dataset.
 
-##How to Use This Tutorial
+## How to Use This Tutorial
 You can use this tutorial by executing each snippet of python code in order as it appears in the tutorial.
 
 
-1. The first net is the "generator" and creates images of handwritten digits from random numbers.
-2. The second net is the "discriminator" and determines if the image created by the generator is real (a realistic looking image of handwritten digits) or fake (an image that doesn't look like it came from the original dataset).
-    
+1. The first net is the "Generator" and creates images of handwritten digits from random numbers.
+2. The second net is the "Discriminator" and determines if the image created by the Generator is real (a realistic looking image of handwritten digits) or fake (an image that does not look like it is from the original dataset).
+
 Apart from creating a DCGAN, you'll also learn:
 
-- How to manipulate and iterate through batches images that you can feed into your neural network.
+- How to manipulate and iterate through batches of image data that you can feed into your neural network.
 
 - How to create a custom MXNet data iterator that generates random numbers from a normal distribution.
 
-- How to create a custom training process in MXNet, using lower level functions from the MXNet Module API such as .bind() .forward() and .backward(). The training process for a DCGAN is more complex than many other neural net's, so we need to use these functions instead of using the higher level .fit() function.
+- How to create a custom training process in MXNet, using lower-level functions from the MXNet Module API such as .bind(), .forward(), and .backward(). The training process for a DCGAN is more complex than for many other neural networks, so we need to use these functions instead of the higher-level .fit() function.
 
 - How to visualize images as they are going through the training process
 
 ## Prerequisites
 
-This tutorial assumes you're familiar with the concept of CNN's and have implemented one in MXNet. You should also be familiar with the concept of logistic regression. Having a basic understanding for MXNet data iterators helps, since we'll create a custom Data Iterator to iterate though random numbers as inputs to our generator network. 
+This tutorial assumes you are familiar with the concepts of CNNs and have implemented one in MXNet. You should also be familiar with the concept of logistic regression. Having a basic understanding of MXNet data iterators helps, since we will create a custom data iterator to iterate through random numbers as inputs to the Generator network.
 
 This example is designed to be trained on a single GPU. Training this network on CPU can be slow, so it's recommended that you use a GPU for training.
 
@@ -47,17 +47,17 @@ To complete this tutorial, you need:
 - Python 2.7, and the following libraries for Python:
     - Numpy - for matrix math
     - OpenCV - for image manipulation
-    - Scikit-learn - to easily get our dataset
-    - Matplotlib - to visualize our output
+    - Scikit-learn - to easily get the MNIST dataset
+    - Matplotlib - to visualize the output
 
 ## The Data
-We need two pieces of data to train our DCGAN:
+We need two pieces of data to train the DCGAN:
     1. Images of handwritten digits from the MNIST dataset
     2. Random numbers from a normal distribution
 
-Our generator network will use the random numbers as the input to produce images of handwritten digits, and out discriminator network will use images of handwritten digits from the MNIST dataset to determine if images produced by our generator are realistic.
+The Generator network will use the random numbers as the input to produce the images of handwritten digits, and the Discriminator network will use images of handwritten digits from the MNIST dataset to determine if images produced by the Generator are realistic.
 
-We are going to use the python library, scikit-learn, to get the MNIST dataset. Scikit-learn comes with a function that gets the dataset for us, which we will then manipulate to create our training and testing inputs.
+We are going to use the Python library scikit-learn to get the MNIST dataset. Scikit-learn comes with a function that gets the dataset for us, which we will then manipulate to create the training and testing inputs.
 
 The MNIST dataset contains 70,000 images of handwritten digits. Each image is 28x28 pixels in size. To create random numbers, we're going to create a custom MXNet data iterator, which will return random numbers from a normal distribution as we need them.
 
@@ -65,13 +65,14 @@ The MNIST dataset contains 70,000 images of handwritten digits. Each image is 28
 
 ### 1. Preparing the MNIST dataset
 
-Let's start by preparing our handwritten digits from the MNIST dataset. We import the fetch_mldata function from scikit-learn, and use it to get the MNSIT dataset. Notice that it's shape is 70000x784. This contains the 70000 images on every row and 784 pixels of each image in the columns of each row. Each image is 28x28 pixels, but has been flattened so that all 784 images are represented in a single list.
+Let us start by preparing the handwritten digits from the MNIST dataset. We import the fetch_mldata function from scikit-learn, and use it to get the MNIST dataset. Notice that its shape is 70000x784. This contains 70,000 images, one per row, with the 784 pixels of each image in the columns of that row. Each image is 28x28 pixels, but has been flattened so that all 784 pixels are represented in a single list.
+
 ```python
 from sklearn.datasets import fetch_mldata
 mnist = fetch_mldata('MNIST original')
 ```
 
-Next, we'll randomize the handwritten digits by using numpy to create random permutations on the dataset on our rows (images). We'll then reshape our dataset from 70000x786 to 70000x28x28, so that every image in our dataset is arranged into a 28x28 grid, where each cell in the grid represents 1 pixel of the image.
+Next, we will randomize the handwritten digits by using numpy to create a random permutation of the rows (images) of the dataset. We will then reshape the dataset from 70000x784 to 70000x28x28, so that every image in the dataset is arranged into a 28x28 grid, where each cell in the grid represents 1 pixel of the image.
 
 ```python
 import numpy as np
@@ -81,22 +82,23 @@ p = np.random.permutation(mnist.data.shape[0])
 X = mnist.data[p]
 X = X.reshape((70000, 28, 28))
 ```
-Since the DCGAN that we're creating takes in a 64x64 image as the input, we'll use OpenCV to resize the each 28x28 image to 64x64 images:
+Since the DCGAN that we're creating takes in a 64x64 image as the input, we will use OpenCV to resize each 28x28 image to a 64x64 image:
 ```python
 import cv2
 X = np.asarray([cv2.resize(x, (64,64)) for x in X])
 ```
-Each pixel in our 64x64 image is represented by a number between 0-255, that represents the intensity of the pixel. However, we want to input numbers between -1 and 1 into our DCGAN, as suggested by the research paper. To rescale our pixels to be in the range of -1 to 1, we'll divide each pixel by (255/2). This put our images on a scale of 0-2. We can then subtract by 1, to get them in the range of -1 to 1.
+Each pixel in the 64x64 image is represented by a number between 0 and 255 that gives the intensity of the pixel. However, we want to input numbers between -1 and 1 into the DCGAN, as suggested by the [research paper](https://arxiv.org/abs/1511.06434). To rescale the pixel values, we divide them by (255/2). This changes the scale to 0-2. We then subtract 1 to get them into the range of -1 to 1.
+
 ```python
 X = X.astype(np.float32)/(255.0/2) - 1.0
 ```
-Ultimately, images are inputted into our neural net from a 70000x3x64x64 array, and they are currently in a 70000x64x64 array. We need to add 3 channels to our images. Typically when we are working with images, the 3 channels represent the red, green, and blue components of each image. Since the MNIST dataset is grayscale, we only need 1 channel to represent our dataset. We will pad the other channels with 0's:
+Ultimately, images are fed into the neural net through a 70000x3x64x64 array, but they are currently in a 70000x64x64 array. We need to add 3 channels to the images. Typically, when we are working with images, the 3 channels represent the red, green, and blue (RGB) components of each image. Since the MNIST dataset is grayscale, we only need 1 channel to represent the dataset. We will pad the other channels with 0's:
 
 ```python
 X = X.reshape((70000, 1, 64, 64))
 X = np.tile(X, (1, 3, 1, 1))
 ```
-Finally, we'll put our images into MXNet's NDArrayIter, which will allow MXNet to easily iterate through our images during training. We'll also split up them images into a batches, with 64 images in each batch. Every time we iterate, we'll get a 4 dimensional array with size (64, 3, 64, 64), representing a batch of 64 images.
+Finally, we will put the images into MXNet's NDArrayIter, which will allow MXNet to easily iterate through the images during training. We will also split them up into batches of 64 images each. Every time we iterate, we will get a 4-dimensional array with size (64, 3, 64, 64), representing a batch of 64 images.
 ```python
 import mxnet as mx
 batch_size = 64
@@ -104,7 +106,8 @@ image_iter = mx.io.NDArrayIter(X, batch_size=batch_size)
 ```
 ### 2. Preparing Random Numbers
 
-We need to input random numbers from a normal distribution to our generator network, so we'll create an MXNet DataIter that produces random numbers for each training batch. The DataIter is the base class of MXNet's Data Loading API. Below, we create a class called RandIter which is a subclass of DataIter. We use MXNet's built in mx.random.normal function in order to return the normally distributed random numbers every time we iterate.
+We need to input random numbers from a normal distribution to the Generator network, so we will create an MXNet DataIter that produces random numbers for each training batch. DataIter is the base class of MXNet's Data Loading API. Below, we create a class called RandIter, which is a subclass of DataIter. We use MXNet's built-in mx.random.normal function to return random numbers from a normal distribution during the iteration.
+
 ```python
 class RandIter(mx.io.DataIter):
     def __init__(self, batch_size, ndim):
@@ -117,22 +120,22 @@ class RandIter(mx.io.DataIter):
         return True
 
     def getdata(self):
-        #Returns random numbers from a gaussian (normal) distribution 
+        #Returns random numbers from a gaussian (normal) distribution
         #with mean=0 and standard deviation = 1
         return [mx.random.normal(0, 1.0, shape=(self.batch_size, self.ndim, 1, 1))]
 ```
-When we initalize our RandIter, we need to provide two numbers: the batch size and how many random numbers we want to produce a single image from. This number is referred to as Z, and we'll set this to 100. This value comes from the research paper on the topic. Every time we iterate and get a batch of random numbers, we will get a 4 dimensional array with shape: (batch_size, Z, 1, 1), which in our example is (64, 100, 1, 1).
+When we initialize the RandIter, we need to provide two numbers: the batch size and how many random numbers we want to use to produce a single image. This number is referred to as Z, and we will set it to 100. This value comes from the research paper on the topic. Every time we iterate and get a batch of random numbers, we will get a 4-dimensional array with shape (batch_size, Z, 1, 1), which in our example is (64, 100, 1, 1).
 ```python
 Z = 100
 rand_iter = RandIter(batch_size, Z)
 ```
 ## Create the Model
 
-Our model has two networks that we will train together - the generator network and the disciminator network.
+The model has two networks that we will train together - the Generator network and the Discriminator network.
 
 ### The Generator
 
-Let's start off by defining the generator network, which uses deconvolutional layers (also callled fractionally strided layers) to generate an image form random numbers :
+Let us start off by defining the Generator network, which uses Deconvolution layers (also called fractionally strided layers) to generate an image from random numbers:
 ```python
 no_bias = True
 fix_gamma = True
@@ -160,16 +163,16 @@ g5 = mx.sym.Deconvolution(gact4, name='g5', kernel=(4,4), stride=(2,2), pad=(1,1
 generatorSymbol = mx.sym.Activation(g5, name='gact5', act_type='tanh')
 ```
 
-Our generator image starts with random numbers that will be obtained from the RandIter we created earlier, so we created the rand variable for this input.
+The Generator starts with random numbers that will be obtained from the RandIter we created earlier, so we created the rand variable for this input.
 We then build the model, starting with a Deconvolution layer (sometimes called a 'fractionally strided layer'). We apply batch normalization and ReLU activation after the Deconvolution layer.
 
-We repeat this process 4 times, applying a (2,2) stride and (1,1) pad at each Deconvolutional layer, which doubles the size of our image at each layer. By creating these layers, our generator network will have to learn to upsample our input vector of random numbers, Z at each layer, so that network output a final image. We also reduce half the number of filters at each layer, reducing dimensionality at each layer. Ultimatley, our output layer is a 64x64x3 layer, representing the size and channels of our image. We use tanh activation instead of relu on the last layer, as recommended by the research on DCGANs. The output of neurons in the final gout layer represent the pixels of generated image.
+We repeat this process 4 times, applying a (2,2) stride and (1,1) pad at each Deconvolution layer, which doubles the size of the image at each layer. By creating these layers, the Generator network has to learn to upsample the input vector of random numbers, Z, at each layer, so that the network outputs a final image. We also halve the number of filters at each layer, reducing dimensionality at each layer. Ultimately, the output layer is a 64x64x3 layer, representing the size and channels of the image. We use tanh activation instead of relu on the last layer, as recommended by the research on DCGANs. The outputs of the neurons in the final gout layer represent the pixels of the generated image.
 
-Notice we used 3 parameters to help us create our model: no_bias, fixed_gamma, and epsilon. Neurons in our network won't have a bias added to them, this seems to work better in practice for the DCGAN. In our batch norm layer, we set fixed_gamma=True, which means gamma=1 for all of our batch norm layers. epsilon is a small number that gets added to our batch norm so that we don't end up dividing by zero. By default, CuDNN requires that this number is greater than 1e-5, so we add a small number to this value, ensuring this values stays small.
+Notice we used 3 parameters to help us create the model: no_bias, fix_gamma, and epsilon. Neurons in the network won't have a bias added to them; this seems to work better in practice for the DCGAN. In the batch norm layers, we set fix_gamma=True, which means gamma=1 for all of the batch norm layers. epsilon is a small number that gets added to the batch norm so that we don't end up dividing by zero. By default, CuDNN requires that this number is greater than 1e-5, so we add a small number to this value, ensuring that it stays small.
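As a quick check of that doubling, the following minimal sketch uses MXNet symbol shape inference; the 512-channel, 4x4 input shape and the name deconv_check are illustrative assumptions rather than values taken from the tutorial code above:

```python
# Illustrative sketch: a Deconvolution with kernel (4,4), stride (2,2), pad (1,1)
# doubles the spatial size of its input.
import mxnet as mx

x = mx.sym.Variable('x')
deconv = mx.sym.Deconvolution(x, kernel=(4, 4), stride=(2, 2), pad=(1, 1),
                              num_filter=256, no_bias=True, name='deconv_check')
# Infer the output shape for a batch of 64 feature maps of shape 512x4x4.
arg_shapes, out_shapes, aux_shapes = deconv.infer_shape(x=(64, 512, 4, 4))
print(out_shapes)  # [(64, 256, 8, 8)] - the 4x4 input comes out as 8x8
```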
 
 ### The Discriminator
 
-Let's now create our discriminator network, which will take in images of handwritten digits from the MNIST dataset and images created by the generator network:
+Let us now create the Discriminator network, which will take in images of handwritten digits from the MNIST dataset and images created by the Generator network:
 ```python
 data = mx.sym.Variable('data')
 
@@ -195,19 +198,22 @@ label = mx.sym.Variable('label')
 discriminatorSymbol = mx.sym.LogisticRegressionOutput(data=d5, label=label, name='dloss')
 ```
 
-We start off by creating the data variable, which is used to hold our input images to the discriminator.
+We start off by creating the data variable, which is used to hold the input images to the Discriminator.
+
+The Discriminator then goes through a series of 5 convolutional layers, each with a 4x4 kernel, 2x2 stride, and 1x1 pad. These layers halve the size of the image (which starts at 64x64) at each convolutional layer. The model also increases dimensionality at each layer by doubling the number of filters per convolutional layer, starting at 128 filters and ending at 1024 filters before we flatten the output.
 
-The discriminator then goes through a series of 5 convolutional layers, each with a 4x4 kernel, 2x2 stride, and 1x1 pad. These layers half the size of the image (which starts at 64x64) at each convolutional layer. Our model also increases dimensionality at each layer by doubling the number of filters per convolutional layer, starting at 128 filters and ending at 1024 filters before we flatten the output.
+At the final convolution, we flatten the neural net to get one number as the final output of the Discriminator network. This number is the probability that the image is real, as determined by the Discriminator. We use logistic regression to determine this probability. When we pass in "real" images from the MNIST dataset, we label them as 1, and we label the "fake" images from the Generator net as 0, to perform logistic regression on the Discriminator network.
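Conceptually, the LogisticRegressionOutput layer turns that single flattened score into a probability by applying a sigmoid and compares it against the 0/1 label. The tiny sketch below uses made-up scores purely to illustrate that mapping; it is not part of the Discriminator code above:

```python
# Illustrative sketch only - the raw scores are made-up numbers.
import mxnet as mx

score = mx.nd.array([2.0, -1.0])   # two example raw outputs of the flattened d5 layer
prob_real = mx.nd.sigmoid(score)   # probability that each image is "real"
label = mx.nd.array([1.0, 0.0])    # 1 = real MNIST image, 0 = fake image from the Generator
print(prob_real.asnumpy())         # approximately [0.88, 0.27]
print(label.asnumpy())
```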
 
-At the final convolution, we flatten the neural net to get one number as the final output of discriminator network. This number is the probability the image is real, as determined by our discriminator. We use logistic regression to determine this probability. When we pass in "real" images from the MNIST dataset, we can label these as 1 and we can label the "fake" images from the generator net as 0 to perform logistic regression on the discriminator network.
-Prepare the models using the Module API
+### Prepare the models using the Module API
 
-So far we have defined a MXNet Symbol for both the generator and the discriminator network. Before we can train our model, we need to bind these symbols using the Module API, which creates the computation graph for our models. It also allows us to decide how we want to initialize our model and what type of optimizer we want to use. Let's set up Module for both of our networks:
+So far we have defined an MXNet Symbol for both the Generator and the Discriminator network. Before we can train the model, we need to bind these symbols using the Module API, which creates the computation graph for the models. It also allows us to decide how we want to initialize the model and what type of optimizer we want to use. Let us set up the Module for both networks:
 ```python
-#Hyperperameters
+#Hyper-parameters
 sigma = 0.02
 lr = 0.0002
 beta1 = 0.5
+# If you do not have a GPU, use the CPU context outlined below:
+# ctx = mx.cpu()
 ctx = mx.gpu(0)
 
 #=============Generator Module=============
@@ -236,27 +242,27 @@ discriminator.init_optimizer(
     })
 mods.append(discriminator)
 ```
-First, we create Modules for our networks and then bind the symbols that we've created in the previous steps to our modules.
-We use rand_iter.provide_data as the  data_shape to bind our generator network. This means that as we iterate though batches of data on the generator Module, our RandIter will provide us with random numbers to feed our Module using it's provide_data function.
+First, we create Modules for the networks and then bind the symbols that we've created in the previous steps to the modules.
+We use rand_iter.provide_data as the data_shapes argument to bind the Generator network. This means that as we iterate through batches of data on the Generator Module, the RandIter will provide us with random numbers to feed the Module using its provide_data function.
 
-Similarly, we bind the discriminator Module to image_iter.provide_data, which gives us images from MNIST from the NDArrayIter we had set up earlier, called image_iter.
+Similarly, we bind the Discriminator Module to image_iter.provide_data, which gives us images from MNIST from the NDArrayIter we had set up earlier, called image_iter.
 
-Notice that we're using the Normal initialization, with the hyperparameter sigma=0.02. This means our weight initializations for the neurons in our networks will random numbers from a Gaussian (normal) distribution with a mean of 0 and a standard deviation of 0.02.
+Notice that we are using Normal initialization with the hyperparameter sigma=0.02. This means that the weight initializations for the neurons in the networks will be random numbers drawn from a Gaussian (normal) distribution with a mean of 0 and a standard deviation of 0.02.
 
-We also use the adam optimizer for gradient decent. We've set up two hyperparameters, lr and beta1 based on the values used in the DCGAN paper. We're using a single gpu, gpu(0) for training.
+We also use the Adam optimizer for gradient descent. We have set up two hyperparameters, lr and beta1, based on the values used in the DCGAN paper. We are using a single GPU, gpu(0), for training. Set the context to cpu() if you do not have a GPU on your machine.
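If you are not sure whether a GPU is available, one possible way to fall back to the CPU automatically is sketched below; this snippet is an aside rather than part of the Module setup above, and it simply attempts a tiny allocation on gpu(0):

```python
# Optional sketch: pick gpu(0) if it is usable, otherwise fall back to the CPU.
import mxnet as mx

try:
    mx.nd.zeros((1,), ctx=mx.gpu(0)).asnumpy()  # forces the allocation to actually run
    ctx = mx.gpu(0)
except mx.base.MXNetError:
    ctx = mx.cpu()
```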
 
-### Visualizing Our Training
-Before we train the model, let's set up some helper functions that will help visualize what our generator is producing, compared to what the real image is:
+### Visualizing the Training
+Before we train the model, let us set up some helper functions to visualize what the Generator is producing, compared with the real images:
 ```python
 from matplotlib import pyplot as plt
 
-#Takes the images in our batch and arranges them in an array so that they can be
+#Takes the images in the batch and arranges them in an array so that they can be
 #Plotted using matplotlib
 def fill_buf(buf, num_images, img, shape):
     width = buf.shape[0]/shape[1]
     height = buf.shape[1]/shape[0]
-    img_width = (num_images%width)*shape[0]
-    img_hight = (num_images/height)*shape[1]
+    img_width = int(num_images%width)*shape[0]
+    img_hight = int(num_images/height)*shape[1]
     buf[img_hight:img_hight+shape[1], img_width:img_width+shape[0], :] = img
 
 #Plots two images side by side using matplotlib
@@ -268,8 +274,8 @@ def visualize(fake, real):
     #Repeat for real image
     real = real.transpose((0, 2, 3, 1))
     real = np.clip((real+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
-    
-    #Create buffer array that will hold all the images in our batch
+
+    #Create buffer array that will hold all the images in the batch
     #Fill the buffer so to arrange all images in the batch onto the buffer array
     n = np.ceil(np.sqrt(fake.shape[0]))
     fbuff = np.zeros((int(n*fake.shape[1]), int(n*fake.shape[2]), int(fake.shape[3])), dtype=np.uint8)
@@ -278,9 +284,9 @@ def visualize(fake, real):
     rbuff = np.zeros((int(n*real.shape[1]), int(n*real.shape[2]), int(real.shape[3])), dtype=np.uint8)
     for i, img in enumerate(real):
         fill_buf(rbuff, i, img, real.shape[1:3])
-        
+
     #Create a matplotlib figure with two subplots: one for the real and the other for the fake
-    #fill each plot with our buffer array, which creates the image
+    #fill each plot with the buffer array, which creates the image
     fig = plt.figure()
     ax1 = fig.add_subplot(2,2,1)
     ax1.imshow(fbuff)
@@ -288,22 +294,22 @@ def visualize(fake, real):
     ax2.imshow(rbuff)
     plt.show()
 ```
- 
+
 ## Fit the Model
 Training the DCGAN is a complex process that requires multiple steps.
-To fit the model, for every batch of data in our dataset:
+To fit the model, for every batch of data in the MNIST dataset:
 
-1. Use the Z vector, which contains our random numbers to do a forward pass through our generator. This outputs the "fake" image, since it's created from our generator.
+1. Use the Z vector, which contains the random numbers, to do a forward pass through the Generator network. This outputs the "fake" image, since it is created by the Generator.
 
-2. Use the fake image as the input to do a forward and backwards pass through the discriminator network. We set our labels for our logistic regression to 0 to represent that this is a fake image. This trains the discriminator to learn what a fake image looks like. We save the gradient produced in backpropogation for the next step.
+2. Use the fake image as the input to do a forward and backward pass through the Discriminator network. We set the labels for logistic regression to 0 to represent that this is a fake image. This trains the Discriminator to learn what a fake image looks like. We save the gradient produced in backpropagation for the next step.
 
-3. Do a forwards and backwards pass through the discriminator using a real image from our dataset. Our label for logistic regression will now be 1 to represent real images, so our discriminator can learn to recognize a real image.
+3. Do a forward and backward pass through the Discriminator using a real image from the MNIST dataset. The label for logistic regression will now be 1 to represent the real images, so the Discriminator can learn to recognize a real image.
 
-4. Update the discriminator by adding the result of the gradient generated during backpropogation on the fake image with the gradient from backpropogation on the real image.
+4. Update the Discriminator by adding the gradient generated during backpropagation on the fake image to the gradient from backpropagation on the real image.
 
-5. Now that the discriminator has been updated for the this batch, we still need to update the generator. First, do a forward and backwards pass with the same batch on the updated discriminator, to produce a new gradient. Use the new gradient to do a backwards pass
+5. Now that the Discriminator has been updated for this data batch, we still need to update the Generator. First, do a forward and backward pass with the same data batch on the updated Discriminator, to produce a new gradient. Use the new gradient to do a backward pass through the Generator and then update the Generator.
 
-Here's the main training loop for our DCGAN:
+Here is the main training loop for the DCGAN:
 
 ```python
 # =============train===============
@@ -317,29 +323,29 @@ for epoch in range(1):
         generator.forward(rbatch, is_train=True)
         #Output of training batch is the 64x64x3 image
         outG = generator.get_outputs()
-        
+
         #Pass the generated (fake) image through the discriminator, and save the gradient
         #Label (for logistic regression) is an array of 0's since this image is fake
         label = mx.nd.zeros((batch_size,), ctx=ctx)
         #Forward pass on the output of the discriminator network
         discriminator.forward(mx.io.DataBatch(outG, [label]), is_train=True)
-        #Do the backwards pass and save the gradient
+        #Do the backward pass and save the gradient
         discriminator.backward()
         gradD = [[grad.copyto(grad.context) for grad in grads] for grads in discriminator._exec_group.grad_arrays]
-        
+
         #Pass a batch of real images from MNIST through the discriminator
         #Set the label to be an array of 1's because these are the real images
         label[:] = 1
         batch.label = [label]
         #Forward pass on a batch of MNIST images
         discriminator.forward(batch, is_train=True)
-        #Do the backwards pass and add the saved gradient from the fake images to the gradient 
+        #Do the backward pass and add the saved gradient from the fake images to the gradient
         #generated by this backwards pass on the real images
         discriminator.backward()
         for gradsr, gradsf in zip(discriminator._exec_group.grad_arrays, gradD):
             for gradr, gradf in zip(gradsr, gradsf):
                 gradr += gradf
-        #Update gradient on the discriminator 
+        #Update gradient on the discriminator
         discriminator.update()
 
         #Now that we've updated the discriminator, let's update the generator
@@ -353,7 +359,7 @@ for epoch in range(1):
         generator.backward(diffD)
         #Update the gradients on the generator
         generator.update()
-        
+
         #Increment to the next batch, printing every 50 batches
         i += 1
         if i % 50 == 0:
@@ -364,20 +370,20 @@ for epoch in range(1):
             visualize(outG[0].asnumpy(), batch.data[0].asnumpy())
 ```
 
-This causes our GAN to train and we can visualize the progress that we're making as our networks train. After every 25 iterations, we're calling the visualize function that we created earlier, which creates the visual plots during training.
+This will train the GAN and visualize the progress that we are making as the networks are trained. After every 50 iterations, we call the visualize function that we created earlier, which plots the intermediate results.
 
-The plot on our left will represent what our generator created (the fake image) in the most recent iteration. The plot on the right will represent the original (real) image from the MNIST dataset that was inputted to the discriminator on the same iteration.
+The plot on the left will represent what the Generator created (the fake image) in the most recent iteration. The plot on the right will represent the original (real) image from the MNIST dataset that was input to the Discriminator on the same iteration.
 
-As training goes on the generator becomes better at generating realistic images. You can see this happening since images on the left become closer to the original dataset with each iteration.
+As the training goes on, the Generator becomes better at generating realistic images. You can see this happening since the images on the left become closer to the original dataset with each iteration.
 
 ## Summary
 
-We've now sucessfully used Apache MXNet to train a Deep Convolutional GAN using the MNIST dataset.
+We have now successfully used Apache MXNet to train a Deep Convolutional Generative Adversarial Network (DCGAN) using the MNIST dataset.
 
-As a result, we've created two neural nets: a generator, which is able to create images of handwritten digits from random numbers, and a discriminator, which is able to take an image and determine if it is an image of handwritten digits.
+As a result, we have created two neural nets: a Generator, which is able to create images of handwritten digits from random numbers, and a Discriminator, which is able to take an image and determine if it is an image of handwritten digits.
 
-Along the way, we've learned how to do the image manipulation and visualization that's associted with training deep neural nets. We've also learned how to some of MXNet's advanced training functionality to fit our model.
+Along the way, we have learned how to do the image manipulation and visualization that is associated with training deep neural nets. We have also learned how to use MXNet's Module API to perform the more advanced model training needed to fit this model.
 
 ## Acknowledgements
-This tutorial is based on [MXNet DCGAN codebase](https://github.com/apache/incubator-mxnet/blob/master/example/gan/dcgan.py), 
-[The original paper on GANs](https://arxiv.org/abs/1406.2661), as well as [this paper on deep convolutional GANs](https://arxiv.org/abs/1511.06434).
\ No newline at end of file
+This tutorial is based on [MXNet DCGAN codebase](https://github.com/apache/incubator-mxnet/blob/master/example/gan/dcgan.py),
+[The original paper on GANs](https://arxiv.org/abs/1406.2661), as well as [this paper on deep convolutional GANs](https://arxiv.org/abs/1511.06434).
diff --git a/example/gan/dcgan.py b/example/gan/dcgan.py
deleted file mode 100644
index 981f4a4778..0000000000
--- a/example/gan/dcgan.py
+++ /dev/null
@@ -1,299 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-from __future__ import print_function
-import mxnet as mx
-import numpy as np
-from sklearn.datasets import fetch_mldata
-from matplotlib import pyplot as plt
-import logging
-import cv2
-from datetime import datetime
-
-def make_dcgan_sym(ngf, ndf, nc, no_bias=True, fix_gamma=True, eps=1e-5 + 1e-12):
-    BatchNorm = mx.sym.BatchNorm
-    rand = mx.sym.Variable('rand')
-
-    g1 = mx.sym.Deconvolution(rand, name='g1', kernel=(4,4), num_filter=ngf*8, no_bias=no_bias)
-    gbn1 = BatchNorm(g1, name='gbn1', fix_gamma=fix_gamma, eps=eps)
-    gact1 = mx.sym.Activation(gbn1, name='gact1', act_type='relu')
-
-    g2 = mx.sym.Deconvolution(gact1, name='g2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ngf*4, no_bias=no_bias)
-    gbn2 = BatchNorm(g2, name='gbn2', fix_gamma=fix_gamma, eps=eps)
-    gact2 = mx.sym.Activation(gbn2, name='gact2', act_type='relu')
-
-    g3 = mx.sym.Deconvolution(gact2, name='g3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ngf*2, no_bias=no_bias)
-    gbn3 = BatchNorm(g3, name='gbn3', fix_gamma=fix_gamma, eps=eps)
-    gact3 = mx.sym.Activation(gbn3, name='gact3', act_type='relu')
-
-    g4 = mx.sym.Deconvolution(gact3, name='g4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ngf, no_bias=no_bias)
-    gbn4 = BatchNorm(g4, name='gbn4', fix_gamma=fix_gamma, eps=eps)
-    gact4 = mx.sym.Activation(gbn4, name='gact4', act_type='relu')
-
-    g5 = mx.sym.Deconvolution(gact4, name='g5', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=nc, no_bias=no_bias)
-    gout = mx.sym.Activation(g5, name='gact5', act_type='tanh')
-
-    data = mx.sym.Variable('data')
-    label = mx.sym.Variable('label')
-
-    d1 = mx.sym.Convolution(data, name='d1', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ndf, no_bias=no_bias)
-    dact1 = mx.sym.LeakyReLU(d1, name='dact1', act_type='leaky', slope=0.2)
-
-    d2 = mx.sym.Convolution(dact1, name='d2', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ndf*2, no_bias=no_bias)
-    dbn2 = BatchNorm(d2, name='dbn2', fix_gamma=fix_gamma, eps=eps)
-    dact2 = mx.sym.LeakyReLU(dbn2, name='dact2', act_type='leaky', slope=0.2)
-
-    d3 = mx.sym.Convolution(dact2, name='d3', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ndf*4, no_bias=no_bias)
-    dbn3 = BatchNorm(d3, name='dbn3', fix_gamma=fix_gamma, eps=eps)
-    dact3 = mx.sym.LeakyReLU(dbn3, name='dact3', act_type='leaky', slope=0.2)
-
-    d4 = mx.sym.Convolution(dact3, name='d4', kernel=(4,4), stride=(2,2), pad=(1,1), num_filter=ndf*8, no_bias=no_bias)
-    dbn4 = BatchNorm(d4, name='dbn4', fix_gamma=fix_gamma, eps=eps)
-    dact4 = mx.sym.LeakyReLU(dbn4, name='dact4', act_type='leaky', slope=0.2)
-
-    d5 = mx.sym.Convolution(dact4, name='d5', kernel=(4,4), num_filter=1, no_bias=no_bias)
-    d5 = mx.sym.Flatten(d5)
-
-    dloss = mx.sym.LogisticRegressionOutput(data=d5, label=label, name='dloss')
-    return gout, dloss
-
-def get_mnist():
-    mnist = fetch_mldata('MNIST original')
-    np.random.seed(1234) # set seed for deterministic ordering
-    p = np.random.permutation(mnist.data.shape[0])
-    X = mnist.data[p]
-    X = X.reshape((70000, 28, 28))
-
-    X = np.asarray([cv2.resize(x, (64,64)) for x in X])
-
-    X = X.astype(np.float32)/(255.0/2) - 1.0
-    X = X.reshape((70000, 1, 64, 64))
-    X = np.tile(X, (1, 3, 1, 1))
-    X_train = X[:60000]
-    X_test = X[60000:]
-
-    return X_train, X_test
-
-class RandIter(mx.io.DataIter):
-    def __init__(self, batch_size, ndim):
-        self.batch_size = batch_size
-        self.ndim = ndim
-        self.provide_data = [('rand', (batch_size, ndim, 1, 1))]
-        self.provide_label = []
-
-    def iter_next(self):
-        return True
-
-    def getdata(self):
-        return [mx.random.normal(0, 1.0, shape=(self.batch_size, self.ndim, 1, 1))]
-
-class ImagenetIter(mx.io.DataIter):
-    def __init__(self, path, batch_size, data_shape):
-        self.internal = mx.io.ImageRecordIter(
-            path_imgrec = path,
-            data_shape  = data_shape,
-            batch_size  = batch_size,
-            rand_crop   = True,
-            rand_mirror = True,
-            max_crop_size = 256,
-            min_crop_size = 192)
-        self.provide_data = [('data', (batch_size,) + data_shape)]
-        self.provide_label = []
-
-    def reset(self):
-        self.internal.reset()
-
-    def iter_next(self):
-        return self.internal.iter_next()
-
-    def getdata(self):
-        data = self.internal.getdata()
-        data = data * (2.0/255.0)
-        data -= 1
-        return [data]
-
-def fill_buf(buf, i, img, shape):
-    n = buf.shape[0]/shape[1]
-    m = buf.shape[1]/shape[0]
-
-    sx = (i%m)*shape[0]
-    sy = (i/m)*shape[1]
-    buf[sy:sy+shape[1], sx:sx+shape[0], :] = img
-
-def visual(title, X):
-    assert len(X.shape) == 4
-    X = X.transpose((0, 2, 3, 1))
-    X = np.clip((X+1.0)*(255.0/2.0), 0, 255).astype(np.uint8)
-    n = np.ceil(np.sqrt(X.shape[0]))
-    buff = np.zeros((int(n*X.shape[1]), int(n*X.shape[2]), int(X.shape[3])), dtype=np.uint8)
-    for i, img in enumerate(X):
-        fill_buf(buff, i, img, X.shape[1:3])
-    buff = cv2.cvtColor(buff, cv2.COLOR_BGR2RGB)
-    plt.imshow(buff)
-    plt.title(title)
-    plt.show()
-
-if __name__ == '__main__':
-    logging.basicConfig(level=logging.DEBUG)
-
-    # =============setting============
-    dataset = 'mnist'
-    imgnet_path = './train.rec'
-    ndf = 64
-    ngf = 64
-    nc = 3
-    batch_size = 64
-    Z = 100
-    lr = 0.0002
-    beta1 = 0.5
-    ctx = mx.gpu(0)
-    check_point = False
-
-    symG, symD = make_dcgan_sym(ngf, ndf, nc)
-    #mx.viz.plot_network(symG, shape={'rand': (batch_size, 100, 1, 1)}).view()
-    #mx.viz.plot_network(symD, shape={'data': (batch_size, nc, 64, 64)}).view()
-
-    # ==============data==============
-    if dataset == 'mnist':
-        X_train, X_test = get_mnist()
-        train_iter = mx.io.NDArrayIter(X_train, batch_size=batch_size)
-    elif dataset == 'imagenet':
-        train_iter = ImagenetIter(imgnet_path, batch_size, (3, 64, 64))
-    rand_iter = RandIter(batch_size, Z)
-    label = mx.nd.zeros((batch_size,), ctx=ctx)
-
-    # =============module G=============
-    modG = mx.mod.Module(symbol=symG, data_names=('rand',), label_names=None, context=ctx)
-    modG.bind(data_shapes=rand_iter.provide_data)
-    modG.init_params(initializer=mx.init.Normal(0.02))
-    modG.init_optimizer(
-        optimizer='adam',
-        optimizer_params={
-            'learning_rate': lr,
-            'wd': 0.,
-            'beta1': beta1,
-        })
-    mods = [modG]
-
-    # =============module D=============
-    modD = mx.mod.Module(symbol=symD, data_names=('data',), label_names=('label',), context=ctx)
-    modD.bind(data_shapes=train_iter.provide_data,
-              label_shapes=[('label', (batch_size,))],
-              inputs_need_grad=True)
-    modD.init_params(initializer=mx.init.Normal(0.02))
-    modD.init_optimizer(
-        optimizer='adam',
-        optimizer_params={
-            'learning_rate': lr,
-            'wd': 0.,
-            'beta1': beta1,
-        })
-    mods.append(modD)
-
-
-    # ============printing==============
-    def norm_stat(d):
-        return mx.nd.norm(d)/np.sqrt(d.size)
-    mon = mx.mon.Monitor(10, norm_stat, pattern=".*output|d1_backward_data", sort=True)
-    mon = None
-    if mon is not None:
-        for mod in mods:
-            pass
-
-    def facc(label, pred):
-        pred = pred.ravel()
-        label = label.ravel()
-        return ((pred > 0.5) == label).mean()
-
-    def fentropy(label, pred):
-        pred = pred.ravel()
-        label = label.ravel()
-        return -(label*np.log(pred+1e-12) + (1.-label)*np.log(1.-pred+1e-12)).mean()
-
-    mG = mx.metric.CustomMetric(fentropy)
-    mD = mx.metric.CustomMetric(fentropy)
-    mACC = mx.metric.CustomMetric(facc)
-
-    print('Training...')
-    stamp =  datetime.now().strftime('%Y_%m_%d-%H_%M')
-
-    # =============train===============
-    for epoch in range(100):
-        train_iter.reset()
-        for t, batch in enumerate(train_iter):
-            rbatch = rand_iter.next()
-
-            if mon is not None:
-                mon.tic()
-
-            modG.forward(rbatch, is_train=True)
-            outG = modG.get_outputs()
-
-            # update discriminator on fake
-            label[:] = 0
-            modD.forward(mx.io.DataBatch(outG, [label]), is_train=True)
-            modD.backward()
-            #modD.update()
-            gradD = [[grad.copyto(grad.context) for grad in grads] for grads in modD._exec_group.grad_arrays]
-
-            modD.update_metric(mD, [label])
-            modD.update_metric(mACC, [label])
-
-            # update discriminator on real
-            label[:] = 1
-            batch.label = [label]
-            modD.forward(batch, is_train=True)
-            modD.backward()
-            for gradsr, gradsf in zip(modD._exec_group.grad_arrays, gradD):
-                for gradr, gradf in zip(gradsr, gradsf):
-                    gradr += gradf
-            modD.update()
-
-            modD.update_metric(mD, [label])
-            modD.update_metric(mACC, [label])
-
-            # update generator
-            label[:] = 1
-            modD.forward(mx.io.DataBatch(outG, [label]), is_train=True)
-            modD.backward()
-            diffD = modD.get_input_grads()
-            modG.backward(diffD)
-            modG.update()
-
-            mG.update([label], modD.get_outputs())
-
-
-            if mon is not None:
-                mon.toc_print()
-
-            t += 1
-            if t % 10 == 0:
-                print('epoch:', epoch, 'iter:', t, 'metric:', mACC.get(), mG.get(), mD.get())
-                mACC.reset()
-                mG.reset()
-                mD.reset()
-
-                visual('gout', outG[0].asnumpy())
-                diff = diffD[0].asnumpy()
-                diff = (diff - diff.mean())/diff.std()
-                visual('diff', diff)
-                visual('data', batch.data[0].asnumpy())
-
-        if check_point:
-            print('Saving...')
-            modG.save_params('%s_G_%s-%04d.params'%(dataset, stamp, epoch))
-            modD.save_params('%s_D_%s-%04d.params'%(dataset, stamp, epoch))


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services