Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/08 19:25:15 UTC

[GitHub] piiswrong closed pull request #8991: Some fixes for example/reinforcement-learning/parallel_actor_critic

URL: https://github.com/apache/incubator-mxnet/pull/8991
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/example/reinforcement-learning/parallel_actor_critic/README.md b/example/reinforcement-learning/parallel_actor_critic/README.md
index d734ceb190..d3288492a6 100644
--- a/example/reinforcement-learning/parallel_actor_critic/README.md
+++ b/example/reinforcement-learning/parallel_actor_critic/README.md
@@ -10,6 +10,14 @@ Please see the accompanying [tutorial](https://minpy.readthedocs.io/en/latest/tu
 
 Author: Sean Welleck ([@wellecks](https://github.com/wellecks)), Reed Lee ([@loofahcus](https://github.com/loofahcus))
 
+
+## Prerequisites
+  - Install Scikit-learn: `python -m pip install --user sklearn`
+  - Install SciPy: `python -m pip install --user scipy`
+  - Install the required OpenAI environments. For example, install Atari: `pip install gym[atari]`
+
+For more details, refer to https://github.com/openai/gym
+
 ## Training
 
 #### Atari Pong
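
The prerequisites added above can be sanity-checked before training. The snippet below is a minimal sketch, assuming gym[atari] is installed as described and that the pre-0.26 gym API (where reset() returns only the observation) is in use:

    import gym  # assumes the gym[atari] extra from the prerequisites above

    # Sanity check: the Atari Pong environment should resolve and reset cleanly.
    env = gym.make('PongDeterministic-v4')
    obs = env.reset()
    print(env.action_space, obs.shape)  # e.g. Discrete(6) and (210, 160, 3)
    env.close()
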
diff --git a/example/reinforcement-learning/parallel_actor_critic/model.py b/example/reinforcement-learning/parallel_actor_critic/model.py
index b90af67905..384f48cfab 100644
--- a/example/reinforcement-learning/parallel_actor_critic/model.py
+++ b/example/reinforcement-learning/parallel_actor_critic/model.py
@@ -88,7 +88,7 @@ def train_step(self, env_xs, env_as, env_rs, env_vs):
         # Compute discounted rewards and advantages.
         advs = []
         gamma, lambda_ = self.config.gamma, self.config.lambda_
-        for i in xrange(len(env_vs)):
+        for i in range(len(env_vs)):
             # Compute advantages using Generalized Advantage Estimation;
             # see eqn. (16) of [Schulman 2016].
             delta_t = (env_rs[i] + gamma*np.array(env_vs[i][1:]) -
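
Taken on its own, the loop above computes per-environment advantages with Generalized Advantage Estimation. The following is a minimal standalone sketch of that computation (not the file's exact code): rewards and values stand in for one environment's env_rs[i] and env_vs[i], with values carrying one extra bootstrap entry.

    import numpy as np

    def gae_advantages(rewards, values, gamma, lambda_):
        # TD residuals: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        rewards = np.asarray(rewards, dtype=np.float64)
        values = np.asarray(values, dtype=np.float64)
        deltas = rewards + gamma * values[1:] - values[:-1]
        # Advantage_t = sum over k >= 0 of (gamma * lambda)^k * delta_{t+k},
        # eqn. (16) of [Schulman 2016], accumulated backwards in time.
        advantages = np.zeros_like(deltas)
        running = 0.0
        for t in reversed(range(len(deltas))):
            running = deltas[t] + gamma * lambda_ * running
            advantages[t] = running
        return advantages
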
diff --git a/example/reinforcement-learning/parallel_actor_critic/train.py b/example/reinforcement-learning/parallel_actor_critic/train.py
index 128a550302..7b78d72205 100644
--- a/example/reinforcement-learning/parallel_actor_critic/train.py
+++ b/example/reinforcement-learning/parallel_actor_critic/train.py
@@ -125,7 +125,7 @@ def save_params(save_pre, model, epoch):
     parser = argparse.ArgumentParser()
     parser.add_argument('--num-envs', type=int, default=16)
     parser.add_argument('--t-max', type=int, default=50)
-    parser.add_argument('--env-type', default='PongDeterministic-v3')
+    parser.add_argument('--env-type', default='PongDeterministic-v4')
     parser.add_argument('--render', action='store_true')
     parser.add_argument('--save-pre', default='checkpoints')
     parser.add_argument('--save-every', type=int, default=0)
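
With the default environment id bumped to the v4 variant, a typical invocation would look roughly like the line below; this is a sketch using only the flags visible in the parser above, with all other options left at their defaults.

    python train.py --num-envs 16 --t-max 50 --env-type PongDeterministic-v4
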


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services