OpenAI Gym: Pendulum-v0¶
This notebook demonstrates how grammar-guided genetic programming (G3P) can be used to solve the Pendulum-v0 problem from OpenAI Gym. This is achieved by searching for a small program that defines an agent that uses an algebraic expression of the observed variables to decide which action to take at each moment.
Caution: This notebook was run with gym v0.20.0 (pip install gym==0.20.0) and pyglet v1.5.27 (pip install pyglet==1.5.27). Gym deprecated "Pendulum-v0" between v0.20.0 and v0.21.0 and changed its API between v0.25.2 and v0.26.0. Pyglet changed its API between v1.5.27 and v2.0.0.
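For orientation only, the sketch below (not part of the original notebook, which assumes the old gym v0.20.0 API) illustrates how the reset/step return values changed in gym v0.26.0; the helper names reset_compat and step_compat are illustrative.

# Illustrative compatibility helpers, assuming either the old (<0.26) or new (>=0.26) Gym API.
def reset_compat(env):
    result = env.reset()
    # New API returns (observation, info); old API returns only the observation.
    return result[0] if isinstance(result, tuple) else result

def step_compat(env, action):
    result = env.step(action)
    if len(result) == 5:
        # New API: (observation, reward, terminated, truncated, info)
        observation, reward, terminated, truncated, info = result
        return observation, reward, terminated or truncated, info
    # Old API: (observation, reward, done, info)
    return result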
References¶
OpenAI Gym website
Classic problems from control theory: an overview of environments
Pendulum-v0: the environment solved here
GitHub
Pendulum-v1: details on the next version of the environment solved here
Leaderboard: community wiki to track user-provided solutions
Example solution: a fixed policy written by Zhiqing Xiao
[1]:
import time
import warnings
import alogos as al
import gym
import numpy as np
import unified_map as um
[2]:
warnings.filterwarnings('ignore')
Preparation¶
1) Environment¶
Pendulum-v0: The aim is to swing up a frictionless pendulum and keep it standing upright, starting from a random position and velocity. The agent observes the current position and velocity of the pendulum. It can act by applying a limited torque to the joint (a continuous value between -2 and +2).
[3]:
env = gym.make('Pendulum-v0')
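As a quick sanity check (not part of the original notebook), the observation and action spaces can be inspected directly; for Pendulum-v0 the observation is [cos(theta), sin(theta), angular velocity] and the action is a single torque value in [-2, +2].

# Optional inspection of the spaces (illustrative, not in the original notebook).
# Observation: [cos(theta), sin(theta), angular velocity]
# Action: a single torque value, limited to [-2, +2] by the environment.
print(env.observation_space)
print(env.action_space)
print(env.action_space.low, env.action_space.high)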
2) Functions to run single or multiple simulations¶
These functions let an agent act in an environment and collect rewards until the environment signals that it is done.
[4]:
def simulate_single_run(env, agent, render=False):
    observation = env.reset()
    episode_reward = 0.0
    while True:
        action = agent.decide(observation)
        observation, reward, done, info = env.step(action)
        episode_reward += reward
        if render:
            time.sleep(0.03)
            env.render()
        if done:
            break
    env.close()
    return episode_reward
[5]:
def simulate_multiple_runs(env, agent, n):
    total_reward = sum(simulate_single_run(env, agent) for _ in range(n))
    mean_reward = total_reward / n
    return mean_reward
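As a simple usage example (not part of the original notebook), the helpers can be exercised with a trivial agent that never applies torque; its mean reward gives a rough baseline to compare the hand-written and evolved policies against.

# Illustrative baseline: an agent that always applies zero torque.
class ZeroTorqueAgent:
    def decide(self, observation):
        return np.array([0.0])

# Expect a clearly worse mean reward than the agents shown below.
simulate_multiple_runs(env, ZeroTorqueAgent(), 10)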
Example solutions¶
[6]:
num_sim = 500
1) By Zhiqing Xiao¶
[7]:
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        flip = (y < 0.)
        if flip:
            y *= -1.  # now y >= 0
            angle_velocity *= -1.
        angle = np.arcsin(y)
        if x < 0.:
            angle = np.pi - angle
        if (angle < -0.3 * angle_velocity) or \
                (angle > 0.03 * (angle_velocity - 2.5) ** 2. + 1. and \
                 angle < 0.15 * (angle_velocity + 3.) ** 2. + 2.):
            force = 2.
        else:
            force = -2.
        if flip:
            force *= -1.
        action = np.array([force,])
        return action

agent = Agent()
simulate_multiple_runs(env, agent, num_sim)
[7]:
-146.6261378431444
2) By previous runs of evolutionary optimization¶
[8]:
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = (6.46/((4.45**(5.67/8.42))/((((y-y)*1.50)-(((x/x)/x)*angle_velocity))-((5.40*x)*y))))
        action = [min(max(output, -2.0), 2.0)]
        return action

agent = Agent()
simulate_multiple_runs(env, agent, num_sim)
[8]:
-185.80858672911128
[9]:
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = ((x/(((2.29-4.83)+y)/(angle_velocity+(8.50*(9.86/0.28)))))*(y+angle_velocity))
        action = [min(max(output, -2.0), 2.0)]
        return action

agent = Agent()
simulate_multiple_runs(env, agent, num_sim)
[9]:
-216.15272672117865
[10]:
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = (((((7.05/(x+(6.66/1.04)))-angle_velocity)-((y+y)+x))*3.04)/x)
        action = [min(max(output, -2.0), 2.0)]
        return action

agent = Agent()
simulate_multiple_runs(env, agent, num_sim)
[10]:
-264.6755941288853
[11]:
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = ((((2.05*x)-x)*((x-6.40)-(angle_velocity/y)))/y)
        action = [min(max(output, -2.0), 2.0)]
        return action

agent = Agent()
simulate_multiple_runs(env, agent, num_sim)
[11]:
-322.65006620221567
Definition of search space and goal¶
1) Grammar¶
This grammar defines the search space: a Python program that creates an Agent that uses an algebraic expression of the observed variables to decide how to act in each situation.
[12]:
ebnf_text = """
program = L0 NL L1 NL L2 NL L3 NL L4 NL L5
L0 = "class Agent:"
L1 = " def decide(self, observation):"
L2 = " x, y, angle_velocity = observation"
L3 = " output = " EXPR
L4 = " action = [min(max(output, -2.0), 2.0)]"
L5 = " return action"
NL = "\n"
EXPR = VAR | CONST | "(" EXPR OP EXPR ")"
VAR = "x" | "y" | "angle_velocity"
CONST = DIGIT "." DIGIT DIGIT
OP = "+" | "-" | "*" | "/" | "**"
DIGIT = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
"""
grammar = al.Grammar(ebnf_text=ebnf_text)
2) Objective function¶
The objective function gets a candidate solution (= a string of the grammar's language) and returns a fitness value for it. This is done by 1) executing the string as a Python program, so that it creates an agent object, and then 2) using the agent in multiple simulations to see how well it handles different situations: the higher the mean reward, the better the candidate.
[13]:
def string_to_agent(string):
    local_vars = dict()
    exec(string, None, local_vars)
    Agent = local_vars['Agent']
    return Agent()


def objective_function(string):
    agent = string_to_agent(string)
    avg_reward = simulate_multiple_runs(env, agent, 15)
    return avg_reward
Generation of a random solution¶
Check if the grammar and the objective function work as intended. Note that a randomly generated expression can evaluate to a non-finite value (e.g. through a division by zero), so the objective function may return nan for some candidates.
[14]:
random_string = grammar.generate_string()
print(random_string)
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = ((angle_velocity/y)**angle_velocity)
        action = [min(max(output, -2.0), 2.0)]
        return action
[15]:
objective_function(random_string)
[15]:
nan
Search for an optimal solution¶
Evolutionary optimization with random variation and non-random selection is used to find increasingly better candidate solutions.
1) Parameterization¶
[16]:
ea = al.EvolutionaryAlgorithm(
    grammar, objective_function, 'max',
    max_or_min_fitness=-180, population_size=50, offspring_size=50,
    evaluator=um.univariate.parallel.futures, verbose=True)
2) Run¶
[17]:
best_ind = ea.run()
Progress         Generations      Evaluations      Runtime (sec)    Best fitness
..... .....      10               369              25.2             -1186.9689994988496
..... .....      20               540              33.5             -727.2836108335539
..... .....      30               836              47.9             -378.6647331268654
..... .....      40               1193             67.7             -286.09358866928585
..... .....      50               1465             81.4             -240.8040736173183
..... .....      60               1780             97.0             -239.95336646733077
..... ..

Finished         67               1990             111.1            -173.8957367860315
3) Result¶
[18]:
string = best_ind.phenotype
print(string)
class Agent:
    def decide(self, observation):
        x, y, angle_velocity = observation
        output = (((y*((angle_velocity*8.64)-5.86))-y)-(angle_velocity/(0.05*x)))
        action = [min(max(output, -2.0), 2.0)]
        return action
[19]:
agent = string_to_agent(string)
simulate_multiple_runs(env, agent, 100)
[19]:
-232.48567351045173
[20]:
simulate_single_run(env, agent, render=True)
[20]:
-375.62393381779486
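If the evolved program should be reused later, the phenotype string can simply be written to a file and turned back into an agent with string_to_agent. A minimal sketch (not part of the original notebook; the file name is illustrative):

# Optional: persist the evolved program for later reuse.
with open('evolved_agent.py', 'w') as f:
    f.write(string)

# Later, read the saved source back and rebuild the agent from it.
with open('evolved_agent.py') as f:
    agent = string_to_agent(f.read())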