Issue forked from #87 by @kvas7andy
`learner.epsilon_greedy_search(...)` is often used for training agents with different algorithms, including DQL in `dql_run`. However, `dql_exploit_run`, which takes the network trained in `dql_run` as its policy agent and an `eval_episode_count` parameter for the number of episodes, gives the impression that these runs evaluate the trained DQN. The only difference between the two runs is that epsilon is set to 0, which switches training into exploitation mode but does not stop training: during a run with `learner.epsilon_greedy_search`, `optimizer.step()` is executed on every step via the call to `learner.on_step(...)` in `agent_dql.py`.
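To make the issue concrete, here is a minimal, self-contained sketch (not the actual CyberBattleSim code; class and function names are simplified stand-ins) of how an epsilon-greedy search loop with epsilon = 0 still performs gradient updates as long as `learner.on_step(...)` is called on every step:

```python
import random

class SketchLearner:
    """Stand-in for the DQL learner in agent_dql.py (names are illustrative)."""

    def __init__(self):
        self.gradient_updates = 0

    def exploit(self, observation):
        # Greedy action from the current Q-network.
        return "best_known_action"

    def explore(self):
        return "random_action"

    def on_step(self, observation, reward, done):
        # In the real agent_dql.py this is where the loss is computed and
        # optimizer.step() runs, i.e. the network keeps learning here.
        self.gradient_updates += 1


def epsilon_greedy_search_sketch(learner, episode_count, steps_per_episode, epsilon):
    """Mimics the structure of learner.epsilon_greedy_search(...)."""
    for _ in range(episode_count):
        for _ in range(steps_per_episode):
            if random.random() < epsilon:
                action = learner.explore()
            else:
                action = learner.exploit(observation=None)
            reward, done = 0.0, False          # dummy environment feedback
            # Called unconditionally, even when epsilon == 0 ("exploit run"):
            learner.on_step(observation=None, reward=reward, done=done)


learner = SketchLearner()
epsilon_greedy_search_sketch(learner, episode_count=5, steps_per_episode=10, epsilon=0.0)
print(learner.gradient_updates)  # 50 -> the "evaluation" still trained the network
```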
- Solution: I will include in a pull request the code I used for proper evaluation (based on `learner.epsilon_greedy_search(...)`) and the figures generated below; a minimal sketch of such an evaluation loop is included after the figure captions.
- Screenshots: Figures 1 & 2 and figures 3 & 4 show the results of evaluating the chain network using the corresponding new cell in notebook_benchmark-chain.ipynb. As figure 1 shows, training on the initial 50 episodes is not enough to own 100% of the network (AttackerGoal), whereas the original `dql_exploit_run`, which internally uses `learner.on_step(...)` (figure 2), produces much better results because the optimization process keeps learning from the agent's ongoing experience. We can avoid this inaccurate evaluation and still reach the goal 100% of the time (figure 3) by training on 200 episodes with `learner.on_step()` commented out. This freezes the trained network and stops optimization during evaluation, yet still leads to ownership of the whole network thanks to the larger number of training episodes. In other words, with 200 episodes it is feasible to learn the optimal path of agent attacks inside the chain network configuration.
Lastly, in figure 4 we can compare these runs with correct evaluation runs over 20 episodes, which reach 6000+ and 120+ cumulative reward for 200 and 50 training episodes respectively.
Figure 1: (after PR) no optimizer during evaluation, 20 trained episodes, 20 evaluation episodes
Figure 2: (before & after PR) dql_exploit_run with optimizer during evaluation, 20 trained episodes, 5 evaluation episodes
Figure 3: (after PR) no optimizer during evaluation, 200 trained episodes, 20 evaluation episodes
Figure 4: (after PR) comparison of evaluation for network trained on 200 and 20 episodes, chain network configuration
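For reference, the evaluation loop proposed in the PR follows the same structure as `epsilon_greedy_search` but never calls `learner.on_step(...)`, so no `optimizer.step()` can run. Below is a hypothetical sketch using the same simplified names as the sketch above; the actual PR code is based on `learner.epsilon_greedy_search(...)` and the real CyberBattleSim environment:

```python
def evaluate_sketch(learner, eval_episode_count, steps_per_episode):
    """Greedy rollout of a trained policy with no weight updates (sketch only)."""
    episode_rewards = []
    for _ in range(eval_episode_count):
        total_reward = 0.0
        for _ in range(steps_per_episode):
            action = learner.exploit(observation=None)  # pure exploitation
            reward, done = 0.0, False                   # dummy environment feedback
            total_reward += reward
            # Deliberately no learner.on_step(...) here, hence no optimizer.step():
            # the cumulative reward measures the frozen trained network.
            if done:
                break
        episode_rewards.append(total_reward)
    return episode_rewards


# Driven with the SketchLearner from the sketch above:
print(evaluate_sketch(SketchLearner(), eval_episode_count=20, steps_per_episode=10))
```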
The ToyCTF benchmark is also inaccurate: with the correct evaluation procedure, as used for the chain network configuration, the agent does not reach the goal of 6 owned nodes after 200 training episodes.