

Reinforcement Learning - Vanilla Policy Gradient (VPG)


Contents

  • Background
    • Quick Facts
    • Key Equations
    • Exploration vs. Exploitation
    • Pseudocode
  • Documentation
  • References

Background

The key idea behind policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return, until you arrive at the optimal policy.

Quick Facts

  • VPG is an on-policy algorithm.
  • VPG can be used in environments with either discrete or continuous action spaces.
  • VPG can be parallelized with MPI.

Key Equations

Let π_θ denote a policy with parameters θ, and J(π_θ) denote the expected finite-horizon undiscounted return of the policy. The gradient of J(π_θ) is

$$\nabla_{\theta} J(\pi_{\theta}) = \underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t|s_t)\, A^{\pi_{\theta}}(s_t,a_t) \right]$$

where τ is a trajectory and A^{π_θ} is the advantage function for the current policy.

The policy gradient algorithm updates the policy parameters by stochastic gradient ascent on policy performance:

$$\theta_{k+1} = \theta_k + \alpha \nabla_{\theta} J(\pi_{\theta_k})$$

Policy gradient implementations typically compute advantage function estimates based on the infinite-horizon discounted return, despite otherwise using the finite-horizon undiscounted policy gradient formula.
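To make the estimator concrete, here is a minimal PyTorch sketch of a surrogate loss whose gradient is the sample-based policy gradient estimate above. This is an illustration only, not the Spinning Up implementation documented below (which targets TensorFlow); the tensor names logp and adv are assumptions for the example.

import torch

def vpg_policy_loss(logp: torch.Tensor, adv: torch.Tensor) -> torch.Tensor:
    # logp: log pi_theta(a_t | s_t) for the sampled actions, produced by a
    #       differentiable policy network
    # adv:  advantage estimates A^{pi_theta}(s_t, a_t), treated as constants
    # Gradient ascent on E[logp * adv] equals gradient descent on the negative mean.
    return -(logp * adv.detach()).mean()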

Exploration vs. Exploitation

VPG trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both the initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards it has already found. This may cause the policy to get trapped in local optima.
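For a discrete action space, this exploration mechanism is simply a random draw from the policy's action distribution. Below is a minimal PyTorch sketch; the network name policy_net is a placeholder for illustration, not Spinning Up code.

import torch
from torch.distributions import Categorical

def sample_action(policy_net, obs: torch.Tensor):
    # policy_net is assumed to map an observation to unnormalized action logits.
    logits = policy_net(obs)            # shape: [num_actions]
    dist = Categorical(logits=logits)   # softmax distribution over actions
    action = dist.sample()              # stochastic draw -> exploration
    return action.item(), dist.log_prob(action)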

Pseudocode
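A plain-text restatement of the VPG training loop (the wording is mine, but the steps follow the standard VPG-with-value-baseline algorithm that the parameters documented below correspond to):

  1. Input: initial policy parameters θ_0 and initial value function parameters φ_0.
  2. For k = 0, 1, 2, …:
  3.   Collect a set of trajectories D_k = {τ_i} by running policy π_{θ_k} in the environment.
  4.   Compute the rewards-to-go R̂_t.
  5.   Compute advantage estimates Â_t (e.g. with GAE-Lambda) based on the current value function V_{φ_k}.
  6.   Estimate the policy gradient: ĝ_k = (1/|D_k|) Σ_{τ∈D_k} Σ_t ∇_θ log π_θ(a_t|s_t)|_{θ_k} Â_t.
  7.   Update the policy by gradient ascent, θ_{k+1} = θ_k + α_k ĝ_k, or with an optimizer such as Adam.
  8.   Fit the value function by regression on mean-squared error, φ_{k+1} = argmin_φ (1/(|D_k| T)) Σ_{τ∈D_k} Σ_t (V_φ(s_t) − R̂_t)², typically via gradient descent.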

Documentation

spinup.vpg(env_fn, actor_critic=<function mlp_actor_critic>, ac_kwargs={}, seed=0, steps_per_epoch=4000, epochs=50, gamma=0.99, pi_lr=0.0003, vf_lr=0.001, train_v_iters=80, lam=0.97, max_ep_len=1000, logger_kwargs={}, save_freq=10)
Parameters:

  • env_fn – A function which creates a copy of the environment. The environment must satisfy the OpenAI Gym API.
  • actor_critic – A function which takes in placeholder symbols for state, x_ph, and action, a_ph, and returns the main outputs from the agent’s Tensorflow computation graph.
  • ac_kwargs (dict) – Any kwargs appropriate for the actor_critic function you provided to VPG.
  • seed (int) – Seed for random number generators.
  • steps_per_epoch (int) – Number of steps of interaction (state-action pairs) between the agent and the environment in each epoch.
  • epochs (int) – Number of epochs of interaction (equivalent to number of policy updates) to perform.
  • gamma (float) – Discount factor. (Always between 0 and 1.)
  • pi_lr (float) – Learning rate for policy optimizer.
  • vf_lr (float) – Learning rate for value function optimizer.
  • train_v_iters (int) – Number of gradient descent steps to take on value function per epoch.
  • lam (float) – Lambda for GAE-Lambda. (Always between 0 and 1, close to 1.)
  • max_ep_len (int) – Maximum length of trajectory / episode / rollout.
  • logger_kwargs (dict) – Keyword args for EpochLogger.
  • save_freq (int) – How often (in terms of gap between epochs) to save the current policy and value function.
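A minimal usage sketch, assuming the TensorFlow release of Spinning Up is installed (so that from spinup import vpg works) and that a Gym environment id such as 'CartPole-v0' is available; the exact import path and ac_kwargs keys depend on the installed version, so treat this as an illustration rather than an official example.

import gym
from spinup import vpg

# VPG expects a function that constructs a fresh copy of the environment.
env_fn = lambda: gym.make('CartPole-v0')

vpg(env_fn=env_fn,
    ac_kwargs=dict(hidden_sizes=(64, 64)),   # forwarded to actor_critic
    seed=0,
    steps_per_epoch=4000,
    epochs=50,
    gamma=0.99,
    logger_kwargs=dict(output_dir='out/vpg_cartpole', exp_name='vpg_cartpole'))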

References

Policy Gradient Methods for Reinforcement Learning with Function Approximation, Sutton et al. 2000

Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs, Schulman 2016(a)

Benchmarking Deep Reinforcement Learning for Continuous Control, Duan et al. 2016

High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016(b)
