# ML-Agents Release 16
## Package Versions
**NOTE:** It is strongly recommended that you use packages from the same release together for the best experience.
| Package | Version |
|---|---|
| com.unity.ml-agents (C#) | v1.9.1 |
| com.unity.ml-agents.extensions (C#) | v0.3.1-preview |
| ml-agents (Python) | v0.25.1 |
| ml-agents-envs (Python) | v0.25.1 |
| gym-unity (Python) | v0.25.1 |
| Communicator (C#/Python) | v1.5.0 |
## Major Changes
### ml-agents / ml-agents-envs / gym-unity (Python)
- The `--resume` flag now supports resuming experiments with additional reward providers, or loading partial models if the network architecture has changed. See here for more details. (#5213)
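For reference, resuming picks up where a previous run left off using the same `mlagents-learn` command; the config path and run id below are placeholders, not values from this release:

```shell
# Resume training from the checkpoints saved under the given run id.
# "config/trainer_config.yaml" and "my_run" are hypothetical examples.
mlagents-learn config/trainer_config.yaml --run-id=my_run --resume
```

With this release, the resume also succeeds when the new config adds reward providers or alters the network architecture, in which case only the matching parts of the saved model are loaded.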
## Bug Fixes
### com.unity.ml-agents (C#)
- Fixed erroneous warnings when using the Demonstration Recorder. (#5216)
### ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed an issue which was causing increased variance when using LSTMs. Also fixed an issue with LSTM when used with POCA and `sequence_length` < `time_horizon`. (#5206)
- Fixed a bug where the SAC replay buffer would not be saved out at the end of a run, even if `save_replay_buffer` was enabled. (#5205)
- ELO now correctly resumes when loading from a checkpoint. (#5202)
- In the Python API, fixed `validate_action` to expect the right dimensions when `set_action_single_agent` is called. (#5208)
- In the `GymToUnityWrapper`, raise an appropriate warning if `step()` is called after an environment is done. (#5204)
- Fixed an issue where using one of the `gym` wrappers would override user-set log levels. (#5201)
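The `save_replay_buffer` option mentioned above lives in the trainer configuration file. A minimal sketch of where it sits, assuming a SAC trainer and a placeholder behavior name:

```yaml
behaviors:
  MyBehavior:            # hypothetical behavior name for illustration
    trainer_type: sac
    hyperparameters:
      save_replay_buffer: true   # buffer is now written out at the end of a run (#5205)
```

When enabled, the replay buffer is saved alongside the model checkpoints so that a resumed SAC run does not restart from an empty buffer.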