# ML-Agents Release 1

## Package Versions
As part of ML-Agents Release 1, we will be versioning the different packages that make up the release.
**NOTE:** It is strongly recommended that you use packages from the same release together for the best experience.
| Package | Version |
|---|---|
| com.unity.ml-agents (C#) | v1.0.0 |
| ml-agents (Python) | v0.16.0 |
| ml-agents-envs (Python) | v0.16.0 |
| gym-unity (Python) | v0.16.0 |
| Communicator (C#/Python) | v1.0.0 |
## Major Changes

### com.unity.ml-agents (C#)
- The `MLAgents` C# namespace was renamed to `Unity.MLAgents`, and other nested namespaces were similarly renamed. (#3843)
- The offset logic was removed from `DecisionRequester`. (#3716)
- The signature of `Agent.Heuristic()` was changed to take a float array as a parameter, instead of returning the array. This was done to prevent a common source of error where users would return arrays of the wrong size. (#3765)
- The communication API version has been bumped up to 1.0.0 and will use Semantic Versioning to do compatibility checks for communication between Unity and the Python process. (#3760)
- The obsolete `Agent` methods `GiveModel`, `Done`, `InitializeAgent`, `AgentAction` and `AgentReset` have been removed. (#3770)
- The SideChannel API has changed:
  - Introduced the `SideChannelManager` to register, unregister and access side channels. (#3807)
  - `Academy.FloatProperties` was replaced by `Academy.EnvironmentParameters`. See the Migration Guide for more details on upgrading. (#3807)
  - `SideChannel.OnMessageReceived` is now a protected method (was public). (#3807)
  - SideChannel `IncomingMessages` methods now take an optional default argument, which is used when trying to read more data than the message contains. (#3751)
- Added a feature to allow sending stats from C# environments to TensorBoard (and other python StatsWriters). To do this from your code, use `Academy.Instance.StatsRecorder.Add(key, value)`. (#3660)
- `CameraSensorComponent.m_Grayscale` and `RenderTextureSensorComponent.m_Grayscale` were changed from `public` to `private`. These can still be accessed via their corresponding properties. (#3808)
- Public fields and properties on several classes were renamed to follow Unity's C# style conventions. All public fields and properties now use "PascalCase" instead of "camelCase"; for example, `Agent.maxStep` was renamed to `Agent.MaxStep`. For a full list of changes, see the pull request. (#3828)
- `WriteAdapter` was renamed to `ObservationWriter`. If you have a custom `ISensor` implementation, you will need to change the signature of its `Write()` method. (#3834)
- The Barracuda dependency was upgraded to 0.7.0-preview (which has breaking namespace and assembly name changes). (#3875)
### ml-agents / ml-agents-envs / gym-unity (Python)
- The `--load` and `--train` command-line flags have been deprecated. Training now happens by default; use `--resume` to resume training instead of `--load`. (#3705)
- The Jupyter notebooks have been removed from the repository. (#3704)
- The multi-agent gym option was removed from the gym wrapper. For multi-agent scenarios, use the Low Level Python API. (#3681)
- The low level Python API has changed. See the Low Level Python API documentation for more information. If you use `mlagents-learn` for training, this should be a transparent change. (#3681)
- Added ability to start training (initialize model weights) from a previous run ID. (#3710)
- The GhostTrainer has been extended to support asymmetric games and the asymmetric example environment Strikers Vs. Goalie has been added. (#3653)
- The `UnityEnv` class from the `gym-unity` package was renamed `UnityToGymWrapper` and no longer creates the `UnityEnvironment`. Instead, the `UnityEnvironment` must be passed as input to the constructor of `UnityToGymWrapper` (see the sketch after this list). (#3812)
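A minimal sketch of the renamed gym wrapper in use, assuming a local environment build; the build name `3DBall` is only a placeholder, and you can pass `file_name=None` instead to connect to the Editor.

```python
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# Create the UnityEnvironment first (file_name is a placeholder build path),
# then hand it to the wrapper; UnityToGymWrapper no longer creates it for you.
unity_env = UnityEnvironment(file_name="3DBall")
env = UnityToGymWrapper(unity_env)

obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
env.close()  # also shuts down the underlying UnityEnvironment
```

Constructing the `UnityEnvironment` yourself keeps environment configuration (ports, worker IDs, side channels) in one place rather than duplicating those options on the wrapper's constructor.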
## Minor Changes

### com.unity.ml-agents (C#)
- Added new 3-joint Worm ragdoll environment. (#3798)
- `StackingSensor` was changed from `internal` visibility to `public`. (#3701)
- The internal event `Academy.AgentSetStatus` was renamed to `Academy.AgentPreStep` and made public. (#3716)
- `Academy.InferenceSeed` property was added. This is used to initialize the random number generator in `ModelRunner`, and is incremented for each `ModelRunner`. (#3823)
- `Agent.GetObservations()` was added, which returns a read-only view of the observations added in `CollectObservations()`. (#3825)
- `UnityRLCapabilities` was added to help inform users when RL features are mismatched between C# and Python packages. (#3831)
### ml-agents / ml-agents-envs / gym-unity (Python)
- Format of console output has changed slightly and now matches the name of the model/summary directory. (#3630, #3616)
- Renamed 'Generalization' feature to 'Environment Parameter Randomization'. (#3646)
- Timer files now contain a dictionary of metadata, including things like the package version numbers. (#3758)
- The way that UnityEnvironment decides the port was changed. If no port is specified, the behavior will depend on the `file_name` parameter. If it is `None`, 5004 (the editor port) will be used; otherwise 5005 (the base environment port) will be used (see the sketch after this list). (#3673)
- Running `mlagents-learn` with the same `--run-id` twice will no longer overwrite the existing files. (#3705)
- Model updates can now happen asynchronously with environment steps for better performance. (#3690)
- `num_updates` and `train_interval` for SAC were replaced with `steps_per_update`. (#3690)
- The maximum compatible version of tensorflow was changed to allow tensorflow 2.1 and 2.2. This will allow use with python 3.8 using tensorflow 2.2.0rc3. (#3830)
- `mlagents-learn` will no longer set the width and height of the executable window to 84x84 when no width nor height arguments are given. (#3867)
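A short sketch of the port selection behavior described above; the build name `MyBuild` is a placeholder, and in practice only one of these environments would be created at a time.

```python
from mlagents_envs.environment import UnityEnvironment

# file_name=None: no build is launched; the environment waits for the
# Unity Editor to connect on the editor port (5004).
editor_env = UnityEnvironment(file_name=None)

# With a file_name: the build is launched and the base environment port
# (5005) is used, unless a port is passed explicitly (e.g. via base_port).
built_env = UnityEnvironment(file_name="MyBuild")
```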
## Bug Fixes

### com.unity.ml-agents (C#)
- Fixed a display bug when viewing Demonstration files in the inspector. The shapes of the observations in the file now display correctly. (#3771)
### ml-agents / ml-agents-envs / gym-unity (Python)
- Fixed an issue where exceptions from environments provided a return code of 0. (#3680)