Changelog

This is the changelog of Diffusion Gym.


Version 2.0.1

  • Added an option to specify the CFG scale in env.sample.

  • Moved keyword arguments to the CPU before returning from env.sample.
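
The new per-call CFG scale can be pictured with a minimal, self-contained sketch in plain Python. The class, field, and default values below are illustrative assumptions, not the actual Diffusion Gym API:

```python
from typing import Optional

# Minimal sketch (not the real API): a sample method that accepts an
# optional cfg_scale keyword overriding the configured default.
class Env:
    def __init__(self, default_cfg_scale: float = 7.5):
        self.default_cfg_scale = default_cfg_scale

    def sample(self, n: int, cfg_scale: Optional[float] = None) -> dict:
        # Fall back to the configured scale when none is given.
        scale = self.default_cfg_scale if cfg_scale is None else cfg_scale
        return {"n": n, "cfg_scale": scale}

env = Env()
print(env.sample(4)["cfg_scale"])                 # 7.5
print(env.sample(4, cfg_scale=3.0)["cfg_scale"])  # 3.0
```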

Version 2.0.0

  • Renamed the project to Diffusion Gym.

  • Added min_cfg_scale and max_cfg_scale arguments to the Stable Diffusion base model for random guidance scales during sampling.

  • Added optional pred argument to BaseModel.train_loss.

  • Renamed variables to use DD (short for Diffusion Data) instead of Flow: FlowProtocol -> DDProtocol, FlowMixin -> DDMixin, FlowTensor -> DDTensor, FlowDataset -> DDDataset, FlowGraph -> DDGraph.
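
The random-guidance-scale feature amounts to drawing a scale uniformly from the configured range on each call. A toy sketch follows; the argument names min_cfg_scale and max_cfg_scale come from the changelog, while everything else (class name, defaults, function name) is an assumption:

```python
import random
from dataclasses import dataclass

# Illustrative only: draw a per-call guidance scale uniformly from
# [min_cfg_scale, max_cfg_scale]. GuidanceRange is a hypothetical name.
@dataclass
class GuidanceRange:
    min_cfg_scale: float = 1.0
    max_cfg_scale: float = 7.5

def draw_cfg_scale(cfg: GuidanceRange, rng: random.Random) -> float:
    return rng.uniform(cfg.min_cfg_scale, cfg.max_cfg_scale)

scale = draw_cfg_scale(GuidanceRange(), random.Random(0))
```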

Version 1.13

  • Environment[D].sample now outputs a Sample[D] data class instead of a tuple.

  • Added Environment[D].batch_sample, which samples in batches and returns a single Sample[D].

  • No longer need to specify whether Reward[D] is defined over latent space or sample space; Reward[D].__call__ now takes both as input.
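
The Sample[D] container and the two-input reward can be sketched in a few lines of plain Python. The field names and the toy reward below are assumptions for illustration, not the real Diffusion Gym definitions:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

D = TypeVar("D")

# Sketch of a Sample[D]-style container holding both representations
# a reward might need; field names are assumptions.
@dataclass
class Sample(Generic[D]):
    latent: D   # final latent state
    output: D   # postprocessed sample

def reward(latent, output) -> float:
    # A reward is free to use either representation; this toy one
    # scores only the postprocessed output.
    return float(len(output))

s = Sample(latent="zzz", output="CCO")
print(reward(s.latent, s.output))  # 3.0
```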

Version 1.12

  • Added DummyReward for only sampling from a base model.

  • Added xt and t arguments to BaseModel.train_loss.

  • Added rewards for more quantum chemistry properties returned by GFN2-xTB.

Version 1.11

  • Added keyword argument support for train_base_model.

  • Removed the timestep-dependent loss for FlowMol in the train_loss method, making it much more memory- and time-efficient.

  • Introduced prompts argument to Stable Diffusion to customize training prompts.

  • Made classifier-free guidance scale part of the base model configuration, rather than inference.

  • Corrected FlowGraph.randn_like() to ensure proper sampling from zero-mean Gaussian positions and upper edge masking.
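
The idea behind sampling zero-mean Gaussian positions can be shown with a toy, dependency-free sketch: draw i.i.d. Gaussians, then subtract each coordinate's mean so the point cloud is centered at the origin. The function name and shapes here are illustrative, not the FlowGraph.randn_like implementation:

```python
import random

# Toy sketch of mean-free Gaussian position sampling: draw i.i.d.
# Gaussian coordinates, then center each coordinate at zero.
def randn_positions(n: int, dim: int, rng: random.Random):
    pos = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for d in range(dim):
        mean = sum(p[d] for p in pos) / n
        for p in pos:
            p[d] -= mean
    return pos

pos = randn_positions(5, 3, random.Random(0))
# Each coordinate of the 5 points now sums to (numerically) zero.
```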

Version 1.9

  • Fixed the to method of FlowGraph to modify the graph itself rather than the underlying data.

  • Updated the reward calculation in BaseEnvironment to handle rewards defined over both latent spaces and postprocessed samples.

  • Corrected the aggregation method in Epsilon, Score, and Velocity environments to use “sum” instead of the default aggregation.

  • Improved the reduction logic in FlowGraph to properly compute reduction="mean".

Version 1.8

  • Added reduction parameter to aggregate method in FlowProtocol.
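
The reduction parameter can be illustrated with a minimal standalone sketch: "sum" adds the values, "mean" divides by their count. Apart from the reduction keyword named in the changelog, the signature is an assumption, not the FlowProtocol interface:

```python
from typing import Sequence

# Minimal sketch of an aggregate with a reduction parameter;
# only the "reduction" name comes from the changelog.
def aggregate(values: Sequence[float], reduction: str = "sum") -> float:
    total = sum(values)
    if reduction == "sum":
        return total
    if reduction == "mean":
        return total / len(values) if values else 0.0
    raise ValueError(f"unknown reduction: {reduction!r}")

print(aggregate([1.0, 2.0, 3.0]))                    # 6.0
print(aggregate([1.0, 2.0, 3.0], reduction="mean"))  # 2.0
```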

Version 1.7

  • Fixed train_base_model so that it no longer trains on only a single sample per iteration.

Version 1.6

  • Made it much easier to add new data types by reducing the number of methods required to implement one.

  • Added functionality for indexing and collating data types.

  • Added a general training function train_base_model.

  • Dependencies should be easier to manage now.
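
The kind of small surface a new data type might implement, indexing plus a collate hook for batching, can be sketched as follows. The class and method names are hypothetical, not the actual Diffusion Gym interface:

```python
# Hypothetical sketch of a minimal data type: __getitem__ for indexing
# and a collate hook that merges several instances into one batch.
class DataList:
    def __init__(self, items):
        self.items = list(items)

    def __getitem__(self, i):
        return self.items[i]

    def __len__(self):
        return len(self.items)

    @classmethod
    def collate(cls, batch):
        # Concatenate several instances into one batched instance.
        return cls([x for b in batch for x in b.items])

batch = DataList.collate([DataList([1, 2]), DataList([3])])
print(batch.items)  # [1, 2, 3]
```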

Version 1.3

  • Added x0 parameter to FlowEnv.sample_trajectories to allow specifying initial states for trajectory sampling.
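
The optional x0 parameter can be pictured with a toy stand-in: when x0 is None the trajectory starts from Gaussian noise, otherwise from the given state. Everything here except the x0 name is an assumption; the update rule is a placeholder, not the real sampler:

```python
import random
from typing import List, Optional

# Toy sketch of an optional x0 initial state for trajectory sampling.
def sample_trajectories(steps: int, x0: Optional[float] = None,
                        rng: Optional[random.Random] = None) -> List[float]:
    rng = rng or random.Random()
    x = rng.gauss(0.0, 1.0) if x0 is None else x0  # default: noise init
    traj = [x]
    for _ in range(steps):
        x = x + 0.1 * rng.gauss(0.0, 1.0)  # placeholder update step
        traj.append(x)
    return traj

traj = sample_trajectories(4, x0=0.0, rng=random.Random(1))
# traj[0] is the supplied initial state; len(traj) == steps + 1
```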