Changelog
This is the changelog of Diffusion Gym.
Version 2.0.1
- Added an option to specify the CFG scale in `env.sample`.
- Moved keyword arguments to CPU before returning from `env.sample`.
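The 2.0.1 behavior can be sketched with a stand-in tensor class. The real `env.sample` signature is not shown in this changelog, so the `cfg_scale` default and the `FakeTensor` helper below are illustrative assumptions, not the actual API:

```python
class FakeTensor:
    """Stand-in for a framework tensor; only tracks its device."""
    def __init__(self, device="cuda"):
        self.device = device

    def cpu(self):
        return FakeTensor(device="cpu")


def sample(cfg_scale=7.5, **kwargs):
    """Hypothetical env.sample: accepts a CFG scale and returns any
    tensor keyword arguments moved to CPU (the 2.0.1 behavior)."""
    moved = {k: (v.cpu() if isinstance(v, FakeTensor) else v)
             for k, v in kwargs.items()}
    # ... sampling with guidance scale `cfg_scale` would happen here ...
    return moved


out = sample(cfg_scale=5.0, latents=FakeTensor("cuda"), seed=0)
print(out["latents"].device)  # cpu
print(out["seed"])            # 0
```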
Version 2.0.0
Re-named the project to Diffusion Gym.
- Added `min_cfg_scale` and `max_cfg_scale` arguments to the Stable Diffusion base model for random guidance scales during sampling.
- Added an optional `pred` argument to `BaseModel.train_loss`.
- Re-named variables to use `DD` (short for Diffusion Data) instead of `Flow`, i.e. `FlowProtocol` -> `DDProtocol`, `FlowMixin` -> `DDMixin`, `FlowTensor` -> `DDTensor`, `FlowDataset` -> `DDDataset`, `FlowGraph` -> `DDGraph`.
Version 1.13
- `Environment[D].sample` now outputs a `Sample[D]` data class instead of a tuple.
- Added `Environment[D].batch_sample` to sample in batches and output a single `Sample[D]`.
- No longer need to specify whether `Reward[D]` is defined over latent space or sample space; `Reward[D].__call__` takes both as input.
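The new shape of the API can be sketched as follows. The changelog does not list the fields of `Sample[D]`, so the `latent`/`sample` field names and the toy reward are illustrative assumptions; the only confirmed behavior is that the reward callable now receives both representations:

```python
from dataclasses import dataclass
from typing import Any, Generic, TypeVar

D = TypeVar("D")


@dataclass
class Sample(Generic[D]):
    """Sketch of the Sample[D] container replacing the old tuple output.
    Field names are assumptions, not the actual definition."""
    latent: D      # value in latent space
    sample: Any    # postprocessed sample


def reward(latent, sample):
    """Toy Reward.__call__: receives BOTH latent and sample, and is free
    to use either one -- no flag needed to pick the space."""
    return float(len(sample))  # this reward happens to use the sample


s = Sample(latent=[0.1, -0.2], sample="CCO")
print(reward(s.latent, s.sample))  # 3.0
```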
Version 1.12
- Added `DummyReward` for only sampling from a base model.
- Added `xt` and `t` arguments to `BaseModel.train_loss`.
- Added rewards for more quantum chemistry properties returned by GFN2-xTB.
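A `DummyReward` for pure base-model sampling is presumably a constant reward; a minimal sketch under that assumption (the call signature mirrors the two-argument reward convention above and is itself an assumption):

```python
class DummyReward:
    """Sketch of a no-op reward: returns a constant so the environment
    simply samples from the base model without reward guidance."""

    def __call__(self, latent, sample):
        return 0.0


r = DummyReward()
print(r(latent=None, sample="anything"))  # 0.0
```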
Version 1.11
- Added keyword argument support for `train_base_model`.
- Removed the timestep-dependent loss for FlowMol in the `train_loss` method, which is much more memory- and time-efficient.
- Introduced a `prompts` argument to Stable Diffusion to customize training prompts.
- Made the classifier-free guidance scale part of the base model configuration rather than an inference-time setting.
- Corrected `FlowGraph.randn_like()` to ensure proper sampling of zero-mean Gaussian positions and upper edge masking.
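The zero-mean property that the `randn_like()` fix guarantees can be illustrated in plain Python: draw Gaussian positions, then subtract the per-coordinate mean so the point cloud is exactly centered. This is a generic sketch of the technique, not the library's implementation:

```python
import random


def randn_zero_mean(n, dim=3, seed=0):
    """Sample n Gaussian positions, then subtract the mean of each
    coordinate so the point cloud has exactly zero mean (zero center
    of mass) -- an illustrative sketch of the property being fixed."""
    rng = random.Random(seed)
    pts = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    means = [sum(p[d] for p in pts) / n for d in range(dim)]
    return [[p[d] - means[d] for d in range(dim)] for p in pts]


pts = randn_zero_mean(5)
col_means = [sum(p[d] for p in pts) / len(pts) for d in range(3)]
print(all(abs(m) < 1e-9 for m in col_means))  # True
```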
Version 1.9
- Fixed the `to` method of `FlowGraph` to change the graph itself rather than the underlying data.
- Updated the reward calculation in `BaseEnvironment` to handle rewards defined over both latent spaces and postprocessed samples.
- Corrected the aggregation method in the `Epsilon`, `Score`, and `Velocity` environments to use "sum" instead of the default aggregation.
- Improved the reduction logic in `FlowGraph` to properly compute `reduction="mean"`.
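The difference between the "sum" and "mean" reductions mentioned above can be sketched for per-graph aggregation of node values. The semantics are assumed (the changelog does not show `FlowGraph`'s internals); the point is only that "mean" divides each graph's sum by its own node count:

```python
def aggregate(per_graph_values, reduction="sum"):
    """Illustrative per-graph aggregation: `per_graph_values` holds one
    list of node values per graph. "mean" normalizes each graph's sum
    by that graph's node count -- the behavior of the 1.9 fix."""
    if reduction == "sum":
        return [sum(v) for v in per_graph_values]
    if reduction == "mean":
        return [sum(v) / len(v) for v in per_graph_values]
    raise ValueError(f"unknown reduction: {reduction!r}")


vals = [[1.0, 2.0, 3.0], [4.0, 6.0]]
print(aggregate(vals, reduction="sum"))   # [6.0, 10.0]
print(aggregate(vals, reduction="mean"))  # [2.0, 5.0]
```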
Version 1.8
- Added a `reduction` parameter to the `aggregate` method in `FlowProtocol`.
Version 1.7
- Fixed `train_base_model` to train on more than a single sample per iteration.
Version 1.6
- Made it much easier to add new data types by reducing the number of methods required to implement.
- Added functionality for indexing and collating data types.
- Added a general training function, `train_base_model`.
- Dependencies should be easier to manage now.
Version 1.3
- Added an `x0` parameter to `FlowEnv.sample_trajectories` to allow specifying initial states for trajectory sampling.