Diffusion models have demonstrated impressive capabilities in synthesizing diverse content. However, despite their high-quality outputs, these models often perpetuate social biases, including those related to gender and race. These biases can contribute to harmful real-world consequences, reinforcing stereotypes and exacerbating inequalities in various social contexts. While existing research on diffusion bias mitigation has predominantly focused on guiding content generation, it often neglects the intrinsic mechanisms within diffusion models that causally drive biased outputs. In this paper, we investigate the internal processes of diffusion models, identifying specific decision-making mechanisms, termed bias features, embedded within the model architecture. By directly manipulating these features, our method precisely isolates and adjusts the elements responsible for bias generation, permitting granular control over the bias levels in the generated content. Through experiments on both unconditional and conditional diffusion models across various social bias attributes, we demonstrate our method's efficacy in controlling the generation distribution while preserving image quality. We also dissect the discovered model mechanisms, revealing distinct intrinsic features that control fine-grained aspects of generation, encouraging further research on the mechanistic interpretability of diffusion models.
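The idea of adjusting an internal feature to shift the generation distribution can be illustrated with a generic activation-steering sketch. This is not the paper's actual method or API: `steer_activation`, `bias_dir`, and `alpha` are hypothetical names, and the paper's bias features are discovered inside the model rather than given as a fixed vector; the sketch only shows the kind of intervention, shifting an intermediate activation along a direction while preserving its magnitude.

```python
import numpy as np

def steer_activation(h, bias_dir, alpha):
    """Shift an internal activation h along a (hypothetical) bias
    feature direction bias_dir with strength alpha, then rescale so
    the activation's overall magnitude is unchanged. Positive and
    negative alpha push the generation toward opposite ends of the
    attribute (e.g., the gender attribute in the paper's experiments)."""
    v = bias_dir / np.linalg.norm(bias_dir)   # unit bias direction
    h_new = h + alpha * v                     # steer along the feature
    return h_new * (np.linalg.norm(h) / np.linalg.norm(h_new))
```

In practice such an edit would be applied to intermediate activations at each denoising step (e.g., via a forward hook in PyTorch), with `alpha` acting as the knob that sets the bias level of the output distribution.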
DiffLens balances generation across the gender attribute.
DiffLens preserves overall image semantics such as smiles and eyeglasses, while other methods frequently introduce distortions or lose important details.
DiffLens achieves smooth and consistent transitions that preserve semantic features like facial expressions, while other methods show distortions and loss of details at higher imbalance ratios.
DiffLens can control the bias level with finer granularity and across a broader range.
Gender
Age
Race
@inproceedings{shi2025dissecting,
  author    = {Shi, Yingdong and Li, Changming and Wang, Yifan and Zhao, Yongxiang and Pang, Anqi and Yang, Sibei and Yu, Jingyi and Ren, Kan},
  title     = {Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025},
}