Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability

The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025

1 ShanghaiTech University, 2 Stony Brook University, 3 Tencent PCG
Equal contribution
*Corresponding author

Abstract

Diffusion models have demonstrated impressive capabilities in synthesizing diverse content. Despite their high-quality outputs, however, these models often perpetuate social biases, including those related to gender and race. Such biases can contribute to harmful real-world consequences, reinforcing stereotypes and exacerbating inequalities in various social contexts. Existing research on diffusion bias mitigation has predominantly focused on guiding content generation, neglecting the intrinsic mechanisms within diffusion models that causally drive biased outputs. In this paper, we investigate the internal processes of diffusion models, identifying specific decision-making mechanisms, termed bias features, embedded within the model architecture. By directly manipulating these features, our method precisely isolates and adjusts the elements responsible for bias generation, permitting granular control over the bias level of the generated content. Through experiments on both unconditional and conditional diffusion models across various social bias attributes, we demonstrate our method's efficacy in controlling the generation distribution while preserving image quality. We also dissect the discovered model mechanism, revealing distinct intrinsic features that control fine-grained aspects of generation, facilitating further research on the mechanistic interpretability of diffusion models.
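The intervention described above, isolating a bias feature and adjusting it, can be sketched as a linear edit on an intermediate activation during sampling. This is an illustrative assumption rather than the paper's exact procedure: the function `steer_activation`, the unit direction `v`, and the `strength` parameter are all hypothetical stand-ins for the discovered bias features.

```python
import numpy as np

def steer_activation(h, bias_direction, strength):
    """Shift an internal activation along a (hypothetical) bias direction.

    h              : activation vector from an intermediate diffusion layer
    bias_direction : vector assumed to encode a 'bias feature'
    strength       : signed target magnitude; the feature's component in h
                     is rescaled to this value, leaving the rest of h intact
    """
    v = bias_direction / np.linalg.norm(bias_direction)  # unit direction
    coeff = h @ v                       # current expression of the feature
    return h + (strength - coeff) * v   # set the feature component to `strength`

# Toy usage: a 4-d activation and a made-up bias direction.
h = np.array([1.0, 2.0, 0.0, -1.0])
v = np.array([1.0, 0.0, 0.0, 0.0])
h_neutral = steer_activation(h, v, 0.0)  # feature component removed
```

Setting `strength` to intermediate values would, under this assumption, interpolate the attribute smoothly, which is the kind of fine-grained control reported in the results below.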

Results

Attribute Debiasing

DiffLens balances generations across the gender attribute.

Gender debiasing results

Accurate Identification of Bias Feature

DiffLens preserves overall image semantics, such as smiles and eyeglasses, while other methods frequently introduce distortions or lose important details.

Bias feature identification results

Individual Image Transformations Along the Gender Axis

DiffLens achieves smooth and consistent transitions that preserve semantic features like facial expressions, while other methods show distortions and loss of details at higher imbalance ratios.

Gender-axis interpolation results

Fine-grained Control and Editing

DiffLens can control bias level with finer granularity and across a broader range.

Gender

Fine-grained gender editing results (levels 1-3)

Age

Fine-grained age editing results (levels 1-3)

Race

Fine-grained race editing results (levels 1-3)

BibTeX

@inproceedings{shi2025dissecting,
  author    = {Shi, Yingdong and Li, Changming and Wang, Yifan and Zhao, Yongxiang and Pang, Anqi and Yang, Sibei and Yu, Jingyi and Ren, Kan},
  title     = {Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025},
}