Table of Links

- Abstract and 1. Introduction
- Background and Related Work
- Threat Model
- Robust Style Mimicry
- Experimental Setup
- Results
  - 6.1 Main Findings: All Protections are Easily Circumvented
  - 6.2 Analysis
- Discussion and Broader Impact, Acknowledgements, and References
- A. Detailed Art Examples
- B. Robust Mimicry Generations
- C. Detailed Results
- D. Differences with Glaze Finetuning
- E. Findings on Glaze 2.0
- F. Findings on Mist v2
- G. Methods for Style Mimicry
- H. Existing Style Mimicry Protections
- I. Robust Mimicry Methods
- J. Experimental Setup
- K. User Study
- L. Compute Resources
F. Findings on Mist v2
After we responsibly disclosed our work to defense developers, the Mist authors brought to our attention the recent release of Mist v2, which claims improved resilience (Zheng et al., 2023). As with Glaze 2.0 (see Section E), we reproduced some of our experiments against this latest protection to verify that robust mimicry still succeeds. The original Mist v2 implementation still targets the outdated Stable Diffusion 1.5; we switched to SD 2.1 to match our previous experiments[6].
Our findings, as with Glaze 2.0, show that these improved protections remain ineffective against low-effort robust mimicry. Specifically, the latest version of Mist:

- introduces visible perturbations over the images (see Figure 23);
- does not improve protection against robust mimicry (see Figure 24);
- creates protections that are easily removed with Noisy Upscaling (see Figure 25).
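For intuition, the first stage of Noisy Upscaling can be sketched as adding Gaussian noise to the protected image to drown out the adversarial perturbation, before handing the result to an off-the-shelf upscaler. The sketch below covers only the noising step; the function name, the noise scale `sigma`, and the choice of upscaler are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def add_gaussian_noise(image: np.ndarray, sigma: float = 0.1,
                       seed: int = 0) -> np.ndarray:
    """First step of Noisy Upscaling (sketch): add Gaussian noise
    to a float image in [0, 1] so the small adversarial perturbation
    is swamped by random noise."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Keep pixel values in the valid range.
    return np.clip(noisy, 0.0, 1.0)

# Second step (not shown): pass the noisy image through a
# super-resolution / diffusion upscaler, which reconstructs clean
# image content and discards the noise together with the protection.
```

The key design idea is that the upscaler, trained on clean images, cannot reproduce either the injected noise or the protection's perturbation, so both are removed together.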
Authors:
(1) Robert Honig, ETH Zurich ([email protected]);
(2) Javier Rando, ETH Zurich ([email protected]);
(3) Nicholas Carlini, Google DeepMind;
(4) Florian Tramer, ETH Zurich ([email protected]).
[6] Both models share the same encoder, for which the protections are optimized.