The following is the complete guide "Paper Notes: Conditional Generative Adversarial Nets", covering a paper overview, the model architecture, the training procedure, and two worked examples.
Paper Overview
Conditional Generative Adversarial Nets (CGAN) is a generative adversarial network that generates samples conditioned on given auxiliary information. The key idea is to extend the GAN framework with conditioning information so that the generator produces samples that match a specified condition.
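Formally, following the original paper (Mirza and Osindero, 2014), the conditioning information y is fed into both the generator and the discriminator, and the standard GAN minimax objective becomes the conditional two-player game:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x \mid y)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z \mid y) \mid y)\bigr)\bigr]
```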
Model Architecture
The architecture of a CGAN is similar to that of a standard GAN, except that both the generator and the discriminator receive the conditioning information as an additional input. Concretely, the generator takes the condition together with a noise vector and outputs a generated sample, while the discriminator takes the condition together with a sample and outputs the probability that the sample is real.
Training Procedure
Training a CGAN is similar to training a GAN, except that the conditioning information must be supplied at every training step. At each step, the generator receives the condition and a noise vector and produces a sample, and the discriminator receives the condition together with either a real or a generated sample and predicts whether it is real. The two networks play an adversarial game: the generator tries to produce samples that both match the condition and fool the discriminator, while the discriminator tries to tell real samples apart from generated ones.
Example 1: Generating handwritten digits with CGAN
In this example we show how to use a CGAN to generate handwritten digits (MNIST). Follow these steps:
- Download the MNIST dataset.
```python
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
```
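As a quick optional sanity check (not part of the original post), the loaded arrays should have the familiar MNIST shapes, with integer class labels from 0 to 9:

```python
# Expected shapes for the standard MNIST split
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
print(y_train[:5])                   # e.g. [5 0 4 1 9]
```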
- Define the generator and the discriminator.
```python
import numpy as np                     # used later for training batches
import matplotlib.pyplot as plt        # used later for plotting samples

from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout, multiply
from tensorflow.keras.layers import BatchNormalization, Activation, Embedding, ZeroPadding2D
from tensorflow.keras.layers import UpSampling2D, Conv2DTranspose
from tensorflow.keras.layers import Conv2D, MaxPooling2D, LeakyReLU
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam

def build_generator():
    model = Sequential()
    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=100))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(Activation("relu"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(1, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    # Fuse the noise vector with a learned embedding of the class label
    noise = Input(shape=(100,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(10, 100)(label))
    model_input = multiply([noise, label_embedding])
    img = model(model_input)
    return Model([noise, label], img)

def build_discriminator():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=(28, 28, 1), padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    model.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    # Fuse the flattened image with a label embedding of the same size (784),
    # then reshape back to 28x28x1 so it matches the conv stack's input shape
    img = Input(shape=(28, 28, 1))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(10, 784)(label))
    flat_img = Flatten()(img)
    model_input = multiply([flat_img, label_embedding])
    model_input = Reshape((28, 28, 1))(model_input)
    validity = model(model_input)
    return Model([img, label], validity)
```
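A note on the conditioning scheme: the code above fuses the label with the noise (or image) by an elementwise multiply with a learned embedding. Another common choice, closer to the original paper, is to concatenate the label embedding instead. A minimal sketch of that variant (the function name and embedding size are illustrative, not from the original post):

```python
from tensorflow.keras.layers import Concatenate

def build_generator_concat(latent_dim=100, num_classes=10):
    # Concatenate a learned label embedding with the noise vector
    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(num_classes, 50)(label))
    x = Concatenate()([noise, label_embedding])   # shape: (latent_dim + 50,)
    x = Dense(128 * 7 * 7, activation="relu")(x)
    x = Reshape((7, 7, 128))(x)
    x = UpSampling2D()(x)
    x = Conv2D(128, kernel_size=3, padding="same", activation="relu")(x)
    x = UpSampling2D()(x)
    x = Conv2D(64, kernel_size=3, padding="same", activation="relu")(x)
    img = Conv2D(1, kernel_size=3, padding="same", activation="tanh")(x)
    return Model([noise, label], img)
```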
- Define the combined CGAN model.
```python
optimizer = Adam(0.0002, 0.5)

# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
                      optimizer=optimizer,
                      metrics=['accuracy'])

# Build the generator
generator = build_generator()

# The generator takes noise and the target label as input
# and generates the corresponding digit of that label
noise = Input(shape=(100,))
label = Input(shape=(1,))
img = generator([noise, label])

# For the combined model we will only train the generator
discriminator.trainable = False

# The discriminator takes the generated image and the target label as input
# and determines whether the generated image is real or fake
valid = discriminator([img, label])

# The combined model (stacked generator and discriminator)
# trains the generator to fool the discriminator
combined = Model([noise, label], valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
```
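Before training, it can be useful to confirm the wiring by pushing a single noise vector and label through the still-untrained models (a quick sanity check, not part of the original post):

```python
import numpy as np

test_noise = np.random.normal(0, 1, (1, 100))
test_label = np.array([[7]], dtype=np.int32)
test_img = generator.predict([test_noise, test_label])
print(test_img.shape)                                  # expected: (1, 28, 28, 1)
print(discriminator.predict([test_img, test_label]))   # a probability in [0, 1]
```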
- Train the CGAN model.
```python
epochs = 20000        # here "epochs" counts training iterations (one batch per step)
batch_size = 128
sample_interval = 1000

# Rescale images from [0, 255] to [-1, 1] to match the generator's tanh output
x_train = (x_train.astype(np.float32) - 127.5) / 127.5
x_train = np.expand_dims(x_train, axis=3)

# Adversarial ground truths
valid = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))

for epoch in range(epochs):

    # ---------------------
    #  Train Discriminator
    # ---------------------

    # Select a random batch of real images and their labels
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    imgs, labels = x_train[idx], y_train[idx]

    # Sample noise and labels as generator input
    noise = np.random.normal(0, 1, (batch_size, 100))
    sampled_labels = np.random.randint(0, 10, (batch_size, 1))

    # Generate a batch of new images
    gen_imgs = generator.predict([noise, sampled_labels])

    # Train the discriminator (real classified as ones, generated as zeros)
    d_loss_real = discriminator.train_on_batch([imgs, labels], valid)
    d_loss_fake = discriminator.train_on_batch([gen_imgs, sampled_labels], fake)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # ---------------------
    #  Train Generator
    # ---------------------

    # Sample noise and labels as generator input
    noise = np.random.normal(0, 1, (batch_size, 100))
    sampled_labels = np.random.randint(0, 10, (batch_size, 1))

    # Train the generator (wants the discriminator to label its images as real)
    g_loss = combined.train_on_batch([noise, sampled_labels], valid)

    # Report progress and save generated image samples
    if epoch % sample_interval == 0:
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

        # Generate one sample for each of the ten digit classes
        r, c = 2, 5
        noise = np.random.normal(0, 1, (r * c, 100))
        sampled_labels = np.arange(0, 10).reshape(-1, 1)
        gen_imgs = generator.predict([noise, sampled_labels])

        # Rescale images from [-1, 1] back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].set_title("Digit: %d" % sampled_labels[cnt, 0])
                axs[i, j].axis('off')
                cnt += 1
        plt.show()
```
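After training finishes, the generator can be saved so that digits of a chosen class can be sampled later without retraining (a standard Keras save/load sketch, not shown in the original post; the file name is illustrative):

```python
import numpy as np
from tensorflow.keras.models import load_model

# Persist the trained generator
generator.save("cgan_mnist_generator.h5")

# Later: generate ten images of the digit 3
g = load_model("cgan_mnist_generator.h5")
digits = g.predict([np.random.normal(0, 1, (10, 100)), np.full((10, 1), 3)])
```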
- Example output.
```
0 [D loss: 0.693147, acc.: 50.00%] [G loss: 0.693147]
1000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
2000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
3000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
4000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
5000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
6000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
7000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
8000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
9000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
10000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
11000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
12000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
13000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
14000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
15000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
16000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
17000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
18000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
19000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
```
Example 2: Generating faces with CGAN
In this example we show how to train a generative model on face images (CelebA). Note that, as written, the code below is a simplified fully connected GAN that operates on flattened 28x28 grayscale images and does not use label conditioning; the conditioning mechanism from Example 1 can be added in the same way (a sketch follows the model definitions). Follow these steps:
- Download the CelebA dataset.
```
!wget https://s3-us-west-1.amazonaws.com/udacity-dlnfd/datasets/celeba.zip
!unzip celeba.zip
```
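The training step below calls a `load_data()` helper that the original post never defines. A minimal sketch of what such a helper could look like, assuming the archive unzips to an `img_align_celeba` folder and that images are downscaled to 28x28 grayscale so they match the 784-dimensional models defined next (folder name, subset size, and preprocessing are all assumptions):

```python
import os
import numpy as np
from PIL import Image

def load_data(data_dir="img_align_celeba", max_images=20000, size=(28, 28)):
    # Read a subset of the unzipped CelebA images as 28x28 grayscale arrays
    images = []
    for name in sorted(os.listdir(data_dir))[:max_images]:
        img = Image.open(os.path.join(data_dir, name)).convert("L").resize(size)
        images.append(np.asarray(img, dtype=np.uint8))
    return np.array(images)
```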
- Define the generator and the discriminator.
```python
def build_generator():
    model = Sequential()
    model.add(Dense(256, input_dim=100))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(784, activation='tanh'))

    noise = Input(shape=(100,))
    img = model(noise)
    return Model(noise, img)

def build_discriminator():
    model = Sequential()
    model.add(Dense(512, input_dim=784))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))

    img = Input(shape=(784,))
    validity = model(img)
    return Model(img, validity)
```
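As noted above, these models are unconditional. To condition them on a label, the embedding-and-multiply scheme from Example 1 carries over directly. A sketch for the generator, conditioning on a binary attribute such as "smiling" (the attribute choice and the 2-class setup are illustrative assumptions; CelebA ships 40 binary attributes), reusing the layer imports from Example 1:

```python
def build_conditional_generator(num_classes=2, latent_dim=100):
    # Same fully connected stack, but the input fuses noise with a label embedding
    model = Sequential()
    model.add(Dense(256, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(784, activation='tanh'))

    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(num_classes, latent_dim)(label))
    model_input = multiply([noise, label_embedding])
    img = model(model_input)
    return Model([noise, label], img)
```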
- Define the combined model.
```python
optimizer = Adam(0.0002, 0.5)

# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
                      optimizer=optimizer,
                      metrics=['accuracy'])

# Build the generator
generator = build_generator()

# The generator takes noise as input and generates images
z = Input(shape=(100,))
img = generator(z)

# For the combined model we will only train the generator
discriminator.trainable = False

# The discriminator takes generated images as input and determines validity
valid = discriminator(img)

# The combined model (stacked generator and discriminator)
# trains the generator to fool the discriminator
combined = Model(z, valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
```
- Train the model.
```python
epochs = 20000
batch_size = 32
sample_interval = 1000

# Load the dataset (load_data() is assumed to return images as arrays;
# see the helper sketched in the download step)
X_train = load_data()

# Rescale -1 to 1 and flatten each image to a 784-dimensional vector
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
X_train = np.reshape(X_train, (len(X_train), 784))

# Adversarial ground truths
valid = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))

for epoch in range(epochs):

    # ---------------------
    #  Train Discriminator
    # ---------------------

    # Select a random batch of real images
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    imgs = X_train[idx]

    # Sample noise and generate a batch of new images
    noise = np.random.normal(0, 1, (batch_size, 100))
    gen_imgs = generator.predict(noise)

    # Train the discriminator (real classified as ones and generated as zeros)
    d_loss_real = discriminator.train_on_batch(imgs, valid)
    d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # ---------------------
    #  Train Generator
    # ---------------------

    # Sample generator input
    noise = np.random.normal(0, 1, (batch_size, 100))

    # Train the generator (wants the discriminator to mistake images as real)
    g_loss = combined.train_on_batch(noise, valid)

    # Report progress and save generated image samples
    if epoch % sample_interval == 0:
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, 100))
        gen_imgs = generator.predict(noise)

        # Rescale images from [-1, 1] back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                # Each generated sample is a flat 784-vector; reshape before plotting
                axs[i, j].imshow(gen_imgs[cnt].reshape(28, 28), cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        plt.show()
```
- Example output.
```
0 [D loss: 0.693147, acc.: 50.00%] [G loss: 0.693147]
1000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
2000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
3000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
4000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
5000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
6000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
7000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
8000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
9000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
10000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
11000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
12000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
13000 [D loss: 0.000000, acc.: 100.00%] [G loss: 0.000000]
14000
```