Denoising Autoencoders



Another way to force an autoencoder to learn useful features is to add noise to its inputs and train it to recover the original, noise-free inputs. This idea has been around since the 1980s (it is mentioned, for example, in Yann LeCun's 1987 master's thesis). In a 2008 paper, Pascal Vincent et al. showed that autoencoders could also be used for feature extraction, and in a 2010 paper Vincent et al. introduced stacked denoising autoencoders.

The noise can be pure Gaussian noise added to the inputs, or it can be inputs that are randomly switched off, just as in dropout.

The implementation is straightforward: place a Dropout layer right after the encoder's input (or use a GaussianNoise layer instead; a variant is sketched after the training run below). Both the Dropout and GaussianNoise layers are active only during training.

from tensorflow import keras

# Load Fashion MNIST and scale pixel values to the [0, 1] range
fashion_mnist = keras.datasets.fashion_mnist
(X_train_all, y_train_all), (X_test, y_test) = fashion_mnist.load_data()
X_valid, X_train = X_train_all[:5000] / 255., X_train_all[5000:] / 255.
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

# Encoder: the Dropout layer randomly switches off 50% of the input pixels
# during training, so the network must learn to reconstruct the full image
dropout_encoder = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(100, activation='gelu'),
    keras.layers.Dense(30, activation='gelu')
])

# Decoder: maps the 30-dimensional codings back to 28x28 images
dropout_decoder = keras.models.Sequential([
    keras.layers.Dense(100, activation='gelu', input_shape=[30]),
    keras.layers.Dense(28 * 28, activation='sigmoid'),
    keras.layers.Reshape([28, 28])
])

dropout_ae = keras.models.Sequential([dropout_encoder, dropout_decoder])
dropout_ae.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam())
history = dropout_ae.fit(X_train, X_train, validation_data=(X_valid, X_valid),
                         batch_size=32, epochs=10)
Epoch 1/10
1719/1719 [==============================] - 7s 3ms/step - loss: 0.3260 - val_loss: 0.2988
Epoch 2/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3027 - val_loss: 0.2921
Epoch 3/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2982 - val_loss: 0.2891
Epoch 4/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2957 - val_loss: 0.2867
Epoch 5/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2940 - val_loss: 0.2854
Epoch 6/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2928 - val_loss: 0.2842
Epoch 7/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2918 - val_loss: 0.2831
Epoch 8/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2911 - val_loss: 0.2825
Epoch 9/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2905 - val_loss: 0.2818
Epoch 10/10
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2899 - val_loss: 0.2815
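
If you prefer additive Gaussian noise over dropped-out inputs, you can simply swap the Dropout layer for a GaussianNoise layer. Below is a minimal sketch of such an encoder; the names gaussian_encoder and gaussian_ae, and the 0.2 standard deviation, are arbitrary choices for illustration, not from the code above:

# Alternative encoder: additive Gaussian noise instead of dropped-out inputs.
# Like Dropout, GaussianNoise is only active during training.
gaussian_encoder = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.GaussianNoise(0.2),
    keras.layers.Dense(100, activation='gelu'),
    keras.layers.Dense(30, activation='gelu')
])
# Here the decoder from above is reused for brevity; in practice you would
# usually build a fresh decoder with the same architecture.
gaussian_ae = keras.models.Sequential([gaussian_encoder, dropout_decoder])
gaussian_ae.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam())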

Visualizing the Reconstructions

Denoising autoencoders are useful not only for data visualization or unsupervised learning: they can also be used very simply and efficiently to remove noise from images.

import matplotlib.pyplot as plt


def plot_image(image):
    """Display a single grayscale image without axes."""
    plt.imshow(image, cmap='binary')
    plt.axis('off')


def show_reconstructions(model, n_images=5):
    """Plot original validation images (top row) and their reconstructions (bottom row)."""
    reconstructions = model.predict(X_valid[:n_images])
    plt.figure(figsize=(n_images * 1.5, 3))
    for image_index in range(n_images):
        plt.subplot(2, n_images, 1 + image_index)
        plot_image(X_valid[image_index])
        plt.subplot(2, n_images, 1 + n_images + image_index)
        plot_image(reconstructions[image_index])


show_reconstructions(dropout_ae)

[Figure: five validation images (top row) and their reconstructions by the denoising autoencoder (bottom row)]
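
To see the denoising behavior itself, you can corrupt a few validation images by hand and run the noisy versions through the trained model (the Dropout layer inside the model is inactive at inference time). The snippet below is a minimal sketch along those lines; the GaussianNoise standard deviation of 0.5 is an arbitrary choice:

# Corrupt five validation images with Gaussian noise, then reconstruct them.
# training=True forces the GaussianNoise layer to actually inject noise here.
noise = keras.layers.GaussianNoise(0.5)
noisy_images = noise(X_valid[:5], training=True)
denoised_images = dropout_ae.predict(noisy_images)

plt.figure(figsize=(5 * 1.5, 3))
for i in range(5):
    plt.subplot(2, 5, 1 + i)
    plot_image(noisy_images[i])        # noisy input (top row)
    plt.subplot(2, 5, 1 + 5 + i)
    plot_image(denoised_images[i])     # reconstruction (bottom row)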
