Autoencoders In Keras
Written by Rowel Atienza   
Monday, 03 September 2018

Building autoencoders using Keras

Let us build an autoencoder using Keras. For simplicity, we use the MNIST dataset for the first set of examples. The autoencoder generates a latent vector from the input data and then recovers the input using the decoder. The latent vector in this first example is 16-dim. 

Let us implement the autoencoder by building the encoder first. Listing 3.2.1 shows the encoder that compresses an MNIST digit into a 16-dim latent vector. The encoder is basically a stack of two Conv2D layers, with a Dense layer of 16 units as the last stage to generate the latent vector. Figure 3 shows the architecture diagram generated by plot_model(), which matches the text version produced by encoder.summary(). The shape of the output of the last Conv2D layer is saved and used to compute the dimensions of the decoder input layer, so the MNIST image can be reconstructed without hand computation. 
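As a quick sanity check on that saved shape, assuming each Conv2D in the listing uses strides=2 with 'same' padding (an assumption spelled out here, matching the full listing below), each layer halves the spatial size via output = ceil(input / stride):

```python
import math

# Two stride-2, 'same'-padding convolutions: 28 -> 14 -> 7
size = 28
for stride in (2, 2):
    size = math.ceil(size / stride)

print(size)              # spatial size entering Flatten
print(size * size * 64)  # units the decoder's Dense layer must produce
```

This is why the saved shape is (7, 7, 64) and the decoder's Dense layer has 7 * 7 * 64 = 3136 units.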

Listing 3.2.1 An autoencoder implementation using Keras. The latent vector is 16-dim. 

from keras.layers import Dense, Input
from keras.layers import Conv2D, Flatten
from keras.layers import Reshape, Conv2DTranspose
from keras.models import Model
from keras.datasets import mnist
from keras.utils import plot_model
from keras import backend as K
import numpy as np

import matplotlib.pyplot as plt
# load MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()
# reshape to (28, 28, 1) and normalize input images
image_size = x_train.shape[1]
x_train = np.reshape(
                   x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
# network parameters
input_shape = (image_size, image_size, 1)
batch_size = 32
kernel_size = 3
latent_dim = 16
# encoder/decoder number of filters per CNN layer
layer_filters = [32, 64]
# build the autoencoder model
# first build the encoder model
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
# stack of Conv2D(32)-Conv2D(64)
for filters in layer_filters:
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               activation='relu',
               strides=2,
               padding='same')(x)
# shape info needed to build decoder
# model so we don't do hand computation
# the input to the decoder's first
# Conv2DTranspose will have this shape
# shape is (7, 7, 64), which is processed by
# the decoder back to (28, 28, 1)
shape = K.int_shape(x)
# generate latent vector
x = Flatten()(x)
latent = Dense(latent_dim, name='latent_vector')(x)
# instantiate encoder model
encoder = Model(inputs, latent, name='encoder')
plot_model(encoder, to_file='encoder.png', show_shapes=True)
# build the decoder model
latent_inputs = Input(shape=(latent_dim,), name='decoder_input')
# use the shape (7, 7, 64) that was earlier saved
x = Dense(shape[1] * shape[2] * shape[3])(latent_inputs)
# from vector to suitable shape for transposed conv
x = Reshape((shape[1], shape[2], shape[3]))(x)
# stack of Conv2DTranspose(64)-Conv2DTranspose(32)
for filters in layer_filters[::-1]:
    x = Conv2DTranspose(filters=filters,
                        kernel_size=kernel_size,
                        activation='relu',
                        strides=2,
                        padding='same')(x)
# reconstruct the input
outputs = Conv2DTranspose(filters=1,
                          kernel_size=kernel_size,
                          activation='sigmoid',
                          padding='same',
                          name='decoder_output')(x)
# instantiate decoder model
decoder = Model(latent_inputs, outputs, name='decoder')
plot_model(decoder, to_file='decoder.png', show_shapes=True)
# autoencoder = encoder + decoder
# instantiate autoencoder model
autoencoder = Model(
     inputs, decoder(encoder(inputs)), name='autoencoder')
plot_model(autoencoder, to_file='autoencoder.png', show_shapes=True)
# Mean Square Error (MSE) loss function, Adam optimizer
autoencoder.compile(loss='mse', optimizer='adam')
# train the autoencoder
autoencoder.fit(x_train,
                x_train,
                validation_data=(x_test, x_test),
                epochs=1,
                batch_size=batch_size)
# predict the autoencoder output from test data
x_decoded = autoencoder.predict(x_test)
# display the 1st 8 test input and decoded images
imgs = np.concatenate([x_test[:8], x_decoded[:8]])
imgs = imgs.reshape((4, 4, image_size, image_size))
imgs = np.vstack([np.hstack(i) for i in imgs])
plt.title('Input: 1st 2 rows, Decoded: last 2 rows')
plt.imshow(imgs, interpolation='none', cmap='gray')
plt.show()
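Beyond eyeballing the plot, reconstruction quality can be quantified with a per-image mean squared error, the same quantity the MSE loss minimizes. A minimal sketch with NumPy, where random arrays (x_test_demo, x_decoded_demo, hypothetical names) stand in for x_test and autoencoder.predict(x_test):

```python
import numpy as np

# Stand-ins for the test images and their reconstructions
rng = np.random.default_rng(0)
x_test_demo = rng.random((8, 28, 28, 1)).astype('float32')
x_decoded_demo = rng.random((8, 28, 28, 1)).astype('float32')

# Average squared error over height, width, and channel axes,
# leaving one score per image
mse_per_image = np.mean((x_test_demo - x_decoded_demo) ** 2,
                        axis=(1, 2, 3))
print(mse_per_image.shape)  # one MSE value per input image
```

On a trained autoencoder, these per-image scores make it easy to spot which digits reconstruct poorly, rather than relying on the 4x4 image grid alone.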

Last Updated ( Sunday, 19 May 2019 )