A Hands-On Guide to Building a ResNet (Residual Network)

[Overview] ResNet rose to prominence in 2015 and shaped the direction of deep learning in both academia and industry in 2016. It keeps a reference to each layer's input and learns a residual function on top of it. Residual connections were designed to tackle the degradation problem in very deep networks, and they also alleviate vanishing gradients, improving overall performance. This article explains the trick behind residual networks and walks you through applying it step by step.

Compiled by | 专知

Contributors | Yingying, Xiaowen


In recent years, thanks to the availability of large datasets and powerful GPUs, it has become possible to train very deep architectures, and image recognition has advanced considerably. Simonyan et al., the authors of VGG, showed that simply stacking more layers can improve accuracy, and Yoshua Bengio gave a compelling theoretical analysis of the effectiveness of deep architectures in his monograph "Learning Deep Architectures for AI". So can we build ever more accurate systems simply by stacking more and more Conv-BatchNorm-ReLU blocks? Up to a point accuracy does improve, but beyond roughly 25 layers it starts to drop. Kaiming He et al. first tackled this depth problem in 2015, and their work has since made it possible to train networks more than a thousand layers deep with ever-increasing accuracy.

This post explains their trick and how to apply it.


First, accuracy drops as layers are added because of vanishing gradients: the deeper a layer sits, the smaller its gradient becomes, and performance suffers. This is not caused by overfitting, so dropout does not help here.


The solution proposed by Kaiming He and his colleagues at Microsoft Research Asia was to introduce residual connections: the output of an earlier layer is added to the output of a later layer.


Suppose you have a seven-layer network. In a residual network, you not only pass the output of layer 1 to layer 2 as its input, you also add the output of layer 1 to the output of layer 2.

Writing each layer as f(x):

In a standard network, y = f(x);

but in a residual network, y = f(x) + x.

With this approach, the authors won ImageNet (ILSVRC) 2015. The idea has since spread to every other area of deep learning, including natural language processing.


Now that the principle is clear, let's put it into practice!

A standard two-layer module looks like this:

def Unit(x, filters):
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = BatchNormalization()(out)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    return out


To recap: in this module we take the input x and apply batchnorm → relu → conv2d in sequence, twice.
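As a quick sanity check, here is a minimal sketch (the 32×32×3 input shape and the initial 3×3 convolution are just illustrative assumptions) that applies this unit to a dummy input and prints the resulting shape:

from keras.layers import Input, Conv2D, BatchNormalization, Activation
from keras.models import Model

inp = Input(shape=(32, 32, 3))
# project the image to 32 channels so the unit's convolutions keep the channel count
x = Conv2D(filters=32, kernel_size=[3, 3], padding="same")(inp)
out = Unit(x, filters=32)
# with stride 1 and "same" padding the spatial size is unchanged: (None, 32, 32, 32)
print(Model(inputs=inp, outputs=out).output_shape)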

Here is a ResNet module:

def Unit(x, filters):
    res = x
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = BatchNormalization()(out)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = keras.layers.add([res, out])

    return out


This looks very similar, but there is one key difference: we first store a reference to the original input in "res", and after passing through the batchnorm-relu-conv layers we add the output back to that residual, which is this line of code:

out = keras.layers.add([res,out])


This part corresponds to the equation y = f(x) + x.

So we can build a ResNet by stacking many of these modules.

Before doing that, we need to modify the code slightly to add pooling.

def Unit(x, filters, pool=False):
    res = x
    if pool:
        x = MaxPooling2D(pool_size=(2, 2))(x)
        res = Conv2D(filters=filters, kernel_size=[1, 1], strides=(2, 2), padding="same")(res)
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = BatchNormalization()(out)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = keras.layers.add([res, out])

    return out


Note that when pooling is applied, the output dimensions no longer match the dimensions of the residual. So we not only pool the input: the residual is also projected to the same dimensions as the output, using a 1×1 convolution with stride 2.
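A quick way to convince yourself that the two branches still line up is to check the output shape of a pooled unit. A minimal sketch, reusing the Unit above (the 32×32×64 input is just an illustrative assumption):

import keras
from keras.layers import Input, Conv2D, MaxPooling2D, BatchNormalization, Activation
from keras.models import Model

inp = Input(shape=(32, 32, 64))
# main path: MaxPooling halves the spatial size to 16x16;
# shortcut: the 1x1, stride-2 convolution also yields 16x16 and projects to 128 channels,
# so keras.layers.add can sum the two branches
out = Unit(inp, filters=128, pool=True)
print(Model(inputs=inp, outputs=out).output_shape)   # (None, 16, 16, 128)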


Below is the complete ResNet model:

def Unit(x, filters, pool=False):
    res = x
    if pool:
        x = MaxPooling2D(pool_size=(2, 2))(x)
        res = Conv2D(filters=filters, kernel_size=[1, 1], strides=(2, 2), padding="same")(res)
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = BatchNormalization()(out)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = keras.layers.add([res, out])

    return out


def MiniModel(input_shape):
    images = Input(input_shape)
    net = Conv2D(filters=32, kernel_size=[3, 3], strides=[1, 1], padding="same")(images)
    net = Unit(net, 32)
    net = Unit(net, 32)
    net = Unit(net, 32)

    net = Unit(net, 64, pool=True)
    net = Unit(net, 64)
    net = Unit(net, 64)

    net = Unit(net, 128, pool=True)
    net = Unit(net, 128)
    net = Unit(net, 128)

    net = Unit(net, 256, pool=True)
    net = Unit(net, 256)
    net = Unit(net, 256)

    net = BatchNormalization()(net)
    net = Activation("relu")(net)
    net = Dropout(0.25)(net)

    net = AveragePooling2D(pool_size=(4, 4))(net)
    net = Flatten()(net)
    net = Dense(units=10, activation="softmax")(net)

    model = Model(inputs=images, outputs=net)

    return model


The code above does not include the training part; here is the training code, with the number of epochs set to 50.

# import needed classes
import keras
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, AveragePooling2D, Dropout, BatchNormalization, Activation
from keras.models import Model, Input
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler
from keras.callbacks import ModelCheckpoint
from math import ceil
import os
from keras.preprocessing.image import ImageDataGenerator


def Unit(x, filters, pool=False):
    res = x
    if pool:
        x = MaxPooling2D(pool_size=(2, 2))(x)
        res = Conv2D(filters=filters, kernel_size=[1, 1], strides=(2, 2), padding="same")(res)
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = BatchNormalization()(out)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)

    out = keras.layers.add([res, out])

    return out


# Define the model


def MiniModel(input_shape):
    images = Input(input_shape)
    net = Conv2D(filters=32, kernel_size=[3, 3], strides=[1, 1], padding="same")(images)
    net = Unit(net, 32)
    net = Unit(net, 32)
    net = Unit(net, 32)

    net = Unit(net, 64, pool=True)
    net = Unit(net, 64)
    net = Unit(net, 64)

    net = Unit(net, 128, pool=True)
    net = Unit(net, 128)
    net = Unit(net, 128)

    net = Unit(net, 256, pool=True)
    net = Unit(net, 256)
    net = Unit(net, 256)

    net = BatchNormalization()(net)
    net = Activation("relu")(net)
    net = Dropout(0.25)(net)

    net = AveragePooling2D(pool_size=(4, 4))(net)
    net = Flatten()(net)
    net = Dense(units=10, activation="softmax")(net)

    model = Model(inputs=images, outputs=net)

    return model


# load the cifar10 dataset
(train_x, train_y), (test_x, test_y) = cifar10.load_data()

# normalize the data
train_x = train_x.astype('float32') / 255
test_x = test_x.astype('float32') / 255

# Center the data (note: each set is centered with its own scalar mean)
train_x = train_x - train_x.mean()
test_x = test_x - test_x.mean()

# Divide by the per-pixel standard deviation
train_x = train_x / train_x.std(axis=0)
test_x = test_x / test_x.std(axis=0)

datagen = ImageDataGenerator(rotation_range=10,
                            width_shift_range=5. / 32,
                            height_shift_range=5. / 32,
                            horizontal_flip=True)

# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(train_x)

# Encode the labels to vectors
train_y = keras.utils.to_categorical(train_y, 10)
test_y = keras.utils.to_categorical(test_y, 10)

# Build the model


input_shape = (32, 32, 3)
model = MiniModel(input_shape)

# Print a Summary of the model

model.summary()
# Specify the training components
model.compile(optimizer=Adam(0.001), loss="categorical_crossentropy", metrics=["accuracy"])

epochs = 50
steps_per_epoch = ceil(50000 / 128)

# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(train_x, train_y, batch_size=128),
                   validation_data=[test_x, test_y],
                   epochs=epochs, steps_per_epoch=steps_per_epoch, verbose=1, workers=4)

# Evaluate the accuracy of the test dataset
accuracy = model.evaluate(x=test_x, y=test_y, batch_size=128)
model.save("cifar10model.h5")

