FCN: Semantic Segmentation

Semantic segmentation of an image is, in short, the classification of every pixel in the image.

1 Introduction to FCN

Unlike the classification and detection problems solved by traditional CNNs, semantic segmentation is a spatially dense prediction task at the pixel level: every pixel in the image must be classified. Because a CNN loses image detail during convolution and pooling, i.e. the feature map size gradually shrinks, it cannot clearly delineate the exact contours of objects or say which object each pixel belongs to, so on its own it cannot achieve accurate segmentation.

FCN is an end-to-end network trained for semantic segmentation. It is the basic framework for tackling semantic segmentation, and subsequent algorithms are essentially refinements of this framework.

1.1 Convolutionalization

In a typical classification task, the conv layers are usually followed by fully connected layers, which compress the two-dimensional image features into one dimension so the network can be trained to output a scalar that serves as the class label. This discards part of the spatial information and is therefore unsuitable for segmentation.

The output of semantic segmentation is a segmentation map, whose information is two-dimensional, so when building the network the fully connected layers are discarded in favor of convolutional layers; this is called convolutionalization.
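
As a minimal sketch of convolutionalization (layer names and shapes here are illustrative, not from the original post), VGG16's first fully connected layer can be rewritten as an equivalent 7×7 convolution:

import torch
from torch import nn

# fc6 in VGG16: Linear(512 * 7 * 7 -> 4096), applied to a flattened 7x7 feature map
fc6 = nn.Linear(512 * 7 * 7, 4096)

# The "convolutionalized" equivalent: a 7x7 convolution with 4096 output channels,
# whose weights are the fc weights reshaped to (out_channels, in_channels, kH, kW)
conv6 = nn.Conv2d(512, 4096, kernel_size=7)
conv6.weight.data.copy_(fc6.weight.data.view(4096, 512, 7, 7))
conv6.bias.data.copy_(fc6.bias.data)

# On a 7x7 input both layers compute the same values, but conv6 also accepts
# larger inputs, yielding a spatial map of outputs instead of a single vector
x = torch.randn(1, 512, 7, 7)
assert torch.allclose(fc6(x.flatten(1)), conv6(x).flatten(1), atol=1e-5)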

1.2 Upsampling

Upsampling is the opposite of downsampling: to obtain a segmentation map of the original image, the shrunken feature maps must be restored to their original size.

There are generally two ways to upsample:

  • Resize, i.e. image scaling (interpolation)
  • Deconvolution, also known as Transposed Convolution

The commonly used method is deconvolution; for comparison, a sketch of the resize option follows.
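
For reference, a minimal sketch of the resize option using bilinear interpolation in PyTorch (the shapes are illustrative):

import torch
import torch.nn.functional as F

feat = torch.randn(1, 21, 16, 16)  # a small per-class score map, e.g. 21 classes
up = F.interpolate(feat, scale_factor=8, mode='bilinear', align_corners=False)
print(up.shape)  # torch.Size([1, 21, 128, 128])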

Informally, deconvolution means running an ordinary convolution operation in reverse.

For example:

With a 2×2 matrix as input and kernel_size = 3, padding = 0, stride = 1, deconvolution produces a 4×4 matrix.

The deconvolution output-size formula is as follows:

$output = (input - 1) \times stride - 2 \times padding + kernel\_size + output\_padding$
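
A quick check of this formula with PyTorch's nn.ConvTranspose2d, reproducing the 2×2 → 4×4 example above (an illustrative sketch):

import torch
from torch import nn

deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 1, 2, 2)
# (input - 1) * stride - 2 * padding + kernel_size + output_padding
# = (2 - 1) * 1 - 2 * 0 + 3 + 0 = 4
print(deconv(x).shape)  # torch.Size([1, 1, 4, 4])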

The point of upsampling is to restore the small, high-dimensional feature maps to the size of the original image and then perform per-pixel prediction, obtaining the classification of each pixel.

To restore the image to its original size more precisely, FCN also adds a crop layer:

# definition of the crop layer in Caffe
layer {
  name: "score_pool4c"
  type: "Crop"
  bottom: "score_pool4"  # the blob to be cropped
  bottom: "upscore2"     # the reference blob indicating the crop size; the output blob has the same size
  top: "score_pool4c"    # the output blob
  crop_param {
    axis: 2
    offset: 5
  }
}

This amounts to cropping along the W and H dimensions of the image. Expressed in Python syntax:

score_pool4c = score_pool4[:, :, 5: 5 + crop_h, 5: 5 + crop_w]  # crop_h, crop_w come from the reference blob upscore2
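
The same crop as a runnable PyTorch sketch (the shapes here are made up for illustration):

import torch

score_pool4 = torch.randn(1, 21, 34, 34)  # the blob to be cropped
upscore2 = torch.randn(1, 21, 24, 24)     # the reference blob that dictates the size
crop_h, crop_w = upscore2.size()[2:]
score_pool4c = score_pool4[:, :, 5: 5 + crop_h, 5: 5 + crop_w]
print(score_pool4c.shape)  # torch.Size([1, 21, 24, 24])
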
1.3 Skip Architecture

If only the result of the last pooling layer is upsampled, the output is usually very coarse, so FCN upsamples the results of different pooling layers and sums them to increase accuracy.

Effect: FCN-32s < FCN-16s < FCN-8s, i.e. fusing feature maps from multiple layers helps improve segmentation accuracy.

2 Code Implementation

FCN model code:
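
The snippet below references get_upsampling_weight, a helper that builds bilinear-interpolation kernels for initializing the transposed convolutions, as well as two weight-file paths vgg16_path / vgg16_caffe_path; none of these are defined in the snippet itself. A common implementation of the helper, sketched here as an assumption so the code is self-contained:

import numpy as np
import torch

def get_upsampling_weight(in_channels, out_channels, kernel_size):
    # build a 2D bilinear interpolation kernel for a ConvTranspose2d layer
    factor = (kernel_size + 1) // 2
    if kernel_size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    filt = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
    weight = np.zeros((in_channels, out_channels, kernel_size, kernel_size), dtype=np.float64)
    weight[range(in_channels), range(out_channels), :, :] = filt
    return torch.from_numpy(weight).float()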

import torch
from torch import nn
from torchvision import models

# vgg16_path / vgg16_caffe_path: paths to pretrained VGG16 weight files, defined elsewhere in the repo

class FCN8s(nn.Module):
    def __init__(self, num_classes, pretrained=True, caffe=False):
        super(FCN8s, self).__init__()
        vgg = models.vgg16()
        if pretrained:
            if caffe:
                # load the pretrained vgg16 used by the paper's author
                vgg.load_state_dict(torch.load(vgg16_caffe_path))
            else:
                vgg.load_state_dict(torch.load(vgg16_path))
        features, classifier = list(vgg.features.children()), list(vgg.classifier.children())

        '''
        100 padding for 2 reasons:
        1) support very small input size
        2) allow cropping in order to match size of different layers' feature maps
        Note that the cropped part corresponds to a part of the 100 padding
        Spatial information of different layers' feature maps cannot be aligned exactly because of cropping, which is bad
        '''
        features[0].padding = (100, 100)

        for f in features:
            if 'MaxPool' in f.__class__.__name__:
                f.ceil_mode = True
            elif 'ReLU' in f.__class__.__name__:
                f.inplace = True

        # split VGG16 so that the outputs of pool3, pool4 and pool5 are exposed
        self.features3 = nn.Sequential(*features[: 17])
        self.features4 = nn.Sequential(*features[17: 24])
        self.features5 = nn.Sequential(*features[24:])

        # 1x1 convolutions that score pool3/pool4 features per class, zero-initialized
        self.score_pool3 = nn.Conv2d(256, num_classes, kernel_size=1)
        self.score_pool4 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score_pool3.weight.data.zero_()
        self.score_pool3.bias.data.zero_()
        self.score_pool4.weight.data.zero_()
        self.score_pool4.bias.data.zero_()

        # "convolutionalized" fc6/fc7: the fc weights are reshaped into conv kernels
        fc6 = nn.Conv2d(512, 4096, kernel_size=7)
        fc6.weight.data.copy_(classifier[0].weight.data.view(4096, 512, 7, 7))
        fc6.bias.data.copy_(classifier[0].bias.data)
        fc7 = nn.Conv2d(4096, 4096, kernel_size=1)
        fc7.weight.data.copy_(classifier[3].weight.data.view(4096, 4096, 1, 1))
        fc7.bias.data.copy_(classifier[3].bias.data)
        score_fr = nn.Conv2d(4096, num_classes, kernel_size=1)
        score_fr.weight.data.zero_()
        score_fr.bias.data.zero_()
        self.score_fr = nn.Sequential(
            fc6, nn.ReLU(inplace=True), nn.Dropout(), fc7, nn.ReLU(inplace=True), nn.Dropout(), score_fr
        )

        # transposed convolutions for 2x, 2x and 8x upsampling, initialized to bilinear interpolation
        self.upscore2 = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=2, bias=False)
        self.upscore_pool4 = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=2, bias=False)
        self.upscore8 = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=16, stride=8, bias=False)
        self.upscore2.weight.data.copy_(get_upsampling_weight(num_classes, num_classes, 4))
        self.upscore_pool4.weight.data.copy_(get_upsampling_weight(num_classes, num_classes, 4))
        self.upscore8.weight.data.copy_(get_upsampling_weight(num_classes, num_classes, 16))

    def forward(self, x):
        x_size = x.size()
        pool3 = self.features3(x)
        pool4 = self.features4(pool3)
        pool5 = self.features5(pool4)

        score_fr = self.score_fr(pool5)
        upscore2 = self.upscore2(score_fr)

        # crop pool4's scores to upscore2's size, fuse (skip connection), then upsample 2x
        score_pool4 = self.score_pool4(0.01 * pool4)
        upscore_pool4 = self.upscore_pool4(score_pool4[:, :, 5: (5 + upscore2.size()[2]), 5: (5 + upscore2.size()[3])]
                                           + upscore2)

        # crop pool3's scores, fuse, then upsample 8x back toward the input size
        score_pool3 = self.score_pool3(0.0001 * pool3)
        upscore8 = self.upscore8(score_pool3[:, :, 9: (9 + upscore_pool4.size()[2]), 9: (9 + upscore_pool4.size()[3])]
                                 + upscore_pool4)
        # final crop removes the extra border introduced by the 100 padding
        return upscore8[:, :, 31: (31 + x_size[2]), 31: (31 + x_size[3])].contiguous()
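
A quick smoke test (illustrative; pretrained=False avoids loading the local weight files):

model = FCN8s(num_classes=21, pretrained=False)  # e.g. the 21 PASCAL VOC classes
x = torch.randn(1, 3, 224, 224)                  # dummy RGB image batch
out = model(x)
print(out.shape)  # torch.Size([1, 21, 224, 224]): per-pixel class scores at input resolution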