When fine-tuning, the parameters of the backbone network usually need to be frozen. This takes two steps.
First, locate the target layers and set their requires_grad attribute to False.
# Option 1: freeze an entire sub-module.
for param in net.backbone.parameters():
    param.requires_grad = False

# Option 2: freeze by matching a keyword against parameter names.
for pname, param in net.named_parameters():
    if key_word in pname:
        param.requires_grad = False
Here we use the parameters() or named_parameters() method; either one yields every learnable tensor of a layer, i.e. both its weight and its bias.
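As a quick check (a minimal sketch with a made-up two-layer model, not part of the original post), printing the entries returned by named_parameters() shows that each Linear layer contributes both a weight and a bias:

import torch.nn as nn

# Hypothetical model used only for demonstration; any nn.Module behaves the same way.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

for pname, param in net.named_parameters():
    # Prints e.g. "0.weight", "0.bias", "2.weight", "2.bias"
    print(pname, tuple(param.shape), param.requires_grad)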
Second, filter out the parameters that still need to be updated and pass only those to the optimizer.
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=learning_rate, momentum=mom)
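Putting both steps together, here is a minimal end-to-end sketch. The Net class, its backbone/head split, and the hyperparameter values are made up for illustration; learning_rate and mom in the line above correspond to the literal lr and momentum values below.

import torch
import torch.nn as nn

# Toy network with a "backbone" and a "head", mirroring the net.backbone usage above.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.backbone(x))

net = Net()

# Step 1: freeze every parameter in the backbone.
for param in net.backbone.parameters():
    param.requires_grad = False

# Step 2: hand only the still-trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in net.parameters() if p.requires_grad),
    lr=0.01, momentum=0.9,
)

# Only head.weight and head.bias should remain trainable.
print([n for n, p in net.named_parameters() if p.requires_grad])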