Borrowing Weights from a Pretrained Network
To borrow the weights of an already trained model, we need to do two things:
- Rename our layer to match the name of the original model's layer. Weights are assigned by layer name, so by reusing the original network's layer name, we get its weights.
For example, say the original model had a layer named ip1; then we should also name our layer ip1:
layer { name: "ip1" type: "InnerProduct" bottom: "pool2" top: "ip1" param { lr_mult: 1 } param { lr_mult: 2 } inner_product_param { num_output: 500 weight_filler { type: "xavier" } bias_filler { type: "constant" } } }
- Train our new hybrid model declaring the location of the weights:
caffe train --solver ourSolver.prototxt --weights theirModel.caffemodel
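The same warm start works from pycaffe: passing the weights file to caffe.Net copies parameters into every layer whose name matches, exactly like the --weights flag above. A minimal sketch, assuming pycaffe is installed and that ourNet.prototxt and theirModel.caffemodel are the hypothetical files from the command above:

import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()

# Weights are copied by layer name; unmatched layers keep their filler init.
net = caffe.Net('ourNet.prototxt',        # our hybrid architecture
                'theirModel.caffemodel',  # pretrained weights to borrow
                caffe.TEST)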
What About the Other Layers of Our Network?
The other layers of our network will be initialized just like any brand-new layer, i.e., according to their declared fillers (e.g., xavier for weights, constant zero for biases).
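To check which layers actually borrowed weights, one approach is to snapshot the fresh initialization and see what net.copy_from changes, since only name-matched layers get overwritten. A sketch using the same hypothetical file names as before:

import numpy as np
import caffe

# Load the net without weights: every layer gets its filler initialization.
net = caffe.Net('ourNet.prototxt', caffe.TEST)
before = {name: net.params[name][0].data.copy() for name in net.params}

# copy_from overwrites only layers whose names match the pretrained model.
net.copy_from('theirModel.caffemodel')

for name in net.params:
    borrowed = not np.array_equal(before[name], net.params[name][0].data)
    print(name, 'borrowed from pretrained model' if borrowed else 'fresh initialization')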
Fine-Tuning
Fine-tuning is the process of training specific sections of a network to improve results. A common trick: set a layer's lr_mult to 0 in the prototxt, and that layer stops learning.
Making Layers Not Learn
To stop a layer from learning further, you can set its param attributes in your prototxt.
For example:
layer { name: "example" type: "example" ... param { lr_mult: 0 #learning rate of weights decay_mult: 1 } param { lr_mult: 0 #learning rate of bias decay_mult: 0 } }
References:
https://github.com/BVLC/caffe/wiki/Fine-Tuning-or-Training-Certain-Layers-Exclusively
https://github.com/BVLC/caffe/wiki/Borrowing-Weights-from-a-Pretrained-Network