
Learn about Configs

Config Structure

Basic Structure

In the mmdetection project directory, all config files are stored in the configs folder.

├───.....
├───common
├───.....
└───_base_
    ├───datasets
    ├───models
    └───schedules

The most important part is configs/_base_, which stores the basic structure of most well-known networks. Most config files in the folders under configs other than common and _base_ are modifications built on top of _base_. _base_ contains the settings for datasets, models, schedules and default_runtime, from which a network can easily be assembled. _base_ is the most primitive configuration.
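
For example, the top-level Mask R-CNN config in mmdetection is essentially just a list of these four pieces (paths as they appear in the repository):

# configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',   # model structure
    '../_base_/datasets/coco_instance.py',     # dataset and data pipelines
    '../_base_/schedules/schedule_1x.py',      # optimizer and lr schedule
    '../_base_/default_runtime.py'             # logging, checkpointing, etc.
]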

For using config files, the recommended practices are:

  • Config files in the same folder should all inherit from only one primitive config file

  • We recommend at most three levels of inheritance

  • For ease of understanding, we recommend building your own model by inheriting from an existing config file

    For example, if you want to make some modifications to Faster R-CNN, first add the line _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py', then modify whatever you need further down in the config file (see the sketch after this list)

  • If you are building a model that does not yet exist in mmdetection and has no relation to any existing model, you can create a new folder under configs
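
As a minimal sketch of this workflow (the file path and class count below are hypothetical), a derived config only states what differs from its base:

# configs/my_dataset/faster_rcnn_r50_fpn_1x_my_dataset.py (hypothetical path)
_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'

# override only the fields that differ from the base config,
# e.g. retrain the box head for a 20-class dataset
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=20)))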

Config File Naming Convention

We recommend the following naming convention, which makes it easier to tell different config files apart:

{model}_[model setting]_{backbone}_{neck}_[norm setting]_[misc]_[gpu x batch_per_gpu]_{schedule}_{dataset}

Here, {xxx} denotes a required field and [yyy] an optional one (a concrete example follows the list below).

  • {model}: model type like faster_rcnn, mask_rcnn, etc.
  • [model setting]: specific setting for some model, like without_semantic for htc, moment for reppoints, etc.
  • {backbone}: backbone type, e.g. r50 (ResNet-50), x101 (ResNeXt-101).
  • {neck}: neck type, e.g. fpn, pafpn, nasfpn, c4.
  • [norm_setting]: bn (Batch Normalization) is used unless specified, other norm layer type could be gn (Group Normalization), syncbn (Synchronized Batch Normalization). gn-head/gn-neck indicates GN is applied in head/neck only, while gn-all means GN is applied in the entire model, e.g. backbone, neck, head.
  • [misc]: miscellaneous setting/plugins of model, e.g. dconv, gcb, attention, albu, mstrain.
  • [gpu x batch_per_gpu]: GPUs and samples per GPU, 8x2 is used by default.
  • {schedule}: training schedule, options are 1x, 2x, 20e, etc. 1x and 2x mean 12 epochs and 24 epochs respectively. 20e is adopted in cascade models and denotes 20 epochs. For 1x/2x, the initial learning rate decays by a factor of 10 at the 8th/16th and 11th/22nd epochs. For 20e, the initial learning rate decays by a factor of 10 at the 16th and 19th epochs.
  • {dataset}: dataset like coco, cityscapes, voc_0712, wider_face.
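
For example, mask_rcnn_r50_fpn_1x_coco.py reads as: model mask_rcnn, backbone r50, neck fpn, schedule 1x (12 epochs), dataset coco; the optional fields are simply omitted because the defaults apply (BN, no extra plugins, the 8x2 GPU setting).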

Training and Testing Config

The recommended way to write the training and testing config is as follows:

# recommended
model = dict(
    type=...,
    ...
    train_cfg=dict(...),
    test_cfg=dict(...),
)
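
Note that the complete Mask R-CNN example below still lists train_cfg and test_cfg at the top level of the file, which is the older layout; to my understanding, recent mmdetection 2.x releases still parse that form but warn that it is deprecated, so new configs should nest both inside model as shown above.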

Example

Take Mask R-CNN as an example.

model = dict(
    type='MaskRCNN',  # The name of detector
    backbone=dict(  # The config of backbone
        type='ResNet',  # The type of the backbone, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/resnet.py#L308 for more details.
        depth=50,  # The depth of backbone, usually it is 50 or 101 for ResNet and ResNeXt backbones.
        num_stages=4,  # Number of stages of the backbone.
        out_indices=(0, 1, 2, 3),  # The index of output feature maps produced in each stage
        frozen_stages=1,  # The weights in the first stage are frozen
        norm_cfg=dict(  # The config of normalization layers.
            type='BN',  # Type of norm layer, usually it is BN or GN
            requires_grad=True),  # Whether to train the gamma and beta in BN
        norm_eval=True,  # Whether to freeze the statistics in BN
        style='pytorch',  # The style of backbone, 'pytorch' means that stride 2 layers are in 3x3 convs, 'caffe' means stride 2 layers are in 1x1 convs.
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),  # The ImageNet pretrained backbone to be loaded
    neck=dict(
        type='FPN',  # The neck of detector is FPN. We also support 'NASFPN', 'PAFPN', etc. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/necks/fpn.py#L10 for more details.
        in_channels=[256, 512, 1024, 2048],  # The input channels, this is consistent with the output channels of backbone
        out_channels=256,  # The output channels of each level of the pyramid feature map
        num_outs=5),  # The number of output scales
    rpn_head=dict(
        type='RPNHead',  # The type of RPN head is 'RPNHead', we also support 'GARPNHead', etc. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/dense_heads/rpn_head.py#L12 for more details.
        in_channels=256,  # The input channels of each input feature map, this is consistent with the output channels of neck
        feat_channels=256,  # Feature channels of convolutional layers in the head.
        anchor_generator=dict(  # The config of anchor generator
            type='AnchorGenerator',  # Most methods use AnchorGenerator, SSD detectors use `SSDAnchorGenerator`. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/anchor/anchor_generator.py#L10 for more details
            scales=[8],  # Basic scale of the anchor, the area of the anchor in one position of a feature map will be scale * base_sizes
            ratios=[0.5, 1.0, 2.0],  # The ratio between height and width.
            strides=[4, 8, 16, 32, 64]),  # The strides of the anchor generator. This is consistent with the FPN feature strides. The strides will be taken as base_sizes if base_sizes is not set.
        bbox_coder=dict(  # Config of box coder to encode and decode the boxes during training and testing
            type='DeltaXYWHBBoxCoder',  # Type of box coder. 'DeltaXYWHBBoxCoder' is applied for most methods. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/coder/delta_xywh_bbox_coder.py#L9 for more details.
            target_means=[0.0, 0.0, 0.0, 0.0],  # The target means used to encode and decode boxes
            target_stds=[1.0, 1.0, 1.0, 1.0]),  # The standard variance used to encode and decode boxes
        loss_cls=dict(  # Config of loss function for the classification branch
            type='CrossEntropyLoss',  # Type of loss for classification branch, we also support FocalLoss etc.
            use_sigmoid=True,  # RPN usually performs two-class classification, so it usually uses the sigmoid function.
            loss_weight=1.0),  # Loss weight of the classification branch.
        loss_bbox=dict(  # Config of loss function for the regression branch.
            type='L1Loss',  # Type of loss, we also support many IoU Losses and smooth L1-loss, etc. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/losses/smooth_l1_loss.py#L56 for implementation.
            loss_weight=1.0)),  # Loss weight of the regression branch.
    roi_head=dict(  # RoIHead encapsulates the second stage of two-stage/cascade detectors.
        type='StandardRoIHead',  # Type of the RoI head. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/roi_heads/standard_roi_head.py#L10 for implementation.
        bbox_roi_extractor=dict(  # RoI feature extractor for bbox regression.
            type='SingleRoIExtractor',  # Type of the RoI feature extractor, most methods use SingleRoIExtractor. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/roi_heads/roi_extractors/single_level.py#L10 for details.
            roi_layer=dict(  # Config of RoI Layer
                type='RoIAlign',  # Type of RoI Layer, DeformRoIPoolingPack and ModulatedDeformRoIPoolingPack are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/ops/roi_align/roi_align.py#L79 for details.
                output_size=7,  # The output size of feature maps.
                sampling_ratio=0),  # Sampling ratio when extracting the RoI features. 0 means adaptive ratio.
            out_channels=256,  # Output channels of the extracted feature.
            featmap_strides=[4, 8, 16, 32]),  # Strides of multi-scale feature maps. It should be consistent with the architecture of the backbone.
        bbox_head=dict(  # Config of box head in the RoIHead.
            type='Shared2FCBBoxHead',  # Type of the bbox head. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L177 for implementation details.
            in_channels=256,  # Input channels for bbox head. This is consistent with the out_channels in roi_extractor
            fc_out_channels=1024,  # Output feature channels of FC layers.
            roi_feat_size=7,  # Size of RoI features
            num_classes=80,  # Number of classes for classification
            bbox_coder=dict(  # Box coder used in the second stage.
                type='DeltaXYWHBBoxCoder',  # Type of box coder. 'DeltaXYWHBBoxCoder' is applied for most methods.
                target_means=[0.0, 0.0, 0.0, 0.0],  # Means used to encode and decode box
                target_stds=[0.1, 0.1, 0.2, 0.2]),  # Standard variance for encoding and decoding. It is smaller since the boxes are more accurate. [0.1, 0.1, 0.2, 0.2] is a conventional setting.
            reg_class_agnostic=False,  # Whether the regression is class agnostic.
            loss_cls=dict(  # Config of loss function for the classification branch
                type='CrossEntropyLoss',  # Type of loss for classification branch, we also support FocalLoss etc.
                use_sigmoid=False,  # Whether to use sigmoid.
                loss_weight=1.0),  # Loss weight of the classification branch.
            loss_bbox=dict(  # Config of loss function for the regression branch.
                type='L1Loss',  # Type of loss, we also support many IoU Losses and smooth L1-loss, etc.
                loss_weight=1.0)),  # Loss weight of the regression branch.
        mask_roi_extractor=dict(  # RoI feature extractor for mask generation.
            type='SingleRoIExtractor',  # Type of the RoI feature extractor, most methods use SingleRoIExtractor.
            roi_layer=dict(  # Config of RoI Layer that extracts features for instance segmentation
                type='RoIAlign',  # Type of RoI Layer, DeformRoIPoolingPack and ModulatedDeformRoIPoolingPack are also supported
                output_size=14,  # The output size of feature maps.
                sampling_ratio=0),  # Sampling ratio when extracting the RoI features.
            out_channels=256,  # Output channels of the extracted feature.
            featmap_strides=[4, 8, 16, 32]),  # Strides of multi-scale feature maps.
        mask_head=dict(  # Mask prediction head
            type='FCNMaskHead',  # Type of mask head, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py#L21 for implementation details.
            num_convs=4,  # Number of convolutional layers in mask head.
            in_channels=256,  # Input channels, should be consistent with the output channels of mask roi extractor.
            conv_out_channels=256,  # Output channels of the convolutional layer.
            num_classes=80,  # Number of classes to be segmented.
            loss_mask=dict(  # Config of loss function for the mask branch.
                type='CrossEntropyLoss',  # Type of loss used for segmentation
                use_mask=True,  # Whether to only train the mask in the correct class.
                loss_weight=1.0))))  # Loss weight of mask branch.
train_cfg = dict(  # Config of training hyperparameters for rpn and rcnn
    rpn=dict(  # Training config of rpn
        assigner=dict(  # Config of assigner
            type='MaxIoUAssigner',  # Type of assigner, MaxIoUAssigner is used for many common detectors. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/assigners/max_iou_assigner.py#L10 for more details.
            pos_iou_thr=0.7,  # IoU >= threshold 0.7 will be taken as positive samples
            neg_iou_thr=0.3,  # IoU < threshold 0.3 will be taken as negative samples
            min_pos_iou=0.3,  # The minimal IoU threshold to take boxes as positive samples
            match_low_quality=True,  # Whether to match the boxes under low quality (see API doc for more details).
            ignore_iof_thr=-1),  # IoF threshold for ignoring bboxes
        sampler=dict(  # Config of positive/negative sampler
            type='RandomSampler',  # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/samplers/random_sampler.py#L8 for implementation details.
            num=256,  # Number of samples
            pos_fraction=0.5,  # The ratio of positive samples in the total samples.
            neg_pos_ub=-1,  # The upper bound of negative samples based on the number of positive samples.
            add_gt_as_proposals=False),  # Whether to add GT as proposals after sampling.
        allowed_border=-1,  # The border allowed after padding for valid anchors.
        pos_weight=-1,  # The weight of positive samples during training.
        debug=False),  # Whether to set the debug mode
    rpn_proposal=dict(  # The config to generate proposals during training
        nms_across_levels=False,  # Whether to do NMS for boxes across levels. Only works in `GARPNHead`, naive rpn does not support NMS across levels.
        nms_pre=2000,  # The number of boxes before NMS
        nms_post=1000,  # The number of boxes to be kept by NMS. Only works in `GARPNHead`.
        max_per_img=1000,  # The number of boxes to be kept after NMS.
        nms=dict(  # Config of NMS
            type='nms',  # Type of NMS
            iou_threshold=0.7),  # NMS threshold
        min_bbox_size=0),  # The allowed minimal box size
    rcnn=dict(  # The config for the roi heads.
        assigner=dict(  # Config of assigner for second stage, this is different from that in rpn
            type='MaxIoUAssigner',  # Type of assigner, MaxIoUAssigner is used for all roi_heads for now. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/assigners/max_iou_assigner.py#L10 for more details.
            pos_iou_thr=0.5,  # IoU >= threshold 0.5 will be taken as positive samples
            neg_iou_thr=0.5,  # IoU < threshold 0.5 will be taken as negative samples
            min_pos_iou=0.5,  # The minimal IoU threshold to take boxes as positive samples
            match_low_quality=False,  # Whether to match the boxes under low quality (see API doc for more details).
            ignore_iof_thr=-1),  # IoF threshold for ignoring bboxes
        sampler=dict(
            type='RandomSampler',  # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/bbox/samplers/random_sampler.py#L8 for implementation details.
            num=512,  # Number of samples
            pos_fraction=0.25,  # The ratio of positive samples in the total samples.
            neg_pos_ub=-1,  # The upper bound of negative samples based on the number of positive samples.
            add_gt_as_proposals=True),  # Whether to add GT as proposals after sampling.
        mask_size=28,  # Size of mask
        pos_weight=-1,  # The weight of positive samples during training.
        debug=False))  # Whether to set the debug mode
test_cfg = dict(  # Config of testing hyperparameters for rpn and rcnn
    rpn=dict(  # The config to generate proposals during testing
        nms_across_levels=False,  # Whether to do NMS for boxes across levels. Only works in `GARPNHead`, naive rpn does not support NMS across levels.
        nms_pre=1000,  # The number of boxes before NMS
        nms_post=1000,  # The number of boxes to be kept by NMS. Only works in `GARPNHead`.
        max_per_img=1000,  # The number of boxes to be kept after NMS.
        nms=dict(  # Config of NMS
            type='nms',  # Type of NMS
            iou_threshold=0.7),  # NMS threshold
        min_bbox_size=0),  # The allowed minimal box size
    rcnn=dict(  # The config for the roi heads.
        score_thr=0.05,  # Threshold to filter out boxes
        nms=dict(  # Config of NMS in the second stage
            type='nms',  # Type of NMS
            iou_thr=0.5),  # NMS threshold
        max_per_img=100,  # Max number of detections of each image
        mask_thr_binary=0.5))  # Threshold of mask prediction
dataset_type = 'CocoDataset'  # Dataset type, this will be used to define the dataset
data_root = 'data/coco/'  # Root path of data
img_norm_cfg = dict(  # Image normalization config to normalize the input images
    mean=[123.675, 116.28, 103.53],  # Mean values used when pre-training the backbone models
    std=[58.395, 57.12, 57.375],  # Standard variance used when pre-training the backbone models
    to_rgb=True)  # The channel order of images used when pre-training the backbone models
train_pipeline = [  # Training pipeline
    dict(type='LoadImageFromFile'),  # First pipeline to load images from file path
    dict(
        type='LoadAnnotations',  # Second pipeline to load annotations for current image
        with_bbox=True,  # Whether to use bounding box, True for detection
        with_mask=True,  # Whether to use instance mask, True for instance segmentation
        poly2mask=False),  # Whether to convert the polygon mask to instance mask, set False for acceleration and to save memory
    dict(
        type='Resize',  # Augmentation pipeline that resizes the images and their annotations
        img_scale=(1333, 800),  # The largest scale of image
        keep_ratio=True),  # Whether to keep the ratio between height and width.
    dict(
        type='RandomFlip',  # Augmentation pipeline that flips the images and their annotations
        flip_ratio=0.5),  # The ratio or probability to flip
    dict(
        type='Normalize',  # Augmentation pipeline that normalizes the input images
        mean=[123.675, 116.28, 103.53],  # These values are the same as in img_norm_cfg since the
        std=[58.395, 57.12, 57.375],  # keys of img_norm_cfg are used here as arguments
        to_rgb=True),
    dict(
        type='Pad',  # Padding config
        size_divisor=32),  # The number that the padded image size should be divisible by
    dict(type='DefaultFormatBundle'),  # Default format bundle to gather data in the pipeline
    dict(
        type='Collect',  # Pipeline that decides which keys in the data should be passed to the detector
        keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),  # First pipeline to load images from file path
    dict(
        type='MultiScaleFlipAug',  # An encapsulation that encapsulates the testing augmentations
        img_scale=(1333, 800),  # Decides the largest scale for testing, used for the Resize pipeline
        flip=False,  # Whether to flip images during testing
        transforms=[
            dict(type='Resize',  # Use resize augmentation
                 keep_ratio=True),  # Whether to keep the ratio between height and width, the img_scale set here will be suppressed by the img_scale set above.
            dict(type='RandomFlip'),  # Though RandomFlip is added in the pipeline, it is not used because flip=False
            dict(
                type='Normalize',  # Normalization config, the values are from img_norm_cfg
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(
                type='Pad',  # Padding config to pad images divisible by 32.
                size_divisor=32),
            dict(
                type='ImageToTensor',  # Convert image to tensor
                keys=['img']),
            dict(
                type='Collect',  # Collect pipeline that collects necessary keys for testing.
                keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,  # Batch size of a single GPU
    workers_per_gpu=2,  # Workers to pre-fetch data for each single GPU
    train=dict(  # Train dataset config
        type='CocoDataset',  # Type of dataset, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/datasets/coco.py#L19 for details.
        ann_file='data/coco/annotations/instances_train2017.json',  # Path of annotation file
        img_prefix='data/coco/train2017/',  # Prefix of image path
        pipeline=[  # Pipeline, this is passed by the train_pipeline created before.
            dict(type='LoadImageFromFile'),
            dict(
                type='LoadAnnotations',
                with_bbox=True,
                with_mask=True,
                poly2mask=False),
            dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='DefaultFormatBundle'),
            dict(
                type='Collect',
                keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks'])
        ]),
    val=dict(  # Validation dataset config
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/',
        pipeline=[  # Pipeline is passed by test_pipeline created before
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(  # Test dataset config, modify the ann_file for test-dev/test submission
        type='CocoDataset',
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/',
        pipeline=[  # Pipeline is passed by test_pipeline created before
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1333, 800),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        samples_per_gpu=2))  # Batch size of a single GPU used in testing
evaluation = dict(  # The config to build the evaluation hook, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/evaluation/eval_hooks.py#L7 for more details.
    interval=1,  # Evaluation interval
    metric=['bbox', 'segm'])  # Metrics used during evaluation
optimizer = dict(  # Config used to build optimizer, supports all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
    type='SGD',  # Type of optimizer, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/optimizer/default_constructor.py#L13 for more details
    lr=0.02,  # Learning rate of the optimizer, see detailed usage of the parameters in the PyTorch documentation
    momentum=0.9,  # Momentum
    weight_decay=0.0001)  # Weight decay of SGD
optimizer_config = dict(  # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.
    grad_clip=None)  # Most of the methods do not use gradient clip
lr_config = dict(  # Learning rate scheduler config used to register LrUpdater hook
    policy='step',  # The policy of the scheduler, also supports CosineAnnealing, Cyclic, etc. Refer to details of supported LrUpdaters at https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9.
    warmup='linear',  # The warmup policy, also supports `exp` and `constant`.
    warmup_iters=500,  # The number of iterations for warmup
    warmup_ratio=0.001,  # The ratio of the starting learning rate used for warmup
    step=[8, 11])  # Steps to decay the learning rate
runner = dict(
    type='EpochBasedRunner',  # Type of runner to use (i.e. IterBasedRunner or EpochBasedRunner)
    max_epochs=12)  # Runner that runs the workflow for max_epochs in total. For IterBasedRunner use `max_iters`
checkpoint_config = dict(  # Config to set the checkpoint hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
    interval=1)  # The save interval is 1
log_config = dict(  # Config to register the logger hook
    interval=50,  # Interval to print the log
    hooks=[
        # dict(type='TensorboardLoggerHook')  # The Tensorboard logger is also supported
        dict(type='TextLoggerHook')
    ])  # The logger used to record the training process.
dist_params = dict(backend='nccl')  # Parameters to set up distributed training, the port can also be set.
log_level = 'INFO'  # The level of logging.
load_from = None  # Load models as a pre-trained model from a given path. This will not resume training.
resume_from = None  # Resume checkpoints from a given path, the training will be resumed from the epoch when the checkpoint was saved.
workflow = [('train', 1)]  # Workflow for the runner. [('train', 1)] means there is only one workflow and the workflow named 'train' is executed once. The workflow trains the model for 12 epochs according to total_epochs.
work_dir = 'work_dir'  # Directory to save the model checkpoints and logs for the current experiments.
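
To inspect a fully merged config like the one above (with all _base_ inheritance resolved) for any file, you can dump it with mmcv's Config; mmdetection also ships a small script for this (tools/misc/print_config.py in recent releases, tools/print_config.py in older ones). A minimal sketch:

from mmcv import Config

cfg = Config.fromfile('configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py')
print(cfg.pretty_text)  # prints the fully merged config, similar to the listing above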

Usage of Config

config_a.py (base config)
a = 1
b = dict(b1=[0, 1, 2], b2=None)

Inheriting keys that do not exist in the base config

config_b.py
_base_ = './config_a.py'
c = (1, 2)
d = 'string'

output:

from mmcv import Config

cfg = Config.fromfile('./config_b.py')
print(cfg)
# output:
dict(a=1,
     b=dict(b1=[0, 1, 2], b2=None),
     c=(1, 2),
     d='string')

Inheriting keys that also exist in the base config

config_c.py
_base_ = './config_a.py'
b = dict(b2=1)
c = (1, 2)

output:

cfg = Config.fromfile('./config_c.py')
print(cfg)
# output
dict(a=1,
     b=dict(b1=[0, 1, 2], b2=1),
     c=(1, 2))

Ignoring keys from the base config

config_d.py
_base_ = './config_a.py'
b = dict(_delete_=True, b2=None, b3=0.1)
c = (1, 2)

output:

cfg = Config.fromfile('./config_d.py')
print(cfg)
# output
dict(a=1,
     b=dict(b2=None, b3=0.1),
     c=(1, 2))

Setting _delete_=True means the corresponding key-value pairs from _base_ are ignored: the dict b is replaced by the new definition instead of being merged with the one in the base config.
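
In real mmdetection configs this is how a whole component is replaced rather than merged with the base. A sketch in the spirit of the official example that swaps the ResNet backbone for HRNet (the HRNet-specific settings are elided and the checkpoint name is illustrative):

_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
    backbone=dict(
        _delete_=True,  # drop all ResNet-specific keys inherited from the base
        type='HRNet',
        extra=dict(...),  # HRNet stage settings go here
        init_cfg=dict(type='Pretrained', checkpoint='open-mmlab://msra/hrnetv2_w32')),
    neck=dict(...))  # the neck usually needs adjusting as well to match the new backbone outputs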

Inheriting from multiple base configs

config_e.py
c = (1, 2)
d = 'string'

config_f.py
_base_ = ['./config_a.py', './config_e.py']

output:

cfg = Config.fromfile('./config_f.py')
print(cfg)
# output
dict(a=1,
     b=dict(b1=[0, 1, 2], b2=None),
     c=(1, 2),
     d='string')

Referencing variables from the base config

base.py
item1 = 'a'
item2 = dict(item3='b')

config_g.py
_base_ = ['./base.py']
item = dict(a={{ _base_.item1 }}, b={{ _base_.item2.item3 }})

cfg = Config.fromfile('./config_g.py')
print(cfg.pretty_text)
# output
# item1 = 'a'
# item2 = dict(item3='b')
# item = dict(a='a', b='b')

Adding a deprecation warning to a config

deprecated_cfg.py

_base_ = 'expected_cfg.py'

_deprecation_ = dict(
    expected='expected_cfg.py',  # optional to show the expected config path in the warning information
    reference='url to related PR'  # optional to show a reference link in the warning information
)

output:

cfg = Config.fromfile('./deprecated_cfg.py')
# output
# UserWarning: The config file deprecated_cfg.py will be deprecated in the future. Please use expected_cfg.py instead. More information can be found at https://github.com/open-mmlab/mmcv/pull/1275