If you are wondering where the data on this site comes from, please visit https://api.github.com/users/ppwwyyxx/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Yuxin Wu (ppwwyyxx), http://ppwwyyxx.com. Working on computer vision @facebookresearch, as well as @tensorpack.

megvii-research/neural-painter 533

Paint artistic patterns using random neural networks

ppwwyyxx/Adversarial-Face-Attack 347

Black-Box Adversarial Attack on Public Face Recognition Systems

ppwwyyxx/dash-docset-tensorflow 189

dash/zeal docset for tensorflow

curimit/SugarCpp 134

SugarCpp is a language that compiles to C++11.

ppwwyyxx/dotfiles 43

My dotfiles.

ppwwyyxx/dotvim 38

1200+ lines of vimrc

ppwwyyxx/FRN-on-common-ImageNet-baseline 31

Filter Response Normalization tested on better ImageNet baselines.

gkioxari/aims2020_visualrecognition 22

AIMS 2020, class on Visual Recognition

ppwwyyxx/dash-docset-matlab 20

Generate Dash Docset for Matlab

jxwuyi/HouseNavAgent 17

Navigation agent with Bayesian relational memory in the House3D environment

issue closed facebookresearch/detectron2

Please read & provide the following

If you do not know the root cause of the problem, please post according to this template:

Instructions To Reproduce the Issue:

Check https://stackoverflow.com/help/minimal-reproducible-example for how to ask good questions. Simplify the steps to reproduce the issue using suggestions from the above link, and provide them below:

  1. Full runnable code or full changes you made:
If making changes to the project itself, please use output of the following command:
git rev-parse HEAD; git diff

<put code or diff here>
  2. What exact command you run:
  3. Full logs or other relevant observations:
<put logs here>

Expected behavior:

Hi, I have tested detectron2 on many kinds of GPUs. However, the speed of PointRend on an Nvidia RTX A6000 is slower than on a Titan RTX, which is much cheaper and has fewer cores than the RTX A6000. I wonder why. Is this because detectron2 is optimized for only some GPUs and not others? If so, are you going to optimize it for the RTX A6000 soon?

Environment:

Paste the output of the following command:

name: detectron2
channels:
  - pytorch
  - anaconda
  - defaults
dependencies:
  - _py-xgboost-mutex=2.0=cpu_0
  - blas=1.0=mkl
  - bottleneck=1.3.2=py38h2a96729_1
  - ca-certificates=2021.7.5=haa95532_1
  - certifi=2021.5.30=py38haa95532_0
  - cudatoolkit=10.1.243=h74a9793_0
  - freetype=2.10.4=hd328e21_0
  - hdf5=1.10.4=h7ebc959_0
  - icc_rt=2019.0.0=h0cc432a_1
  - intel-openmp=2021.3.0=haa95532_3372
  - joblib=1.0.1=pyhd3eb1b0_0
  - jpeg=9b=hb83a4c4_2
  - libopencv=4.0.1=hbb9e17c_0
  - libpng=1.6.37=h2a8f88b_0
  - libtiff=4.2.0=hd0e1b90_0
  - libuv=1.40.0=he774522_0
  - libxgboost=1.3.3=hd77b12b_0
  - lz4-c=1.9.3=h2bbff1b_1
  - mkl=2021.3.0=haa95532_524
  - mkl-service=2.4.0=py38h2bbff1b_0
  - mkl_fft=1.3.0=py38h277e83a_2
  - mkl_random=1.2.2=py38hf11a4ad_0
  - ninja=1.10.2=h6d14046_1
  - numexpr=2.7.3=py38hb80d3ca_1
  - numpy=1.20.3=py38ha4e8547_0
  - numpy-base=1.20.3=py38hc2deb75_0
  - olefile=0.46=pyhd3eb1b0_0
  - opencv=4.0.1=py38h2a7c758_0
  - openssl=1.1.1l=h2bbff1b_0
  - pandas=1.3.2=py38h6214cd6_0
  - pip=21.2.2=py38haa95532_0
  - py-opencv=4.0.1=py38he44ac1e_0
  - py-xgboost=1.3.3=py38haa95532_0
  - python=3.8.0=hff0d562_2
  - python-dateutil=2.8.2=pyhd3eb1b0_0
  - pytorch=1.8.1=py3.8_cuda10.1_cudnn7_0
  - pytz=2021.1=pyhd3eb1b0_0
  - scikit-learn=0.24.2=py38hf11a4ad_1
  - scipy=1.7.1=py38hbe87c03_2
  - setuptools=52.0.0=py38haa95532_0
  - six=1.16.0=pyhd3eb1b0_0
  - sqlite=3.36.0=h2bbff1b_0
  - threadpoolctl=2.2.0=pyh0d69192_0
  - tk=8.6.10=he774522_0
  - torchvision=0.9.1=py38_cu101
  - typing_extensions=3.10.0.0=pyhca03da5_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.37.0=pyhd3eb1b0_1
  - wincertstore=0.2=py38_0
  - xgboost=1.3.3=py38haa95532_0
  - xz=5.2.5=h62dcd97_0
  - zlib=1.2.11=h62dcd97_4
  - zstd=1.4.9=h19a0ad4_0
  - pip:
    - absl-py==0.13.0
    - antlr4-python3-runtime==4.8
    - appdirs==1.4.4
    - black==21.4b2
    - cachetools==4.2.2
    - charset-normalizer==2.0.4
    - click==8.0.1
    - cloudpickle==2.0.0
    - colorama==0.4.4
    - cycler==0.10.0
    - cython==0.29.24
    - future==0.18.2
    - fvcore==0.1.5.post20210825
    - google-auth==1.35.0
    - google-auth-oauthlib==0.4.6
    - grpcio==1.40.0
    - hydra-core==1.1.1
    - idna==3.2
    - importlib-resources==5.2.2
    - iopath==0.1.9
    - kiwisolver==1.3.2
    - markdown==3.3.4
    - matplotlib==3.4.3
    - mypy-extensions==0.4.3
    - oauthlib==3.1.1
    - omegaconf==2.1.1
    - pathspec==0.9.0
    - pillow==8.3.2
    - portalocker==2.3.2
    - protobuf==3.17.3
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycocotools==2.0.2
    - pydot==1.4.2
    - pyparsing==2.4.7
    - pywin32==301
    - pyyaml==5.4.1
    - regex==2021.8.28
    - requests==2.26.0
    - requests-oauthlib==1.3.0
    - rsa==4.7.2
    - tabulate==0.8.9
    - tensorboard==2.6.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.0
    - termcolor==1.1.0
    - toml==0.10.2
    - tqdm==4.62.2
    - urllib3==1.26.6
    - werkzeug==2.0.1
    - yacs==0.1.8
    - zipp==3.5.0
prefix: C:\ProgramData\Anaconda3\envs\detectron2

If your issue looks like an installation issue / environment issue, please first check common issues in https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues

closed time in 14 hours

zw2253782

issue comment facebookresearch/detectron2

Please read & provide the following

Using CUDA 10.1 on an A6000 will not be fast. This is unrelated to detectron2, therefore closing.
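For context: the RTX A6000 is an Ampere GPU (compute capability 8.6), and CUDA 10.1 predates Ampere, so the cu101 PyTorch build listed in the environment above cannot target it natively. A quick sanity check, assuming a local PyTorch install (a sketch, not part of the original reply):

import torch

print(torch.version.cuda)                    # '10.1' in the environment above; Ampere needs an 11.x build
print(torch.cuda.get_device_capability(0))   # (8, 6) for an RTX A6000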

zw2253782

comment created time in 14 hours

issue comment facebookresearch/detectron2

building error : undefined reference to `google::protobuf::RepeatedPtrField<std::string>::begin() const'

The build should work in our Dockerfile. If you can provide a Dockerfile that reproduces the failure, then we can take a look.

torchscript_traced_mask_rcnn is different from the binary that fails to build. If torchscript_traced_mask_rcnn is all you need, then the failure can be ignored.

ahong007007

comment created time in 14 hours

issue closed facebookresearch/detectron2

cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST value impacts coco_evaluation metrics

By altering the cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST config option before running inference, we get drastically different evaluation metric values.

Instructions To Reproduce the Issue:

Take the colab notebook as an example. Replace cell 12, Train!, with the following:

from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
# get_cfg, model_zoo, and os come from earlier cells of the colab notebook

results = []
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("balloon_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025  # pick a good LR
cfg.SOLVER.MAX_ITER = 50    # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = []        # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128   # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only has one class (balloon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg) 
trainer.resume_or_load(resume=False)
trainer.train()
saved_cfg = cfg

results = []
for i in range(10):
  cfg = saved_cfg
  cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")  # path to the model we just trained
  cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = i/10   # set a custom testing threshold
  predictor = DefaultPredictor(cfg)

  evaluator = COCOEvaluator("balloon_val", output_dir="./output")
  val_loader = build_detection_test_loader(cfg, "balloon_val")
  results.append((i/10, inference_on_dataset(predictor.model, val_loader, evaluator)['bbox']['AP50']))

results

The output is as follows (value of results):

[(0.0, 79.30198781745244),
 (0.1, 79.30198781745244),
 (0.2, 79.02937929464987),
 (0.3, 78.11410410973554),
 (0.4, 77.51446668120396),
 (0.5, 73.18508795959687),
 (0.6, 61.27812781278128),
 (0.7, 10.891089108910892),
 (0.8, nan),
 (0.9, nan)]

Expected behavior:

I would expect the AP50 to be the same in each element of this list, regardless of the change in the threshold. The AP metrics should be calculated over the entire precision/recall curve by sweeping this threshold. By setting the threshold to anything other than 0, the evaluator only has access to a smaller part of the precision/recall curve, and hence underestimates the performance of models. This means that the performance stated in the tutorial could be underestimated (as the default is 0.05), and I imagine the same holds for many other projects.

I propose that, at the least, this behaviour should be stated in the colab tutorial and README. Ideally, there would be a way to set the config value to 0 when it is passed to the COCO evaluator. Currently I do this manually in all my projects.

Note this issue was mentioned in a previous issue here, but I think it was not fully realised or acted upon.

Environment:

Detectron2 basic configuration. See colab tutorial example above for reproduction.

closed time in 14 hours

jonnyevans3210

issue comment facebookresearch/detectron2

cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST value impacts coco_evaluation metrics

I propose that at the least, this behaviour should be stated in the colab tutorial and README

This is expected behavior and is already stated in the config documentation; e.g., search for "ROI_HEADS.SCORE_THRESH_TEST" in https://detectron2.readthedocs.io/en/latest/modules/config.html#yaml-config-references
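For anyone wanting the workaround discussed above, a sketch (editorial, not from the thread; 0.05 is detectron2's default test-time threshold, and cfg/DefaultPredictor refer to the notebook code earlier in this issue):

eval_cfg = cfg.clone()
eval_cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05  # keep a low threshold so AP sees (almost) the full PR curve
predictor = DefaultPredictor(eval_cfg)
# reserve higher thresholds for visualization, not for COCO evaluation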

jonnyevans3210

comment created time in 14 hours

issue closed facebookresearch/detectron2

Problem to follow installation instructions

📚 Documentation Issue

If I follow the instructions from the site to install, in my case (PyTorch 1.9, CUDA 11.1):

pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html

it gives me:

Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
ERROR: Could not find a version that satisfies the requirement detectron2 (from versions: none)
ERROR: No matching distribution found for detectron2

Using Windows 10 and Python 3.9 in a venv.

closed time in a day

Zawarud

issue comment facebookresearch/detectron2

Problem to follow installation instructions

As the instructions say, this installation method is Linux-only: https://detectron2.readthedocs.io/en/latest/tutorials/install.html#install-pre-built-detectron2-linux-only
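For completeness, the same install page also documents building from source, which is the usual route on other setups (assuming a matching PyTorch/CUDA toolchain and a working C++ compiler; whether it works on a given Windows environment is not guaranteed, as noted elsewhere in this feed):

python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'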

Zawarud

comment created time in a day

issue closed tensorpack/tensorpack

Activation function for last layer in DoReFaNet

Hi,

I was using DoReFa-Net to create quantized versions of networks, and so far it has been working fine. One thing I noticed is that we do not quantize the first layer (the weights of the first layer) and the last layer, which is also mentioned in the DoReFa-Net paper. I tested this, and performance does improve when I do not quantize the first and last layers. But I also noticed that the last layer always has a nonlinear activation function applied to it, whereas the original model (such as AlexNet) uses ReLU as the activation function.

Is there any specific reason for doing so? I did not find anything related to it in the DoReFaNet paper.

[image attached in the original issue]

closed time in 3 days

Abhishek2271

issue comment tensorpack/tensorpack

Activation function for last layer in DoReFaNet

There is no specific reason. In fact, the nonlinearity is pretty much just a ReLU.
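For readers curious what that nonlinearity looks like, a paraphrased sketch (function and argument names are illustrative assumptions, not the exact tensorpack code) of a DoReFa-style activation, which clips to [0, 1] so it can be uniformly quantized; on that range it coincides with a ReLU:

import tensorflow as tf

def nonlin(x, bit_a=2):
    if bit_a == 32:
        return tf.nn.relu(x)               # full precision: plain ReLU
    return tf.clip_by_value(x, 0.0, 1.0)   # low-bit: clipped ReLU, ready for quantization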

Abhishek2271

comment created time in 3 days

issue comment facebookresearch/fvcore

"flop_count" for Detectron2 DeformConv

Does the example in https://detectron2.readthedocs.io/en/latest/modules/fvcore.html#fvcore.nn.ActivationCountAnalysis.set_op_handle help? It means that the arguments of the handler (not of the function set_op_handle) have type List[torch._C.Value]. The name is just a string and should be the operator name (something like torchvision::nms).

However, a problem is that the deform conv in detectron2 is not registered as a torch operator, so it cannot be counted. That's a different issue that belongs to detectron2.
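A minimal sketch of registering a custom handle with fvcore (the tiny Linear model and the returned count of 0 are placeholders; the point is the handler signature and the operator-name string):

import torch
from torch import nn
from fvcore.nn import FlopCountAnalysis

model = nn.Linear(8, 4)                  # placeholder model
example_inputs = (torch.randn(1, 8),)

def nms_handle(inputs, outputs):
    # inputs/outputs are List[torch._C.Value] for one traced call of the op;
    # return the flop count you want to attribute to that call
    return 0

flops = FlopCountAnalysis(model, example_inputs)
flops.set_op_handle("torchvision::nms", nms_handle)  # the name is the operator string
print(flops.total())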

bowenc0221

comment created time in 3 days

issue comment facebookresearch/detectron2

[PointRend] Problem with traced on CPU model

It might work if you add @torch.jit.script_if_tracing to the function paste_masks_in_image (mask_ops.py, line 148). If this is not enough, more work will be needed to properly support tracing in the postprocess step.
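As an illustration of the decorator being suggested (postprocess_fn is a hypothetical stand-in; the actual target would be paste_masks_in_image in detectron2's mask_ops.py):

import torch

@torch.jit.script_if_tracing
def postprocess_fn(x: torch.Tensor, threshold: float) -> torch.Tensor:
    # under tracing, the decorated function is scripted instead of traced,
    # so data-dependent shapes and control flow are preserved
    return x[x > threshold]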

QNester

comment created time in 4 days

issue comment facebookresearch/detectron2

[PointRend] Problem with traced on CPU model

Postprocessing is not supported, which is why we use do_postprocess=False.
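For reference, a sketch of how inference without postprocessing is typically invoked (assumptions: model is a detectron2 GeneralizedRCNN-style model in eval mode, and batched_inputs is the usual list of {"image": ...} dicts):

import torch

with torch.no_grad():
    raw_outputs = model.inference(batched_inputs, do_postprocess=False)
# the raw Instances are in the resized-input coordinate space; rescaling them to
# the original image size (the unsupported postprocess step) has to be done separately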

QNester

comment created time in 4 days

issue comment facebookresearch/detectron2

Exporting a model to ONNX; onnx.optimizer

We can try...except and import onnxoptimizer when onnx.optimizer is not available. Would you like to send us a PR?
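A sketch of the suggested fallback (onnxoptimizer is the standalone package that the optimizer passes moved to after they were removed from onnx itself; how exactly it would be wired into the export script is an assumption):

try:
    from onnx import optimizer              # older onnx releases
except ImportError:
    import onnxoptimizer as optimizer       # newer releases ship it as a separate package

# both expose the same entry point, e.g.:
# optimized_model = optimizer.optimize(onnx_model, passes)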

mdebeer

comment created time in 4 days

issue comment facebookresearch/detectron2

IMPOSSIBLE TO FINETUNE KEYPOINTS

Expected behavior: NO WARNING ABOUT KEYPOINT LOSS

The warnings do not affect training and are benign.

WARNING [09/13 10:54:00 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

The annotations do not follow the correct format, but they were fixed automatically.

WARNING [09/13 10:54:00 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.

The config SOLVER.STEPS contains values that are out of range, so they are simply ignored and do not affect training.
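As an illustration (2000 matches MAX_ITER in this issue; 1500 is an arbitrary example value), the warning disappears when the decay steps stay below MAX_ITER, or when decay is disabled:

cfg.SOLVER.MAX_ITER = 2000
cfg.SOLVER.STEPS = (1500,)   # decay the learning rate once before training ends
# or disable decay entirely:
# cfg.SOLVER.STEPS = []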

eaedk

comment created time in 4 days

issue closed facebookresearch/detectron2

IMPOSSIBLE TO FINETUNE KEYPOINTS

Instructions To Reproduce the 🐛 Bug:

  1. Full runnable code or full changes you made:

from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
from detectron2 import model_zoo
import numpy as np
import os

cfg = get_cfg()
#cfg.MODEL.DEVICE = "cpu"


cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("mrz_train",)


cfg.DATASETS.TEST = ("mrz_validation",)  #Dataset 'hand_test' is empty in my case
#cfg.DATASETS.TEST = ()

cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")

cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025  # pick a good LR
cfg.SOLVER.MAX_ITER = 2000   # 300 iterations seems good enough for this toy dataset; you may need to train longer for a practical dataset
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128   #128   # faster, and good enough for this toy dataset (default: 512)

cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # hand
cfg.MODEL.RETINANET.NUM_CLASSES = 1
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 4
cfg.TEST.KEYPOINT_OKS_SIGMAS = np.ones((4, 1), dtype=float).tolist()


os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)    #CocoTrainer(cfg)
#trainer.resume_or_load(resume=False)
trainer.train()
  2. Full logs or other relevant observations:
[09/13 10:54:00 d2.engine.defaults]: Model:
GeneralizedRCNN(
  (backbone): FPN(
    (fpn_lateral2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral3): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral4): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (fpn_lateral5): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
    (fpn_output5): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (top_block): LastLevelMaxPool()
    (bottom_up): ResNet(
      (stem): BasicStem(
        (conv1): Conv2d(
          3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False
          (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        )
      )
      (res2): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv1): Conv2d(
            64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv2): Conv2d(
            64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=64, eps=1e-05)
          )
          (conv3): Conv2d(
            64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
        )
      )
      (res3): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv1): Conv2d(
            256, 128, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv2): Conv2d(
            128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=128, eps=1e-05)
          )
          (conv3): Conv2d(
            128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
        )
      )
      (res4): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
          (conv1): Conv2d(
            512, 256, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (3): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (4): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
        (5): BottleneckBlock(
          (conv1): Conv2d(
            1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv2): Conv2d(
            256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=256, eps=1e-05)
          )
          (conv3): Conv2d(
            256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
          )
        )
      )
      (res5): Sequential(
        (0): BottleneckBlock(
          (shortcut): Conv2d(
            1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
          (conv1): Conv2d(
            1024, 512, kernel_size=(1, 1), stride=(2, 2), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (1): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
        (2): BottleneckBlock(
          (conv1): Conv2d(
            2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv2): Conv2d(
            512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=512, eps=1e-05)
          )
          (conv3): Conv2d(
            512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False
            (norm): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
          )
        )
      )
    )
  )
  (proposal_generator): RPN(
    (rpn_head): StandardRPNHead(
      (conv): Conv2d(
        256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
        (activation): ReLU()
      )
      (objectness_logits): Conv2d(256, 3, kernel_size=(1, 1), stride=(1, 1))
      (anchor_deltas): Conv2d(256, 12, kernel_size=(1, 1), stride=(1, 1))
    )
    (anchor_generator): DefaultAnchorGenerator(
      (cell_anchors): BufferList()
    )
  )
  (roi_heads): StandardROIHeads(
    (box_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(7, 7), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(7, 7), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(7, 7), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(7, 7), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (box_head): FastRCNNConvFCHead(
      (flatten): Flatten(start_dim=1, end_dim=-1)
      (fc1): Linear(in_features=12544, out_features=1024, bias=True)
      (fc_relu1): ReLU()
      (fc2): Linear(in_features=1024, out_features=1024, bias=True)
      (fc_relu2): ReLU()
    )
    (box_predictor): FastRCNNOutputLayers(
      (cls_score): Linear(in_features=1024, out_features=2, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=4, bias=True)
    )
    (keypoint_pooler): ROIPooler(
      (level_poolers): ModuleList(
        (0): ROIAlign(output_size=(14, 14), spatial_scale=0.25, sampling_ratio=0, aligned=True)
        (1): ROIAlign(output_size=(14, 14), spatial_scale=0.125, sampling_ratio=0, aligned=True)
        (2): ROIAlign(output_size=(14, 14), spatial_scale=0.0625, sampling_ratio=0, aligned=True)
        (3): ROIAlign(output_size=(14, 14), spatial_scale=0.03125, sampling_ratio=0, aligned=True)
      )
    )
    (keypoint_head): KRCNNConvDeconvUpsampleHead(
      (conv_fcn1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu1): ReLU()
      (conv_fcn2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu2): ReLU()
      (conv_fcn3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu3): ReLU()
      (conv_fcn4): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu4): ReLU()
      (conv_fcn5): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu5): ReLU()
      (conv_fcn6): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu6): ReLU()
      (conv_fcn7): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu7): ReLU()
      (conv_fcn8): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (conv_fcn_relu8): ReLU()
      (score_lowres): ConvTranspose2d(512, 4, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    )
  )
)
WARNING [09/13 10:54:00 d2.data.datasets.coco]: 
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.

[09/13 10:54:00 d2.data.datasets.coco]: Loaded 618 images in COCO format from /content/drive/MyDrive/train_set/train.json
[09/13 10:54:00 d2.data.build]: Removed 0 images with no usable annotations. 618 images left.
[09/13 10:54:00 d2.data.build]: Removed 0 images with fewer than 1 keypoints.
[09/13 10:54:00 d2.data.build]: Distribution of instances among all 1 categories:
|  category  | #instances   |
|:----------:|:-------------|
|    zone    | 618          |
|            |              |
[09/13 10:54:00 d2.data.dataset_mapper]: [DatasetMapper] Augmentations used in training: [ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'), RandomFlip()]
[09/13 10:54:00 d2.data.build]: Using training sampler TrainingSampler
[09/13 10:54:00 d2.data.common]: Serializing 618 elements to byte tensors and concatenating them all ...
[09/13 10:54:00 d2.data.common]: Serialized dataset takes 0.20 MiB
WARNING [09/13 10:54:00 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.
[09/13 10:54:02 d2.engine.train_loop]: Starting training from iteration 0
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  /pytorch/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)
/usr/local/lib/python3.7/dist-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
[09/13 10:54:31 d2.utils.events]:  eta: 0:44:50  iter: 19  total_loss: 0.04612  loss_cls: 0  loss_box_reg: 0  loss_keypoint: 0  loss_rpn_cls: 0.04612  loss_rpn_loc: 0  time: 1.3853  data_time: 0.0675  lr: 4.9953e-06  max_mem: 2934M

  3. Please simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.

Expected behavior:

NO WARNING ABOUT KEYPOINT LOSS

Environment:

Provide your environment information using the following command:

google colab GPU

If your issue looks like an installation issue / environment issue, please first try to solve it yourself with the instructions in https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues

closed time in 4 days

eaedk

issue comment facebookresearch/detectron2

Bug of performance in trainer.model and predictor.model

That depends on the code & config that create the trainer and the predictor. With the same config options & model weights, they are expected to produce the same results. If you need help to resolve an unexpected issue you observed, please include details following the "Unexpected behaviors" issue template.

Nurungyi

comment created time in 4 days

pull request comment facebookresearch/detectron2

Filter out a UserWarning in torch-1.9

According to the CI, this breaks PyTorch 1.7 support, so we cannot merge it for now.

Johnqczhang

comment created time in 4 days

pull request comment facebookresearch/detectron2

Fix reduction of cross_entropy

Thanks for the fix!

sukjunhwang

comment created time in 4 days

pull request comment facebookresearch/detectron2

Feat add bbox scale factor

Addressed in the comments of https://github.com/facebookresearch/detectron2/issues/3428

MichalHek

comment created time in 4 days

PR closed facebookresearch/detectron2

Feat add bbox scale factor [label: CLA Signed]

Add a configurable scaling factor that enables scaling the RoI box heads in the Region Proposal Network. This is useful for avoiding cropped RoI box detections. It enables tight detections, followed by better segmentation masks.

A feature proposal was opened in issues: https://github.com/facebookresearch/detectron2/issues/3428

+18 -7

2 comments

4 changed files

MichalHek

pr closed time in 4 days

issue closed facebookresearch/detectron2

Missing file: box_iou_rotated_cuda.obj

After installing all dependencies, I ran the command python setup.py build develop

and got the error:

(base) C:\Users\vorte\Downloads\detectron2-main>python setup.py build develop
c:\users\vorte\anaconda3\lib\site-packages\numpy\_distributor_init.py:32: UserWarning: loaded more than 1 DLL from .libs:
c:\users\vorte\anaconda3\lib\site-packages\numpy\.libs\libopenblas.NOIJJG62EMASZI6NYURL6JBKM4EVBGM7.gfortran-win_amd64.dll
c:\users\vorte\anaconda3\lib\site-packages\numpy\.libs\libopenblas.WCDJNK7YVMPZQ2ME2ZZHJJRJ3JIKNDB7.gfortran-win_amd64.dll
  stacklevel=1)
running build
running build_py
running build_ext
c:\users\vorte\anaconda3\lib\site-packages\torch\utils\cpp_extension.py:287: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
  warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'detectron2._C' extension
Emitting ninja build file C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.2.git.kitware.jobserver-1
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\users\vorte\anaconda3\lib\site-packages\torch\lib "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\lib/x64" /LIBPATH:C:\Users\vorte\Anaconda3\libs /LIBPATH:C:\Users\vorte\Anaconda3\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\um\x64" c10.lib torch.lib torch_cpu.lib torch_python.lib cudart.lib c10_cuda.lib torch_cuda.lib /EXPORT:PyInit__C C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\vision.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\box_iou_rotated\box_iou_rotated_cpu.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\cocoeval\cocoeval.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\nms_rotated\nms_rotated_cpu.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\ROIAlignRotated\ROIAlignRotated_cpu.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\box_iou_rotated\box_iou_rotated_cuda.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\deformable\deform_conv_cuda.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\deformable\deform_conv_cuda_kernel.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\nms_rotated\nms_rotated_cuda.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\ROIAlignRotated\ROIAlignRotated_cuda.obj C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\cuda_version.obj /OUT:build\lib.win-amd64-3.6\detectron2\_C.cp36-win_amd64.pyd /IMPLIB:C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\_C.cp36-win_amd64.lib
LINK : fatal error LNK1181: cannot open input file "C:\Users\vorte\Downloads\detectron2-main\build\temp.win-amd64-3.6\Release\Users\vorte\Downloads\detectron2-main\detectron2\layers\csrc\box_iou_rotated\box_iou_rotated_cuda.obj"
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe' failed with exit status 1181

"Instructions To Reproduce the Issue and Full Logs" pip install --ignore-installed -e .

closed time in 4 days

Louis24

issue comment facebookresearch/detectron2

Missing file: box_iou_rotated_cuda.obj

As https://detectron2.readthedocs.io/en/latest/tutorials/install.html#common-installation-issues says, we don't provide support for issues on Windows.

Louis24

comment created time in 4 days

issue closed facebookresearch/detectron2

Potential Issue in NMS Implementation

Not sure if this counts as a bug, but I found that in the implementation of NMS there might be a corner case where a bbox is assigned several different classes after NMS during inference (in the faster_rcnn_inference_single_image() function).

Here's an example: if bbox B has prediction scores higher than nms_thresh for both class A (score_A) and class B (score_B), and at the same time score_A ranks in the top K among all the bboxes for class A (and likewise for score_B in class B), you will end up with two duplicate boxes B, one predicted as class A and one as class B. I'm wondering whether this is expected behavior for multi-class NMS or an implementation issue.

Please correct me if I'm wrong, thanks!

Yujia

closed time in 4 days

IssacCyj

issue comment facebookresearch/detectron2

Potential Issue in NMS Implementation

This is an expected behavior.
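For readers wondering why: NMS here is applied per class, so boxes only suppress other boxes of the same class, and one physical box can survive under several class hypotheses. A small illustration using torchvision's batched_nms (an analogous API, not the exact detectron2 code path):

import torch
from torchvision.ops import batched_nms

# the same physical box proposed under two different classes
boxes = torch.tensor([[0., 0., 10., 10.],
                      [0., 0., 10., 10.]])
scores = torch.tensor([0.9, 0.8])
classes = torch.tensor([0, 1])

keep = batched_nms(boxes, scores, classes, iou_threshold=0.5)
print(keep)  # tensor([0, 1]): both survive, because suppression is per class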

IssacCyj

comment created time in 4 days

push event ppwwyyxx/fvcore

Yuxin Wu

commit sha e05980b721dff2e0a9a21040b4cc51857b24cffb

flop count for aten:mm

Reviewed By: bxiong1202
Differential Revision: D29756579
fbshipit-source-id: 328fa97ea4f612c649e725695aca0b524fe66399

view details

Pyre Bot Jr

commit sha 2d073f1b713f6ff3bb310af9a4313a5e8e03f49c

suppress errors in `fbcode/vision` - batch 1

Differential Revision: D29827499
fbshipit-source-id: b67ea1050ba6ab9998e20741a69c194cf767108d

view details

Pyre Bot Jr

commit sha f768f7a01fa1187862e43aba440c51d2a1655fd4

suppress errors in `vision/fair/fvcore`

Differential Revision: D29929127
fbshipit-source-id: 80e9171808f095e2fca1c8cdb897b9c28ce8cef3

view details

Yuxin Wu

commit sha 9958c1df66ac6c1a995a15a8f150d2eba6f7cc84

fix flop counter for trivial module

Summary: otherwise it fails.
Reviewed By: haooooooqi
Differential Revision: D29981975
fbshipit-source-id: dab1c8e404f4dfa7ec8eb373077009f435259dbf

view details

Pyre Bot Jr

commit sha 4763d31152ac97398ddeb1fd2bf2982dca1e1bf8

suppress errors in `fbcode/vision` - batch 2

Differential Revision: D30071832
fbshipit-source-id: ba71b104d87ace3e5049f1004d328490c5050140

view details

Zhipeng Fan

commit sha f10a1646bc74309e26641c75277e098e2d7900f6

Fix padding value to zero for segmentations in PadTransform

Summary: Fix the padding value in apply_segmentation in PadTransform to zero, instead of adopting the padding value used in apply_image, which is often set to 128.
Reviewed By: ppwwyyxx
Differential Revision: D30199688
fbshipit-source-id: 2ee857f096b506ac87842f98428f018010774dbf

view details

Pyre Bot Jr

commit sha e5a3815eb1ae4562b2f47605e6ca43e8e6e87585

suppress errors in `fbcode/vision` - batch 2

Differential Revision: D30514671
fbshipit-source-id: 5c92206688b3ded06c7272e4dd7dd2c713d5d344

view details

Yuxin Wu

commit sha 20d5713229e4d9387c451476071530be8d816338

flop counter fix

Summary:
* skip an unintended warning
* skip a no-op operator
Reviewed By: bxiong1202
Differential Revision: D30952964
fbshipit-source-id: b04d26e91c30c0761654b98e1c9d33157d763fcd

view details

Yuxin Wu

commit sha 0af987135de95d334c08f841ec1dc4c3d08f19a4

add option to count MAC (default to True) instead of flops (#77)

Summary: Pull Request resolved: https://github.com/facebookresearch/fvcore/pull/77
MACs and FLOPs are different concepts but often misused. This should make it more clear that FlopCounter is actually counting MACs. Maybe we should even change the default, but that's a different decision to make.
Differential Revision: D28859722
fbshipit-source-id: e0a1ec37ce1888a5ab8138edd9301e59959c3d8f

view details

push time in 6 days

issue comment facebookresearch/detectron2

loading after saving lazyconfig throws an error No module named 'detectron2._cfg_loader1c03'

This is as expected. Because the config is a Python file, there is no way to guarantee that the config can be correctly saved to a YAML file, since YAML has a much more limited syntax.
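A sketch of the pattern under discussion (the file paths are hypothetical; LazyConfig.save is best-effort, and configs containing Python-only constructs may not load back from the dumped YAML):

from detectron2.config import LazyConfig

cfg = LazyConfig.load("my_config.py")   # hypothetical python config
LazyConfig.save(cfg, "dumped.yaml")     # best-effort dump to YAML
cfg2 = LazyConfig.load("dumped.yaml")   # may fail, as described in this issue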

darwinharianto

comment created time in 8 days