If you are wondering where the data on this site comes from, please visit https://api.github.com/users/aquariusjay/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

tensorflow/models 71685

Models and examples built with TensorFlow

google-research/deeplab2 485

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.

bowenc0221/panoptic-deeplab 436

This is the PyTorch re-implementation of our CVPR 2020 paper "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation" (https://arxiv.org/abs/1911.10194).

aquariusjay/models 2

Models and examples built with TensorFlow

issue comment google-research/deeplab2

Is this tensorflow lite compatible?

Hi @CodeMonkey3435,

We have not experimented with TF-Lite yet, and thus are not sure how compatible the code is.
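
For anyone who wants to experiment, a minimal, untested sketch using the stock TF-Lite converter could look like the following (the SavedModel path is a placeholder, and DeepLab2's custom ops may require the SELECT_TF_OPS fallback):

import tensorflow as tf

# Placeholder path: point this at your own exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/deeplab2_saved_model')
# Fall back to full TF ops for anything the TF-Lite builtins cannot express.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
with open('/tmp/deeplab2.tflite', 'wb') as f:
  f.write(tflite_model)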

Cheers,

CodeMonkey3435

comment created 19 days ago

issue comment google-research/deeplab2

Possible MaX-DeepLab-L-Backbone ImageNet checkpoint issue

Hi @louisquinn,

Could you please try running the provided configs (MaX-DeepLab-S and MaX-DeepLab-L) on COCO, and see what happens? Also, please make sure that the new experiments do not load the checkpoints from the old experiments.

Cheers,

louisquinn

comment created 19 days ago

issue comment google-research/deeplab2

Mismatch between train and eval loss results

Hi @Notou,

Could you please try running the model on some academic datasets (e.g., Cityscapes)? We have no idea what your data is or what is happening there (e.g., overfitting, or whether stronger data augmentation is needed). As pointed out, the train loss is evaluated batchwise, while the eval loss is evaluated on the whole eval set. Another indicator to check is the evaluation accuracy curve, which may be more informative than the eval loss.
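
To make the batchwise-versus-whole-set point concrete, here is a toy sketch with made-up numbers; any single batch can land well above or below the full-set mean:

import numpy as np

# Hypothetical per-example losses drawn from a skewed distribution.
rng = np.random.default_rng(0)
per_example_loss = rng.gamma(shape=2.0, scale=0.5, size=1024)
print('one-batch train loss:', per_example_loss[:32].mean())  # one 32-sample batch
print('whole-set eval loss :', per_example_loss.mean())       # all 1024 examples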

Cheers,

Notou

comment created 19 days ago

issue comment google-research/deeplab2

Cannot do evaluation on vip-deeplab

Hi @HarborYuan, would you please share the transpose-based solution you used to fix the evaluation issue? This would be helpful for other users as well. Thanks,

HarborYuan

comment created 19 days ago

issue comment google-research/deeplab2

Problems about how to use optical flow in iou_tracker.py

Closing the issue, as there has been no active discussion for a while. Please feel free to reopen it or create a new one if you still have any questions.

Dash-coder

comment created a month ago

issue closed google-research/deeplab2

Problems about how to use optical flow in iou_tracker.py

I want to use the optional parameter optical_flow when running iou_tracker.py to evaluate STQ, but it involves two files with different suffixes: _OCCLUSION_EXT = '.occ_forward' and _FLOW_EXT = '.flow_forward'. I would like to know the meaning of these two files and how to generate them from an optical flow map.

closed a month ago

Dash-coder

issue comment google-research/deeplab2

Cannot place MergeSemanticAndInstanceMaps op on GPU

Hi @ilaripih,

Thanks for reporting the issue. Maybe you could try running the provided unit test to make sure you can run the operation on GPU?
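
As a generic sanity check (independent of the deeplab2 unit test), you could also log device placement; the matmul below is only a stand-in for the custom op:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # prints the device chosen per op
with tf.device('/GPU:0'):
  # Stand-in computation; substitute the MergeSemanticAndInstanceMaps call
  # once the custom op is built and importable.
  x = tf.random.uniform([4, 4])
  y = tf.matmul(x, x)
print(y.device)  # expect a '/device:GPU:0' suffix if placement succeeded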

Cheers,

ilaripih

comment created a month ago

issue closed google-research/deeplab2

Setting panoptic_label_divisor=None gives error

I am trying to run training on the Cityscapes dataset with only semantic segmentation annotations. I have set panoptic_label_divisor=None in dataset.py, and I am using ${DEEPLAB2}/configs/cityscapes/panoptic_deeplab/resnet50_os32_semseg.textproto, where the instance branch is not instantiated.

When running:

python trainer/train.py \
  --config_file=deeplab2/configs/cityscapes/panoptic_deeplab/resnet50_os32_semseg.textproto \
  --mode=train \
  --model_dir=deeplab2/model \
  --num_gpus=1

I get the following error:

Traceback (most recent call last):
  File "trainer/train.py", line 81, in <module>
    app.run(main)
  File "/home/mimi/Envs/deeplab2/lib/python3.6/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/home/mimi/Envs/deeplab2/lib/python3.6/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "trainer/train.py", line 77, in main
    FLAGS.num_gpus)
  File "/home/mimi/code/deeplab2_proj/deeplab2/trainer/train_lib.py", line 137, in run_experiment
    trainer = trainer_lib.Trainer(config, deeplab_model, losses, global_step)
  File "/home/mimi/code/deeplab2_proj/deeplab2/trainer/trainer.py", line 137, in __init__
    only_semantic_annotations=not support_panoptic)
  File "/home/mimi/code/deeplab2_proj/deeplab2/trainer/runner_utils.py", line 122, in create_dataset
    focus_small_instances=focus_small_instances)
  File "/home/mimi/code/deeplab2_proj/deeplab2/data/sample_generator.py", line 127, in __init__
    *self._dataset_info['panoptic_label_divisor'],
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'

Any suggestions as to what may be the problem?
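
For context, the traceback boils down to multiplying an int by None; a minimal, hypothetical reproduction of the failure mode:

ignore_label = 255             # illustrative value, not from the config
panoptic_label_divisor = None  # the setting that triggers the error
try:
  _ = ignore_label * panoptic_label_divisor
except TypeError as err:
  print(err)  # unsupported operand type(s) for *: 'int' and 'NoneType'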

closed a month ago

mi2celis

issue comment google-research/deeplab2

Setting panoptic_label_divisor=None gives error

Closing the issue, as there has been no active discussion for a while. Please feel free to reopen it or create a new one if you still have any questions.

mi2celis

comment created a month ago

issue closed google-research/deeplab2

About the performance on cityscapes panoptic segmentation of MaX-DeepLab

Hi,

Thanks for your great work on creating this codebase. I am trying to use MaX-DeepLab in my own project, and I want to ask what the results of MaX-DeepLab on the Cityscapes panoptic segmentation val set would be like, if you have tested it before. I would really appreciate it if you could report this, even if it is just an unofficial approximate result.

Thanks.

closed a month ago

HarborYuan

issue comment google-research/deeplab2

About the performance on cityscapes panoptic segmentation of MaX-DeepLab

Closing the issue, as there has been no active discussion for a while. Please feel free to reopen it or create a new one if you have any more questions.

HarborYuan

comment created a month ago

issue comment google-research/deeplab2

Tfrecords not getting read during training

Closing the issue, as there has been no active discussion for a while. Please feel free to reopen it or create a new one if you have any questions.

Kishaan

comment created a month ago

issue closed google-research/deeplab2

Tfrecords not getting read during training

I'm getting this error while running:

python deeplab2/trainer/train.py \
  --config_file "deeplab2/configs/coco/max_deeplab/max_deeplab_s_os16_res641_400k.textproto" \
  --mode "train" \
  --model_dir checkpoint/max_deeplab_s_os16_res641_400k_coco_train \
  --num_gpus 0

Traceback (most recent call last):
  File "/volumes2/Other/deeplab2/trainer/train.py", line 76, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/volumes2/Other/deeplab2/trainer/train.py", line 72, in main
    FLAGS.num_gpus)
  File "/volumes2/Other/deeplab2/trainer/train_lib.py", line 191, in run_experiment
    steps=config.trainer_options.solver_options.training_number_of_steps)
  File "/volumes2/Other/models/orbit/controller.py", line 240, in train
    self._train_n_steps(num_steps)
  File "/volumes2/Other/models/orbit/controller.py", line 439, in _train_n_steps
    train_output = self.trainer.train(num_steps_tensor)
  File "/volumes2/Other/models/orbit/standard_runner.py", line 146, in train
    self._train_loop_fn(self._train_iter, num_steps)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 950, in _call
    return self._stateless_fn(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3040, in __call__
    filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1964, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 596, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.FailedPreconditionError:  /volumes2/Other/data; Is a directory
	 [[{{node MultiDeviceIteratorGetNextFromShard}}]]
	 [[RemoteCall]]
	 [[while/body/_1/IteratorGetNext]] [Op:__inference_loop_fn_123475]

Function call stack:
loop_fn


Process finished with exit code 1

/volumes2/Other/data has 3000 .tfrecord files as a result of running the build_coco_data.py script. Isn't that script supposed to produce just three sharded tfrecord files?

Any help would be much appreciated!
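
For what it's worth, "Is a directory" usually means a bare directory path reached a reader that expected file names; a hedged sketch of globbing the sharded TFRecords with tf.data (the directory is the one from the report):

import tensorflow as tf

# Use a file pattern rather than the bare directory.
files = tf.data.Dataset.list_files('/volumes2/Other/data/*.tfrecord')
dataset = tf.data.TFRecordDataset(files)
for record in dataset.take(1):
  print(len(record.numpy()), 'bytes in the first record')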

closed a month ago

Kishaan

issue closed google-research/deeplab2

Cannot be training?

Thank you for the great code! When I run the training command "python trainer/train.py --config_file='configs/coco/max_deeplab/max_deeplab_s_os16_res641_100k.textproto' --mode='train' --model_dir='checkpoint/max_deeplab_s_os16_res641_100k' --num_gpus=1", I get the following error logs:

I0814 02:46:35.469203 139737541474112 train.py:65] Reading the config file.
I0814 02:46:35.472224 139737541474112 train.py:69] Starting the experiment.
2021-08-14 02:46:35.472912: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-08-14 02:46:36.288440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 30987 MB memory: -> device: 0, name: Tesla V100-SXM2-32GB, pci bus id: 0000:88:00.0, compute capability: 7.0
I0814 02:46:36.301358 139737541474112 train_lib.py:105] Using strategy <class 'tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy'> with 1 replicas
I0814 02:46:36.621985 139737541474112 deeplab.py:57] Synchronized Batchnorm is used.
I0814 02:46:36.622916 139737541474112 axial_resnet_instances.py:144] Axial-ResNet final config: {'num_blocks': [3, 4, 6, 3], 'backbone_layer_multiplier': 1.0, 'width_multiplier': 1.0, 'stem_width_multiplier': 1.0, 'output_stride': 16, 'classification_mode': False, 'backbone_type': 'resnet_beta', 'use_axial_beyond_stride': 16, 'backbone_use_transformer_beyond_stride': 32, 'extra_decoder_use_transformer_beyond_stride': 32, 'backbone_decoder_num_stacks': 0, 'backbone_decoder_blocks_per_stage': 1, 'extra_decoder_num_stacks': 0, 'extra_decoder_blocks_per_stage': 1, 'max_num_mask_slots': 128, 'num_mask_slots': 128, 'memory_channels': 256, 'base_transformer_expansion': 1.0, 'global_feed_forward_network_channels': 256, 'high_resolution_output_stride': 4, 'activation': 'relu', 'block_group_config': {'attention_bottleneck_expansion': 2, 'drop_path_keep_prob': 0.800000011920929, 'drop_path_beyond_stride': 16, 'drop_path_schedule': 'linear', 'positional_encoding_type': None, 'use_global_beyond_stride': 0, 'use_sac_beyond_stride': -1, 'use_squeeze_and_excite': False, 'conv_use_recompute_grad': False, 'axial_use_recompute_grad': True, 'recompute_within_stride': 0, 'transformer_use_recompute_grad': False, 'axial_layer_config': {'query_shape': (129, 129), 'key_expansion': 1, 'value_expansion': 2, 'memory_flange': (32, 32), 'double_global_attention': False, 'num_heads': 8, 'use_query_rpe_similarity': True, 'use_key_rpe_similarity': True, 'use_content_similarity': True, 'retrieve_value_rpe': True, 'retrieve_value_content': True, 'initialization_std_for_query_key_rpe': 1.0, 'initialization_std_for_value_rpe': 1.0, 'self_attention_activation': 'softmax'}, 'dual_path_transformer_layer_config': {'num_heads': 8, 'bottleneck_expansion': 2, 'key_expansion': 1, 'value_expansion': 2, 'feed_forward_network_channels': 2048, 'use_memory_self_attention': True, 'use_pixel2memory_feedback_attention': True, 'transformer_activation': 'softmax'}}, 'bn_layer': functools.partial(<class 'keras.layers.normalization.batch_normalization.SyncBatchNormalization'>, momentum=0.9900000095367432, epsilon=0.0010000000474974513), 'conv_kernel_weight_decay': 0.0}
I0814 02:46:36.977007 139737541474112 deeplab.py:96] Setting pooling size to (41, 41)
I0814 02:46:36.977362 139737541474112 aspp.py:135] Global average pooling in the ASPP pooling layer was replaced with tiled average pooling using the provided pool_size. Please make sure this behavior is intended.
WARNING:tensorflow:AutoGraph could not transform <function resize_to_range at 0x7f16dcac2a70> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmp3onbzwhs.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W0814 02:46:40.066303 139737541474112 ag_logging.py:146] AutoGraph could not transform <function resize_to_range at 0x7f16dcac2a70> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmp3onbzwhs.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function get_random_scale at 0x7f16dcac25f0> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmpfij6_rpa.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W0814 02:46:40.161577 139737541474112 ag_logging.py:146] AutoGraph could not transform <function get_random_scale at 0x7f16dcac25f0> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmpfij6_rpa.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function randomly_scale_image_and_label at 0x7f16dcac2680> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmpjt5b9l39.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
W0814 02:46:40.261782 139737541474112 ag_logging.py:146] AutoGraph could not transform <function randomly_scale_image_and_label at 0x7f16dcac2680> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: [Errno 28] No space left on device: '/tmp/tmpjt5b9l39.py' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

Traceback (most recent call last):
  File "trainer/train.py", line 76, in <module>
    app.run(main)
  File "/data/anaconda3/lib/python3.7/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/data/anaconda3/lib/python3.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "trainer/train.py", line 72, in main
    FLAGS.num_gpus)
  File "/data/PanopticFCN-main/deeplab2/trainer/train_lib.py", line 137, in run_experiment
    trainer = trainer_lib.Trainer(config, deeplab_model, losses, global_step)
  File "/data/PanopticFCN-main/deeplab2/trainer/trainer.py", line 137, in __init__
    only_semantic_annotations=not support_panoptic)
  File "/data/PanopticFCN-main/deeplab2/trainer/runner_utils.py", line 130, in create_dataset
    return reader(dataset_config.batch_size)
  File "/data/PanopticFCN-main/deeplab2/data/dataloader/input_reader.py", line 89, in __call__
    self._generator_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1868, in map
    preserve_cardinality=True)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 5024, in __init__
    use_legacy_function=use_legacy_function)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4218, in __init__
    self._function = fn_factory()
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3151, in get_concrete_function
    *args, **kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3116, in _get_concrete_function_garbage_collected
    graph_function, _ = self._maybe_define_function(args, kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3308, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4195, in wrapped_fn
    ret = wrapper_helper(*args)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4125, in wrapper_helper
    ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
  File "/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
    raise e.ag_error_metadata.to_exception(e)
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: in user code:

/data/PanopticFCN-main/deeplab2/data/sample_generator.py:159 __call__  *
    return self.call(**sample_dict)
/data/PanopticFCN-main/deeplab2/data/sample_generator.py:215 call  *
    resized_image, image, label, prev_image, prev_label, depth = (
/data/PanopticFCN-main/deeplab2/data/preprocessing/input_preprocessing.py:266 preprocess_image_and_label  *
    processed_image, label = preprocess_utils.randomly_scale_image_and_label(
/data/PanopticFCN-main/deeplab2/data/preprocessing/preprocess_utils.py:267 randomly_scale_image_and_label  **
    if scale == 1.0:
/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:900 __bool__
    self._disallow_bool_casting()
/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:507 _disallow_bool_casting
    self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
/data/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:496 _disallow_in_graph_mode
    " this function with @tf.function.".format(task))

OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

I'd appreciate it if anyone could tell me how to solve it.
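
For reference, the offending pattern is a Python `if` on a symbolic tensor inside a traced function; a hypothetical graph-safe rewrite using tf.cond (names are illustrative, not the deeplab2 implementation):

import tensorflow as tf

@tf.function
def maybe_scale(image, scale):
  # tf.cond replaces the Python-level `if scale == 1.0:` that fails in graph mode.
  def keep():
    return image
  def resize():
    new_size = tf.cast(
        tf.cast(tf.shape(image)[:2], tf.float32) * scale, tf.int32)
    return tf.image.resize(image, new_size)
  return tf.cond(tf.equal(scale, 1.0), keep, resize)

image = tf.random.uniform([65, 65, 3])
print(maybe_scale(image, tf.constant(0.5)).shape)  # (32, 32, 3)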

closed a month ago

pingjun18-li

issue comment google-research/deeplab2

Cannot be training?

Closing the issue, since there has been no active discussion for a while. Please feel free to re-open it if you have any questions.

pingjun18-li

comment created a month ago

issue closed google-research/deeplab2

Questions about batch_norm_on_an_extra_axis

Hi,

Thanks for your great work~ I just have some questions about the "batch_norm_on_an_extra_axis" function in the "max_deeplab.py" file. I am wondering how much this operation affects the results. Also, could you please check whether the following PyTorch-based implementation achieves the same effect as the current deeplab2 version? Thanks a lot~

import torch.nn as nn
from einops import rearrange

self.bn = nn.BatchNorm2d(1)  # BN over a singleton channel axis

batch_size, slot_num = pixel_space_mask_logits.size(0), pixel_space_mask_logits.size(1)
# Fold slots into the batch axis, add a unit channel axis, normalize, then undo.
pixel_space_mask_logits = self.bn(rearrange(pixel_space_mask_logits, "b l h w -> (b l) a h w", a=1))
pixel_space_mask_logits = rearrange(pixel_space_mask_logits, "(b l) a h w -> b l h w", b=batch_size, l=slot_num)

Best Regards,

closed a month ago

BritaryZhou

issue comment google-research/deeplab2

Questions about batch_norm_on_an_extra_axis

Closing the issue; please feel free to reopen it or create a new one if you still have any questions.

BritaryZhou

comment created a month ago

issue comment google-research/deeplab2

VertexAi integration

Hi @rnditdev,

Thanks for the proposal. The plan sounds great to us. We are interested in learning more about it.

Cheers,

rnditdev

comment created a month ago

issue comment google-research/deeplab2

Weighted loss function?

Hi @rnditdev,

Thanks for the question. Unfortunately, we currently do not have any plan to support a class-weighted loss function. However, you could refer to our TensorFlow 1 implementation for such support. If you are interested in contributing this, it is very welcome, and we will be happy to review the PR.
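
If someone wants to prototype it before sending a PR, a generic class-weighted cross-entropy sketch could look as follows (the weights and shapes here are assumptions, not deeplab2 internals):

import tensorflow as tf

def weighted_semantic_loss(logits, labels, class_weights):
  # logits: [batch, height, width, num_classes]; labels: [batch, height, width].
  per_pixel = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits)
  weights = tf.gather(class_weights, labels)  # per-pixel weight via class id
  return tf.reduce_sum(per_pixel * weights) / tf.reduce_sum(weights)

logits = tf.random.normal([2, 8, 8, 3])
labels = tf.random.uniform([2, 8, 8], maxval=3, dtype=tf.int32)
print(weighted_semantic_loss(logits, labels, tf.constant([1.0, 2.0, 0.5])))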

Cheers,

rnditdev

comment created a month ago

issue comment google-research/deeplab2

Some question about the eval codes on Cityscapes test datasets?

Hi @pingjun18-li,

We currently only support multi-scale testing for Panoptic-DeepLab. You could specify eval_scales and left-right flips. Here is the link to the place where multi-scale inference is performed.
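
Conceptually, multi-scale testing averages the per-pixel logits over scales (and left-right flips); a generic sketch, where `model` is a stand-in callable returning semantic logits, not the deeplab2 code path:

import tensorflow as tf

def multi_scale_logits(model, image, eval_scales=(0.75, 1.0, 1.25), add_flip=True):
  height, width = image.shape[0], image.shape[1]
  total = None
  for scale in eval_scales:
    size = (int(height * scale), int(width * scale))
    scaled = tf.image.resize(image[tf.newaxis], size)
    logits = model(scaled)
    if add_flip:
      flipped_logits = model(tf.image.flip_left_right(scaled))
      logits = (logits + tf.image.flip_left_right(flipped_logits)) / 2.0
    logits = tf.image.resize(logits, (height, width))  # back to input resolution
    total = logits if total is None else total + logits
  return total / len(eval_scales)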

Cheers,

pingjun18-li

comment created a month ago

issue comment google-research/deeplab2

Some question about the eval codes on Cityscapes test datasets?

Hi @pingjun18-li,

Based on the provided error log, it seems that the code tries to read the segmentation groundtruth, which is not provided. Since you would like to generate panoptic segmentation results on unlabeled data such as the test set, maybe you could take a look at this tutorial.

Cheers,

pingjun18-li

comment created a month ago


Pull request review comment google-research/deeplab2

Add numpy implementation of Segmentation and Tracking Quality (STQ)

+# coding=utf-8
+# Copyright 2021 The Deeplab2 Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Implementation of the Segmentation and Tracking Quality (STQ) metric."""
+
+import collections
+from typing import Mapping, MutableMapping, Sequence, Text, Any
+import numpy as np
+
+_EPSILON = 1e-15
+
+
+def _update_dict_stats(stat_dict: MutableMapping[int, np.ndarray],
+                       id_array: np.ndarray):
+  """Updates a given dict with corresponding counts."""
+  ids, counts = np.unique(id_array, return_counts=True)
+  for idx, count in zip(ids, counts):
+    if idx in stat_dict:
+      stat_dict[idx] += count
+    else:
+      stat_dict[idx] = count
+
+
+class STQuality(object):
+  """Metric class for the Segmentation and Tracking Quality (STQ).
+
+  Please see the following paper for more details about the metric:
+
+  "STEP: Segmenting and Tracking Every Pixel", Weber et al., arXiv:2102.11859,
+  2021.
+
+
+  The metric computes the geometric mean of two terms.
+  - Association Quality: This term measures the quality of the track ID
+      assignment for `thing` classes. It is formulated as a weighted IoU
+      measure.
+  - Segmentation Quality: This term measures the semantic segmentation quality.
+      The standard class IoU measure is used for this.
+
+  Example usage:
+
+  stq_obj = segmentation_tracking_quality.STQuality(num_classes, things_list,
+    ignore_label, label_bit_shift, offset)
+  stq_obj.update_state(y_true_1, y_pred_1)
+  stq_obj.update_state(y_true_2, y_pred_2)
+  ...
+  result = stq_obj.result()
+  """
+
+  def __init__(self,
+               num_classes: int,
+               things_list: Sequence[int],
+               ignore_label: int,
+               label_bit_shift: int,
+               offset: int
+               ):
+    """Initialization of the STQ metric.
+
+    Args:
+      num_classes: Number of classes in the dataset as an integer.
+      things_list: A sequence of class ids that belong to `things`.
+      ignore_label: The class id to be ignored in evaluation as an integer or
+        integer tensor.
+      label_bit_shift: The number of bits the class label is shifted as an

I see your point. You are right about it. Sorry for the false alarm. Thanks for the clarification, Mark!

markweberdev

comment created 2 months ago


Pull request review comment google-research/deeplab2

Add numpy implementation of Segmentation and Tracking Quality (STQ)

+# coding=utf-8
+# Copyright 2021 The Deeplab2 Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Tests for segmentation_and_tracking_quality.py"""
+
+import unittest
+import numpy as np
+
+import segmentation_and_tracking_quality as stq

Maybe update the file-scope docstring in segmentation_and_tracking_quality.py to say that this file is stand-alone, so users could just copy the whole folder for their own use case.

markweberdev

comment created 2 months ago


Pull request review comment google-research/deeplab2

Add numpy implementation of Segmentation and Tracking Quality (STQ)

+# coding=utf-8
+# Copyright 2021 The Deeplab2 Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Implementation of the Segmentation and Tracking Quality (STQ) metric."""

Numpy implementation of the Segmentation and Tracking Quality (STQ) metric.

markweberdev

comment created 2 months ago

Pull request review comment google-research/deeplab2

Add numpy implementation of Segmentation and Tracking Quality (STQ)

+# coding=utf-8
+# Copyright 2021 The Deeplab2 Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Implementation of the Segmentation and Tracking Quality (STQ) metric."""
+
+import collections
+from typing import Mapping, MutableMapping, Sequence, Text, Any
+import numpy as np
+
+_EPSILON = 1e-15
+
+
+def _update_dict_stats(stat_dict: MutableMapping[int, np.ndarray],
+                       id_array: np.ndarray):
+  """Updates a given dict with corresponding counts."""
+  ids, counts = np.unique(id_array, return_counts=True)
+  for idx, count in zip(ids, counts):
+    if idx in stat_dict:
+      stat_dict[idx] += count
+    else:
+      stat_dict[idx] = count
+
+
+class STQuality(object):
+  """Metric class for the Segmentation and Tracking Quality (STQ).
+
+  Please see the following paper for more details about the metric:
+
+  "STEP: Segmenting and Tracking Every Pixel", Weber et al., arXiv:2102.11859,
+  2021.
+
+
+  The metric computes the geometric mean of two terms.
+  - Association Quality: This term measures the quality of the track ID
+      assignment for `thing` classes. It is formulated as a weighted IoU
+      measure.
+  - Segmentation Quality: This term measures the semantic segmentation quality.
+      The standard class IoU measure is used for this.
+
+  Example usage:
+
+  stq_obj = segmentation_tracking_quality.STQuality(num_classes, things_list,
+    ignore_label, label_bit_shift, offset)
+  stq_obj.update_state(y_true_1, y_pred_1)
+  stq_obj.update_state(y_true_2, y_pred_2)
+  ...
+  result = stq_obj.result()
+  """
+
+  def __init__(self,
+               num_classes: int,
+               things_list: Sequence[int],
+               ignore_label: int,
+               label_bit_shift: int,
+               offset: int
+               ):
+    """Initialization of the STQ metric.
+
+    Args:
+      num_classes: Number of classes in the dataset as an integer.
+      things_list: A sequence of class ids that belong to `things`.
+      ignore_label: The class id to be ignored in evaluation as an integer or
+        integer tensor.
+      label_bit_shift: The number of bits the class label is shifted as an

Even though label_bit_shift yields faster computation, I am worried that it is not compatible with our current settings in other places. More specifically, we use panoptic_label_divisor = 1000 for KITTI-STEP. If users train a model with that setting, how could they use this script to evaluate their exported results?
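
For readers hitting this, the two encodings are semantic * panoptic_label_divisor + instance versus (semantic << label_bit_shift) | instance; a hypothetical bridge, assuming the instance id fits within the shifted bits:

import numpy as np

def divisor_to_bit_shift(panoptic, panoptic_label_divisor=1000, label_bit_shift=16):
  # Re-encode semantic * divisor + instance as (semantic << shift) | instance.
  semantic = panoptic // panoptic_label_divisor
  instance = panoptic % panoptic_label_divisor
  return (semantic << label_bit_shift) | instance

panoptic = np.array([7 * 1000 + 25, 11 * 1000 + 3])  # made-up panoptic labels
print(divisor_to_bit_shift(panoptic))  # [458777 720899]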

markweberdev

comment created 2 months ago