Hadrien Mary (hadim) · InvivoAI · Montréal · https://www.invivoai.com/ · Biophysicist playing with small molecules at @invivoai

hadim/docker-omero 11

A set of Dockerfiles to build a ready-to-use OMERO server.

brouhardlab/Kappa 7

A Fiji plugin for Curvature Analysis.

brouhardlab/anamic 4

Simulate, fit and analyze microtubules.

fiji/FilamentDetector 2

A Fiji plugin that allows easy, fast and accurate detection and tracking of biological filaments.

hadim/bain 2

A DIY IoT wireless sensor for temperature, humidity and pressure.

hadim/captchanet 1

A simple yet efficient neural network for solving captcha images.

hadim/docker-github-backup 1

A Docker image that runs github-backup periodically using cron and s6-overlay.

hadim/argo 0

Argo Workflows: Get stuff done with Kubernetes.

hadim/argo-client-python 0

Python client for Argo Workflows

push event: PyTorchLightning/pytorch-lightning

tchaton

commit sha 0a2efb22c90bbff4c152db35ba8122b531021bcd

update on comments

pushed 2 minutes ago

issue comment: encode/django-rest-framework

Schema generation - prefer 'multipart' if file field exists

This issue is also affecting me. Maybe an option would be to have a `get_parser_classes` method, akin to `get_serializer_class`, that I could override on a viewset?
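A rough sketch of what that proposed hook could look like, assuming a hypothetical `get_parser_classes` method were added to DRF (it does not exist today; the names below simply mirror the existing `get_serializer_class` pattern and are purely illustrative):

    # Hypothetical sketch -- `get_parser_classes` is not a real DRF hook today.
    from rest_framework import parsers, viewsets

    class UploadViewSet(viewsets.ModelViewSet):
        # queryset and serializer_class omitted for brevity
        parser_classes = [parsers.JSONParser]

        def get_parser_classes(self):
            # Prefer multipart parsing for actions that accept file uploads,
            # the same way get_serializer_class can switch serializers per action.
            if self.action in ("create", "update", "partial_update"):
                return [parsers.MultiPartParser, parsers.FormParser]
            return self.parser_classes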

tomchristie

comment created 2 minutes ago

pull request comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

Can you create issues for the pending TODOs?

Yes :)

tchaton

comment created 4 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[bugfix] Resolve bug with multiple optimizers and toggle.

     def toggle_optimizer(self, optimizer: Optimizer, optimizer_idx: int):

         Override for your own behavior

+        It works with ``untoggle_optimizer`` to make sure param_requires_grad_state is properly reset.
+
         Args:
-            optimizer:
-            optimizer_idx:
+            optimizer: Current optimizer used in training_loop
+            optimizer_idx: Current optimizer idx in training_loop
         """
-        for param in self.parameters():
-            param.requires_grad = False
+        param_requires_grad_state = {}
+        # make sure current optimizer is latest to be iterated over.
+        optimizers = [opt for opt in self.optimizers(use_pl_optimizer=False) if opt != optimizer] + [optimizer]
+        num_optimizers = len(optimizers) - 1
+        for opt_idx, opt in enumerate(optimizers):
+            for group in opt.param_groups:
+                for param in group['params']:
+                    if num_optimizers == opt_idx:

I don't understand what you mean :)

tchaton

comment created 4 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[bugfix] Resolve bug with multiple optimizers and toggle.

 def toggle_optimizer(self, optimizer: Optimizer, optimizer_idx: int): 

Do you plan to do it in this PR?

tchaton

comment created 8 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def __test_given_model(self, model, test_dataloaders):

         return results

+    def predict(
+        self,
+        model: Optional[LightningModule] = None,
+        dataloaders: Optional[Union[DataLoader, List[DataLoader]]] = None,
+    ):
+        r"""
+
+        Separates from fit to make sure you never run on your predictions set until you want to.
+
+        This will call the model forward function to compute predictions.
+
+        Args:
+            model: The model to predict on.
+
+            dataloaders: Either a single
+                Pytorch Dataloader or a list of them, specifying inference samples.
+
+        Returns:
+            The final test result dictionary. If no test_epoch_end is defined returns a list of dictionaries
+        """
+
+        # --------------------
+        # SETUP HOOK
+        # --------------------
+        if not (
+            isinstance(dataloaders, DataLoader)
+            or isinstance(dataloaders, (list, tuple))
+            and all(isinstance(d, DataLoader) for d in dataloaders)
+        ):
+            raise MisconfigurationException(
+                'You need to pass a dataloader or a list of dataloaders to `trainer.predict`. '
+            )
+
+        if model is None:
+            raise MisconfigurationException(
+                'You need to pass a model to `trainer.predict`. '
+            )
+
+        # attach data
+        if dataloaders is not None:
+            self.data_connector.attach_dataloaders(model, test_dataloaders=dataloaders)
+
+        # set path variable
+        self.is_predicting = True
+        os.environ['PL_TESTING_MODE'] = '1'
+        self.model = model
+
+        results = self.fit(model)

question: if the Trainer is set to run for N epochs, will this run for N epochs?

tchaton

comment created 13 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

(same `predict` hunk as above, ending at:)

+        # attach data
+        if dataloaders is not None:
+            self.data_connector.attach_dataloaders(model, test_dataloaders=dataloaders)
+
+        # set path variable
+        self.is_predicting = True

Same here

tchaton

comment created 13 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def __init__(

         self._device_type = DeviceType.CPU
         self._distrib_type = None
         self._running_stage = None
+        self.is_predicting = False

It is a limitation of the current routing system. I will take care of this in another PR.

tchaton

comment created 13 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

 # See the License for the specific language governing permissions and
 # limitations under the License.
 from collections import defaultdict
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional

 import torch

 from pytorch_lightning.core.step_result import Result
+from pytorch_lightning.trainer.states import RunningStage
 from pytorch_lightning.utilities import DistributedType, LightningEnum


-class LoggerStages(LightningEnum):

Fine!

tchaton

comment created 19 minutes ago

push event: PyTorchLightning/pytorch-lightning-bolts

Jirka Borovec

commit sha 7e2662607dc4d44f848c52f328362d2e041bcde7

fix

pushed 20 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def validation_step(self, args):

     def test_step(self, args):
         return self._step(args)

+    def forward(self, args):
+        return self._step(args)
+

I kept it this way for consistency. If we decide to do so, let's move them all to the base accelerator, but that will be another PR.

tchaton

comment created 24 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

 def test_trainer_profiler_incorrect_arg_type(profiler):
                        match=r"Only None, bool, str and subclasses of `BaseProfiler`"
                              r" are valid values for `Trainer`'s `profiler` parameter. *"):
         Trainer(profiler=profiler)
+
+
+def test_pytorch_profiler(tmpdir):
+    class TestModel(BoringModel):
+        def training_step(self, batch, batch_idx):
+            output = self.layer(batch)
+            loss = self.loss(batch, output)
+            return {"loss": loss}
+
+    model = TestModel()

We can use BoringModel()

tchaton

comment created 27 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

     def __del__(self):
         """Close profiler's stream."""
         if self.output_file:
             self.output_file.close()
+
+
+class PytorchProfiler(BaseProfiler):
+    """
+    This profiler uses PyTorch's Autograd Profiler and let's you inspect the cost of

Suggested change:
    This profiler uses PyTorch's Autograd Profiler and lets you inspect the cost of
tchaton

comment created 29 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

 def test_trainer_profiler_incorrect_arg_type(profiler):
                        match=r"Only None, bool, str and subclasses of `BaseProfiler`"
                              r" are valid values for `Trainer`'s `profiler` parameter. *"):
         Trainer(profiler=profiler)
+
+
+def test_pytorch_profiler(tmpdir):
+    class TestModel(BoringModel):
+        def training_step(self, batch, batch_idx):
+            output = self.layer(batch)
+            loss = self.loss(batch, output)
+            return {"loss": loss}
+
+    model = TestModel()
+
+    limit_train_batches = 2
+    trainer = Trainer(
+        default_root_dir=tmpdir,
+        limit_train_batches=limit_train_batches,
+        limit_val_batches=2,
+        max_epochs=1,
+        profiler='pytorch'
+    )
+
+    trainer.fit(model)

Let's capture the profiler output and run some basic assertions
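A hedged sketch of what such an assertion could look like, assuming `PytorchProfiler` accepts an `output_filename` as described in the docs hunk further down (illustrative only, not the actual test added in the PR):

    # Illustrative sketch: write the report to a file and check it mentions the profiled hook.
    import os
    from pathlib import Path

    output_filename = os.path.join(tmpdir, "profiler.txt")
    profiler = PytorchProfiler(output_filename=output_filename)
    trainer = Trainer(
        default_root_dir=tmpdir,
        limit_train_batches=2,
        limit_val_batches=2,
        max_epochs=1,
        profiler=profiler,
    )
    trainer.fit(model)

    report = Path(output_filename).read_text()
    assert "training_step" in report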

tchaton

comment created 26 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

     def validation_step(self, batch, batch_idx):

         weights_summary=None,
         accelerator="ddp",
         gpus=2,
+        profiler="pytorch"

what is the point of adding it here? does this cover a case which is not covered in the other tests?

tchaton

comment created 27 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

 def custom_processing_step(self, data):

     model = MyModel(profiler)
     trainer = Trainer(profiler=profiler, max_epochs=1)

+
+PyTorch Profiling
+--------------------
+
+Autograd includes a profiler that lets you inspect the cost of different operators
+inside your model - both on the CPU and GPU.
+
+.. code-block:: python
+
+    trainer = Trainer(..., profiler="pytorch")
+
+    or
+
+    profiler = PytorchProfiler(
+            output_filename = ...
+            enabled = ...
+            use_cuda = ...
+            record_shapes = ...
+            profile_memory = ...
+            with_stack = ...
+            use_kineto = ...
+            use_cpu = ...
+            emit_nvtx = ...
+            export_to_chrome = ...
+            sort_by_key = ...
+            path_to_export_trace = ...
+    )
+    trainer = Trainer(..., profiler=profiler)
+
+The profiler's results will be printed at the completion of a training `fit()`. This profiler
+report can be quite long, so you can also specify an `output_filename` to save the report instead
+of logging it to the output in your terminal. The output below shows the profiling for the action
+`get_train_batch`.
+This profiler will record only for `training_step`, `evaluation_step` and `test_step` functions.
+
+.. code-block:: python
+
+    Profiler Report
+
+    Profile stats for: training_step
+    ---------------------  ---------------  ---------------  ---------------  ---------------  ---------------
+    Name                   Self CPU total %  Self CPU total   CPU total %      CPU total        CPU time avg
+    ---------------------  ---------------  ---------------  ---------------  ---------------  ---------------
+    t                      62.10%           1.044ms          62.77%           1.055ms          1.055ms
+    addmm                  32.32%           543.135us        32.69%           549.362us        549.362us
+    mse_loss               1.35%            22.657us         3.58%            60.105us         60.105us
+    mean                   0.22%            3.694us          2.05%            34.523us         34.523us
+    div_                   0.64%            10.756us         1.90%            32.001us         16.000us
+    ones_like              0.21%            3.461us          0.81%            13.669us         13.669us
+    sum_out                0.45%            7.638us          0.74%            12.432us         12.432us
+    transpose              0.23%            3.786us          0.68%            11.393us         11.393us
+    as_strided             0.60%            10.060us         0.60%            10.060us         3.353us
+    to                     0.18%            3.059us          0.44%            7.464us          7.464us
+    empty_like             0.14%            2.387us          0.41%            6.859us          6.859us
+    empty_strided          0.38%            6.351us          0.38%            6.351us          3.175us
+    fill_                  0.28%            4.782us          0.33%            5.566us          2.783us
+    expand                 0.20%            3.336us          0.28%            4.743us          4.743us
+    empty                  0.27%            4.456us          0.27%            4.456us          2.228us
+    copy_                  0.15%            2.526us          0.15%            2.526us          2.526us
+    broadcast_tensors      0.15%            2.492us          0.15%            2.492us          2.492us
+    size                   0.06%            0.967us          0.06%            0.967us          0.484us
+    is_complex             0.06%            0.961us          0.06%            0.961us          0.481us
+    stride                 0.03%            0.517us          0.03%            0.517us          0.517us
+    ---------------------  ---------------  ---------------  ---------------  ---------------  ---------------
+    Self CPU time total: 1.681ms
+
+When running with `PytorchProfiler(emit_nvtx=True)`. You should run as following:
+
+nvprof --profile-from-start off -o trace_name.prof -- <regular command here>
+
+To visualize the profiled operation, you can either:
+
+* Use: nvvp trace_name.prof
+
+* Use: python -c 'import torch; print(torch.autograd.profiler.load_nvprof("trace_name.prof"))'
+
 """

-from pytorch_lightning.profiler.profilers import AdvancedProfiler, BaseProfiler, PassThroughProfiler, SimpleProfiler
+from pytorch_lightning.profiler.profilers import (
+    AdvancedProfiler,
+    BaseProfiler,
+    PassThroughProfiler,
+    PytorchProfiler,
+    SimpleProfiler,
+)

 __all__ = [
     'BaseProfiler',
     'SimpleProfiler',
     'AdvancedProfiler',
     'PassThroughProfiler',
+    "PytorchProfiler",

Maybe change it to PyTorchProfiler?

tchaton

comment created 30 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

(same PyTorch Profiling docs hunk as above, ending at:)

+The profiler's results will be printed at the completion of a training `fit()`. This profiler

Suggested change:
The profiler's results will be printed on the completion of a training `fit()`. This profiler
tchaton

comment created 32 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

(same PyTorch Profiling docs hunk as above, ending at:)

+    trainer = Trainer(..., profiler=profiler)

Suggested change:
    trainer = Trainer(..., profiler=PytorchProfiler(...))
tchaton

comment created 37 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

(same PyTorch Profiling docs hunk as above, ending at:)

+The profiler's results will be printed at the completion of a training `fit()`. This profiler
+report can be quite long, so you can also specify an `output_filename` to save the report instead
+of logging it to the output in your terminal. The output below shows the profiling for the action
+`get_train_batch`.
+This profiler will record only for `training_step`, `evaluation_step` and `test_step` functions.

Suggested change:
of logging it to the output in your terminal. This profiler will record only for `training_step`, `evaluation_step` and `test_step` functions.

The output below shows the profiling for the action `get_train_batch`.
tchaton

comment created 33 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[feat] Add PyTorch Profiler.

(same PyTorch Profiling docs hunk as above, ending at:)

+This profiler will record only for `training_step`, `evaluation_step` and `test_step` functions.
+
+.. code-block:: python

This would be more like a `code-block:: console`... I'm not sure if that's valid.
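For reference, the directive being suggested would look roughly like this (`console` is a standard Sphinx/Pygments language; whether it is the right fit here is the open question), shown with the first lines of the report from the hunk above:

    .. code-block:: console

        Profiler Report

        Profile stats for: training_step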

tchaton

comment created 31 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def validation_step(self, args):

     def test_step(self, args):
         return self._step(args)

+    def forward(self, args):
+        return self._step(args)
+

[screenshot attached: 2021-01-20 at 20:01:05]

tchaton

comment created 25 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

[bugfix] Resolve bug with multiple optimizers and toggle.

(same `toggle_optimizer` hunk as above, ending at:)

+        for opt_idx, opt in enumerate(optimizers):
+            for group in opt.param_groups:
+                for param in group['params']:
+                    if num_optimizers == opt_idx:

Can we include an early break, so that we do not need to continue in the loop after the assignment?

tchaton

comment created 25 minutes ago

push event: PyTorchLightning/pytorch-lightning

tchaton

commit sha f0bdbd372ab421ba0ae87bd62a8011d59714a08c

update test doc

pushed 26 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

(same `predict` hunk as above, ending at:)

+        # attach data
+        if dataloaders is not None:
+            self.data_connector.attach_dataloaders(model, test_dataloaders=dataloaders)
+
+        # set path variable
+        self.is_predicting = True

_set_running_stage ?

tchaton

comment created 28 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

 # See the License for the specific language governing permissions and
 # limitations under the License.
 from collections import defaultdict
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional

 import torch

 from pytorch_lightning.core.step_result import Result
+from pytorch_lightning.trainer.states import RunningStage
 from pytorch_lightning.utilities import DistributedType, LightningEnum


-class LoggerStages(LightningEnum):

maybe move this deprecation and related changes to a separate PR?

tchaton

comment created 35 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def testing(self, val: bool) -> None:

         if val:
             self._running_stage = RunningStage.TESTING
         else:
-            self._running_stage = None
+            if self._running_stage == RunningStage.TRAINING:
+                pass
+            else:
+                self._running_stage = None

Suggested change:
            if self._running_stage != RunningStage.TRAINING:
                self._running_stage = None

Or even move it one level up as an elif.
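A sketch of that variant, assuming the `testing` setter shown in the diff above:

    # Sketch: fold the nested check into the else branch as an elif.
    def testing(self, val: bool) -> None:
        if val:
            self._running_stage = RunningStage.TESTING
        elif self._running_stage != RunningStage.TRAINING:
            self._running_stage = None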

tchaton

comment created 33 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def update_logger_connector(self) -> None:

         callback_metrics = {}
         batch_pbar_metrics = {}
         batch_log_metrics = {}
-        is_train = self._stage in LoggerStages.TRAIN.value
+        is_train = self._stage in RunningStage.TRAINING

Suggested change:
        is_train = self._stage == RunningStage.TRAINING
tchaton

comment created 36 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def validation_step(self, args):

     def test_step(self, args):
         return self._step(args)

+    def forward(self, args):
+        return self._step(args)
+
    def forward(self, args):
        return self._step(args)

Can we move this to the parent class?
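As a sketch, hoisting it could look roughly like the following (the base-class shape is assumed from the "base accelerator" reply earlier; this is not what the PR does):

    # Illustrative only: move the shared delegation into the accelerator base class,
    # so per-backend accelerators no longer repeat it. `_step` stays backend-specific.
    class Accelerator:
        def validation_step(self, args):
            return self._step(args)

        def test_step(self, args):
            return self._step(args)

        def forward(self, args):
            return self._step(args)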

tchaton

comment created 38 minutes ago

Pull request review comment: PyTorchLightning/pytorch-lightning

Add trainer.predict

     def __init__(

         self._device_type = DeviceType.CPU
         self._distrib_type = None
         self._running_stage = None
+        self.is_predicting = False

Is it a setter? I thought we discussed having it as a read-only property.
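A minimal sketch of the read-only alternative being discussed (the backing attribute name is assumed, purely illustrative):

    # Illustrative only: expose the flag as a read-only property instead of a plain attribute.
    @property
    def is_predicting(self) -> bool:
        return self._is_predicting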

tchaton

comment created 31 minutes ago

push event: PyTorchLightning/pytorch-lightning

Carlos Mocholí

commit sha 088b3528ff82699a05fecd3b75e8752ddd935ab1

Prepare 1.1.5 release (#5576)
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

Carlos Mocholí

commit sha f477c2fd2980ad128bfe79a3b859e0b81b435507

Add new CHANGELOG section (#5580)

Carlos Mocholí

commit sha 7abd8224d62c8bd3a0bb5b58c7d95488303daa98

Add note about returning None (#5578)
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

chaton

commit sha dd6b5d37990f30b427884c037b5e18b30a1072a4

Merge branch 'master' into bugfix/toggle

pushed 30 minutes ago
