If you are wondering where the data on this site comes from, please visit https://api.github.com/users/nzw0301/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Kento Nozawa nzw0301 University of Tokyo / @riken-aip / @pfnet Japan https://nzw0301.github.io/ Ph.D. student in machine learning

keras-team/keras-docs-ja 63

Japanese translation of the Keras documentation.

nzw0301/lightLDA 50

Fast sampling algorithm based on collapsed Gibbs sampling (CGS)

nzw0301/pb-contrastive 10

Codes for PAC-Bayesian Contrastive Unsupervised Representation Learning

nzw0301/fastText 1

Library for fast text representation and classification.

nzw0301/bayesianNonparametrics 0

https://www.amazon.co.jp/%E3%83%8E%E3%83%B3%E3%83%91%E3%83%A9%E3%83%A1%E3%83%88%E3%83%AA%E3%83%83%E3%82%AF%E3%83%99%E3%82%A4%E3%82%BA-%E7%82%B9%E9%81%8E%E7%A8%8B%E3%81%A8%E7%B5%B1%E8%A8%88%E7%9A%84%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%81%AE%E6%95%B0%E7%90%86-%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%83%97%E3%83%AD%E3%83%95%E3%82%A7%E3%83%83%E3%82%B7%E3%83%A7%E3%83%8A%E3%83%AB%E3%82%B7%E3%83%AA%E3%83%BC%E3%82%BA-%E4%BD%90%E8%97%A4-%E4%B8%80%E8%AA%A0/dp/4061529153

started lucidrains/x-transformers

started time in an hour

started facebookresearch/LeViT

started time in 7 hours

issue comment facebookresearch/moco

AssertionError: Default process group is not initialized

The code uses 8 GPUs by default. You may need to change the number of GPUs, the batch size, the number of training iterations, and the learning rate to make it run on 4 GPUs.
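The adjustment above can be sketched with the linear scaling rule. The helper below is hypothetical, not part of the MoCo codebase; the 8-GPU / batch-256 / lr-0.03 defaults match MoCo's published recipe, but the function itself is only an illustration.

```python
def rescale_hyperparams(base_lr, total_batch, base_iters, base_gpus, new_gpus):
    """Adjust hyperparameters for a different GPU count, keeping the
    per-GPU batch size fixed (hypothetical helper, not from MoCo)."""
    per_gpu = total_batch // base_gpus          # batch size per device stays the same
    new_batch = per_gpu * new_gpus              # global batch shrinks with fewer GPUs
    lr = base_lr * new_batch / total_batch      # linear scaling rule for the lr
    iters = base_iters * total_batch // new_batch  # keep the total #examples seen
    return lr, new_batch, iters

# MoCo's defaults: 8 GPUs, batch 256, lr 0.03. On 4 GPUs this suggests
# lr 0.015, global batch 128, and twice as many iterations.
lr, batch, iters = rescale_hyperparams(0.03, 256, 100, 8, 4)
```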

upupbo

comment created time in 8 hours

release oxcsml/Rmap

release

released time in 12 hours


started twitter/scalding

started time in 12 hours

fork carlthome/vscode-dev-containers

A repository of development container definitions for the VS Code Remote - Containers extension and GitHub Codespaces

https://aka.ms/vscode-remote

fork in 12 hours

started cympfh/procon-rs

started time in 13 hours

fork lawrennd/ipynb

Package / Module importer for importing code from Jupyter Notebook files (.ipynb)

https://ipynb.readthedocs.io/en/latest/

fork in 13 hours

issue comment facebookresearch/moco

AssertionError: Default process group is not initialized

I have the same problem. How can I fix it?

upupbo

comment created time in 14 hours

created repository lawrennd/clexp

Simple utilities for helping write a CL expense claim.

created time in 16 hours

created repository lawrennd/cmtutils

Software for managing conference submissions.

created time in 17 hours

created repository lawrennd/mlai

Software for lectures on machine learning.

created time in 18 hours

pull request comment optuna/optuna

Refactor a unittest in test_median.py

Codecov Report

Merging #2644 (ef50fe2) into master (211b1a9) will increase coverage by 0.00%. The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #2644   +/-   ##
=======================================
  Coverage   91.70%   91.70%           
=======================================
  Files         138      138           
  Lines       11497    11499    +2     
=======================================
+ Hits        10543    10545    +2     
  Misses        954      954           
Impacted Files Coverage Δ
...a/visualization/matplotlib/_intermediate_values.py 100.00% <0.00%> (ø)


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 211b1a9...ef50fe2.

keisuke-umezawa

comment created time in 20 hours

fork himkt/homebrew-cask

🍻 A CLI workflow for the administration of macOS applications distributed as binaries

https://brew.sh

fork in a day

Pull request review comment optuna/optuna

Refactor a unittest in test_median.py

 def test_median_pruner_with_one_trial() -> None:
-    study = optuna.study.create_study()
-    trial = optuna.trial.Trial(study, study._storage.create_new_trial(study._study_id))
-    trial.report(1, 1)
     pruner = optuna.pruners.MedianPruner(0, 0)
+    study = optuna.study.create_study(pruner=pruner)
+    trial = study.ask()
+    trial.report(1, 1)

     # A pruner is not activated at a first trial.
-    assert not pruner.prune(study=study, trial=study._storage.get_trial(trial._trial_id))
+    assert not trial.should_prune()


 @pytest.mark.parametrize("direction_value", [("minimize", 2), ("maximize", 0.5)])
 def test_median_pruner_intermediate_values(direction_value: Tuple[str, float]) -> None:

     direction, intermediate_value = direction_value
     pruner = optuna.pruners.MedianPruner(0, 0)
-    study = optuna.study.create_study(direction=direction)
+    study = optuna.study.create_study(direction=direction, pruner=pruner)

-    trial = optuna.trial.Trial(study, study._storage.create_new_trial(study._study_id))
+    trial = study.ask()
     trial.report(1, 1)
-    study._storage.set_trial_state(trial._trial_id, TrialState.COMPLETE)
+    study.tell(trial, 1)

-    trial = optuna.trial.Trial(study, study._storage.create_new_trial(study._study_id))
+    trial = study.ask()
     # A pruner is not activated if a trial has no intermediate values.
-    assert not pruner.prune(study=study, trial=study._storage.get_trial(trial._trial_id))
+    assert not trial.should_prune()

     trial.report(intermediate_value, 1)
     # A pruner is activated if a trial has an intermediate value.
-    assert pruner.prune(study=study, trial=study._storage.get_trial(trial._trial_id))
+    assert trial.should_prune()


 def test_median_pruner_intermediate_values_nan() -> None:

     pruner = optuna.pruners.MedianPruner(0, 0)
-    study = optuna.study.create_study()
+    study = optuna.study.create_study(pruner=pruner)

-    trial = optuna.trial.Trial(study, study._storage.create_new_trial(study._study_id))
+    trial = study.ask()
     trial.report(float("nan"), 1)
     # A pruner is not activated if the study does not have any previous trials.
-    assert not pruner.prune(study=study, trial=study._storage.get_trial(trial._trial_id))
-    study._storage.set_trial_state(trial._trial_id, TrialState.COMPLETE)
+    assert not trial.should_prune()
+    study.tell(trial, -1)

It also makes sense to me to use -1 here.
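For readers unfamiliar with the ask/tell flow the refactored tests move to: the pruning decision can be imitated with a stdlib-only toy. The class below is a simplified reimplementation for illustration, not Optuna's actual MedianPruner.

```python
import math
from statistics import median

class ToyMedianPruner:
    """Toy stand-in for MedianPruner(0, 0): a running trial is pruned when
    its latest reported value is worse (higher, for minimization) than the
    median of completed trials' values at the same step."""

    def __init__(self):
        self.completed = []  # one {step: value} dict per finished trial

    def should_prune(self, intermediates):
        # Never prune before any report, or before any trial has completed.
        if not intermediates or not self.completed:
            return False
        step, value = max(intermediates.items())  # latest reported step
        if math.isnan(value):
            return False  # NaN reports do not trigger pruning
        past = [t[step] for t in self.completed if step in t]
        if not past:
            return False
        return value > median(past)

    def tell(self, intermediates):
        self.completed.append(intermediates)

pruner = ToyMedianPruner()
assert not pruner.should_prune({1: 1.0})  # the first trial is never pruned
pruner.tell({1: 1.0})
assert not pruner.should_prune({})        # no intermediate values reported
assert pruner.should_prune({1: 2.0})      # 2.0 is worse than the median 1.0
```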

keisuke-umezawa

comment created time in a day

fork turian/DawDreamer

Digital Audio Workstation with Python; VST instruments/effects, parameter automation, and native processors

fork in a day

issue comment YannDubs/Neural-Process-Family

Why using posterior sampling for evaluation

Hi @xuesongwang, thanks for the kind words, but I don't completely understand what you are saying. Are you essentially suggesting to mask Y_trgt but still use X_trgt during evaluation? If so, I don't really see how that would help; the decoder already has access to X_trgt.

xuesongwang

comment created time in a day


started DBraun/DawDreamer

started time in a day

PR closed optuna/optuna

Move examples to sub-directories (labels: example, optuna.integration, stale)

Motivation

A follow-up to #2302 and #2458. Some examples for ML libraries are in the root directory of examples; I'd like to create sub-directories for them.

Description of the changes

  • catboost_simple.py --> catboost/catboost_simple.py
  • dask_ml_simple.py --> dask_ml/dask_ml_simple.py
  • haiku_simple.py --> haiku/haiku_simple.py
  • ray-joblib.py --> ray/ray-joblib.py
  • skimage_lbp_simple.py --> skimage/skimage_lbp_simple.py
  • tensorboard_simple.py --> tensorboard/tensorboard_simple.py
+9 -9

4 comments

9 changed files

toshihikoyanase

pr closed time in 2 days

pull request comment optuna/optuna

Move examples to sub directories

@keisuke-umezawa @HideakiImamura As mentioned in #2654, I'd like to close this PR and create a new repository for the Optuna examples. Thank you for your reviews.

toshihikoyanase

comment created time in 2 days

PR opened optuna/optuna

[WIP] Move examples to a new repository (label: example)

Motivation

This PR is related to #2654.

Description of the changes

  • Remove the examples. They will be placed in a new repository by using the git subtree split command. The prototype repository is https://github.com/toshihikoyanase/optuna-examples.

TODO

  • [ ] Update README.md
  • [ ] Update links in the references

Opinion Wanted

  • Do we move all examples? Or do we keep some basic examples like quadratic_simple.py?
+15 -5784

0 comments

68 changed files

pr created time in 2 days

issue opened optuna/optuna

[RFC] Create `optuna/examples` repository and move existing examples

Motivation

Currently, the optuna/optuna repository has more than 60 example files, thanks to the great efforts of the contributors. They demonstrate the usage of Optuna with various ML libraries such as PyTorch and LightGBM.

However, I saw some problems in the CI of the examples as follows:

  • The GitHub Actions workflow file is getting complicated. For example, some libraries do not support Python 3.6, and we need to exclude them in the workflow file. Such exception handling will increase as we add new libraries.
  • Library versions conflict. For example, we wanted to test PyTorch examples with PyTorch v1.8.0, but AllenNLP did not support it.
  • It takes more than 4 hours to execute all examples sequentially. Due to the execution time, we cannot run the CI jobs when submitting pull requests.

Some of these problems can be solved by splitting the CI workflow, but I think this is a good opportunity to create an independent repository for the Optuna examples. This is partly because we already have enough content, and partly because we may want to employ policies (code review with 1 or 2 approvals, coding style (#2240)) different from those of the optuna/optuna repository.

Description

I propose creating a new repository to host the Optuna examples. I created a prototype repository; please take a look: https://github.com/toshihikoyanase/optuna-examples

  • It has an independent workflow file for each ML library (18 workflows in total). These workflows can be executed in parallel, and the CI jobs finish in an hour. With this change, we can execute the CI for each pull request.
  • We can preserve the git history by using the git subtree split command.

Alternatives (optional)

We could just split the examples.yml in the optuna/optuna repository. I tried this approach in #2587 and created about 20 workflows for the CI of the examples.

However, the reviewers and I found that this approach risked hurting productivity, because it doubled the number of workflows and may annoy developers/reviewers when they edit or review them. If we could nest workflows like .github/workflow/examples/allennlp, we could employ this approach, but GitHub does not recognize YAML files in subdirectories as workflows.

created time in 2 days

Pull request review comment optuna/optuna

Add a queue-like data structure to manage updated trials

from collections import deque
import threading
from typing import Any
from typing import Deque
from typing import Dict
from typing import List
from typing import Tuple
from typing import TYPE_CHECKING


if TYPE_CHECKING:
    from optuna import Study
    from optuna.trial import FrozenTrial
    from optuna.trial import TrialState


class UpdatedTrialsQueue(object):
    """A virtual queue of trials in the specified states.

    This class imitates a queue that a trial is added to when it goes into one of the
    specified states.

    It is not supposed to be directly accessed by library users except to write
    user-defined samplers.

    Note that the ``states`` argument should consist of the same stage states.
    """

    def __init__(self, study: "Study", states: Tuple["TrialState", ...]) -> None:
        for state in states:
            if state < states[0] or states[0] < state:
                raise RuntimeError("The states should be in the same stage.")

        self._study = study
        self._states = states

        self._queue: Deque[int] = deque()
        self._watching_trial_indices: List[int] = []
        self._next_min_trial_index = 0

        self._lock = threading.Lock()

    def __getstate__(self) -> Dict[Any, Any]:

        state = self.__dict__.copy()
        del state["_lock"]
        return state

    def __setstate__(self, state: Dict[Any, Any]) -> None:

        self.__dict__.update(state)
        self._lock = threading.Lock()

    def _fetch_trials(self, deepcopy: bool) -> List["FrozenTrial"]:
        trials = self._study.get_trials(deepcopy=deepcopy)

Here, we're still constructing all trials. How about taking some benchmarks before going forward, to understand in which practical use cases this feature is actually beneficial?
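One way to take such a benchmark is to time a full rescan against an incremental queue drain. The sketch below uses generic toy data structures, not Optuna's storage or trial objects.

```python
# Hypothetical micro-benchmark sketch (not Optuna code): compare rebuilding
# the full record list on every poll vs. consuming only new records once.
import timeit
from collections import deque

records = list(range(50_000))

def full_scan():
    # Rebuild the whole list on each poll, as rescanning all trials would.
    return [r for r in records]

queue = deque(records)

def incremental():
    # Consume each record exactly once; later polls find an empty queue.
    out = []
    while queue:
        out.append(queue.popleft())
    return out

t_full = timeit.timeit(full_scan, number=100)
t_inc = timeit.timeit(incremental, number=100)
assert t_inc < t_full  # incremental polling wins after the first drain
```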

not522

comment created time in 2 days

Pull request review comment optuna/optuna

Add a queue-like data structure to manage updated trials

(Same hunk as in the previous review comment, up to the line under review:)

    def __getstate__(self) -> Dict[Any, Any]:

Maybe I missed something, but when are __{get,set}state__ called?

not522

comment created time in 2 days

Pull request review comment optuna/optuna

Add a queue-like data structure to manage updated trials

     def __repr__(self) -> str:

     def is_finished(self) -> bool:

         return self != TrialState.RUNNING and self != TrialState.WAITING
+
+    def __lt__(self, state: "TrialState") -> bool:

I think it's clearer to implement this as a method/function with a name, e.g.

    def is_promotable_to(self, state: "TrialState") -> bool:
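On a toy enum the suggestion looks like the following. The state names and method body are illustrative, not Optuna's actual TrialState implementation.

```python
from enum import Enum

class State(Enum):
    # Toy trial lifecycle ordered by stage; not Optuna's actual TrialState.
    WAITING = 0
    RUNNING = 1
    COMPLETE = 2

    def is_promotable_to(self, other: "State") -> bool:
        # A named method reads clearer at call sites than overloading `<`.
        return self.value < other.value

assert State.WAITING.is_promotable_to(State.RUNNING)
assert not State.COMPLETE.is_promotable_to(State.RUNNING)
```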
not522

comment created time in 2 days

started Celemony/ARA_SDK

started time in 2 days

pull request comment optuna/optuna

Use `command` to check for the existence of the libraries to avoid partial matching

Codecov Report

Merging #2653 (d0f0c5c) into master (b9103a0) will increase coverage by 0.00%. The diff coverage is n/a.


@@           Coverage Diff           @@
##           master    #2653   +/-   ##
=======================================
  Coverage   91.70%   91.70%           
=======================================
  Files         138      138           
  Lines       11497    11499    +2     
=======================================
+ Hits        10543    10545    +2     
  Misses        954      954           
Impacted Files Coverage Δ
...a/visualization/matplotlib/_intermediate_values.py 100.00% <0.00%> (ø)


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update b9103a0...d0f0c5c.

nzw0301

comment created time in 2 days