David Pollack (dhpollack) · @solvemate · Berlin · ML Engineer @ Solvemate GmbH · Formerly i2x, fellowship.ai

dhpollack/fast-wavenet.pytorch 87

A PyTorch implementation of fast-wavenet

dhpollack/programming_notebooks 11

A collection of programming notebooks that I've created.

dhpollack/bytenet.pytorch 9

A pytorch implementation of the Bytenet network for neural machine translation.

dhpollack/polymer-arabic-game 4

A language game using Polymer / WebComponents

dhpollack/BillboardAR 2

AR Billboard Purchasing Experience

dhpollack/audio 1

simple audio I/O for pytorch

dhpollack/dhpollack.github.io 1

test jekyll site

dhpollack/fellowshipai 1

Fellowship AI Challenge - Language Identification

dhpollack/huggingface_libtorch 1

Minimal example of using a traced huggingface transformers model with libtorch

issue comment facebookresearch/hydra

[Feature Request] allow for dashed overrides

Their script works out of the box if you remove the dashes. That was actually the first thing that I tried, FYI.

dhpollack

comment created time in 2 days

issue comment facebookresearch/hydra

[Feature Request] allow for dashed overrides

I'll look into using environment variables for the torch distributed case, as a brief look at the launcher makes it seem like that's possible. I see the disadvantages of allowing the dashes and actually don't like the idea of mixing and matching styles, but this was one case where it would be useful. I don't know of many other apps where this would be a problem, so maybe I'll just bug the pytorch guys to add an option to use Hydra-style arguments with their launcher 🤣
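As a rough sketch of that environment-variable route (assuming the launcher's --use_env flag, which exports LOCAL_RANK instead of appending --local_rank; exact behavior depends on the pytorch version):

import os

# launched with, e.g.:
#   python -m torch.distributed.launch --use_env --nproc_per_node=2 app.py arg1=override1
# --use_env makes the launcher set LOCAL_RANK in the environment instead of
# appending --local_rank, so hydra never sees a dashed argument
local_rank = int(os.environ.get("LOCAL_RANK", "0"))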

dhpollack

comment created time in 3 days

issue opened facebookresearch/hydra

[Feature Request] allow for dashed overrides

🚀 Feature Request

It would be nice if one could use dashed overrides as well as the current non-dashed version:

arg1: default1
arg2: default2
python app.py arg1=override1 arg2=override2
python app.py arg1=override1 --arg2=override2
python app.py arg1=override1 --arg3=unknown_arg1  # this still fails

Some scripts assume that arguments have the preceding dashes and thus break if you call hydra scripts. A notable example is the pytorch distributed launch script. The pytorch launcher basically invokes multiple runs of the given command with --local_rank={local_rank} appended to the command. So even if you have local_rank as an expected variable in your hydra config, it fails because of the formatting of the variable.
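For illustration, the per-process command the launcher ends up spawning looks roughly like this (a sketch; exact flags vary by version):

python app.py arg1=override1 --local_rank=0  # hydra rejects the dashed argument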

Motivation

I want to use hydra with pytorch distributed training and specifically something like python -m torch.distributed.launch --nproc_per_node=2 app.py arg1=override1

Pitch

I haven't done extensive testing, but I think the following diff on hydra/_internal/utils.py would enable this feature.

diff --git a/hydra/_internal/utils.py b/hydra/_internal/utils.py
index 7eb38bf0..f23ac64f 100644
--- a/hydra/_internal/utils.py
+++ b/hydra/_internal/utils.py
@@ -148,6 +148,15 @@ def create_config_search_path(search_path_dir: Optional[str]) -> ConfigSearchPath:
     return search_path
 
 
+def _convert_dashed_args(unknown_args: List[str]) -> List[str]:
+    new_args = []
+    for arg in unknown_args:
+        while arg.startswith("-"):
+            arg = arg[1:]
+        new_args.append(arg)
+    return new_args
+
+
 def run_hydra(
     args_parser: argparse.ArgumentParser,
     task_function: TaskFunction,
@@ -172,7 +181,8 @@ def run_hydra(
         task_name=task_name, config_search_path=search_path, strict=strict
     )
     try:
-        args = args_parser.parse_args()
+        args, unknown_args = args_parser.parse_known_args()
+        args.overrides += _convert_dashed_args(unknown_args)
         if args.help:
             hydra.app_help(config_name=config_name, args_parser=args_parser, args=args)
             sys.exit(0)
@@ -276,7 +286,9 @@ def get_args_parser() -> argparse.ArgumentParser:
 
 
 def get_args(args: Optional[Sequence[str]] = None) -> Any:
-    return get_args_parser().parse_args(args=args)
+    known_args, unknown_args = get_args_parser().parse_known_args(args=args)
+    known_args.overrides += _convert_dashed_args(unknown_args)
+    return known_args
 
 
 def _strict_mode_strategy(strict: Optional[bool], config_name: Optional[str]) -> bool:

created time in 3 days

started v-a-s-a/bayesian_nlde

started time in 3 days

push event dhpollack/hydra

David Pollack

commit sha cf29c5fff81b95a0cd214ee9bf098fcbab58b5a8

news item for hydra.utils.call()

view details

push time in 7 days

push event dhpollack/hydra

David Pollack

commit sha 66533c94435432e6e3f569cc66dbc235e45f6fb1

change order of objects docs Split the current objects docs into two sections. Moved the call section to the top and the db example to the bottom. Renamed the header to reflect the ability to call functions and methods.

view details

push time in 8 days

pull request comment facebookresearch/hydra

Allow instantiation from class methods

I didn't, but I'll check it tomorrow.

dhpollack

comment created time in 8 days

push event dhpollack/hydra

David Pollack

commit sha 7dce3742319974e8acae75820d3f4a5b2ef76603

Update website/docs/patterns/objects.md Co-Authored-By: Omry Yadan <omry@fb.com>

view details

push time in 8 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 Change the instantiated object class and override values from the command line:
 $ python my_app.py db=postgresql db.params.password=abcde
 PostgreSQL connecting to localhost with user=root and password=abcde and database=tutorial
 ```
+
+In addition to instantiating objects from classes, the `hydra.utils.call` method can also
+be used to call functions and methods.  Simply put the path to the function or method in the
+`cls` field and optionally use `params` to pass parameters to the function or method.
+
+Example file for classes, class and static methods, and functions (`models.py`)
+```python
+class Foo:
+  def __init__(self, x: int, y: int) -> None:
+    self.x = x
+    self.y = y
+
+  @classmethod
+  def class_method(self, z: int) -> Any:
+    return self(z, 10)
+
+  @staticmethod
+  def static_method(z: int) -> int:
+    return z + 1
+
+def bar(z: int) -> int:
+  return z + 2
+```
+Example configs
+```yaml
+# instantiates an object Foo(10, 20)
+myobject:
+  cls: models.Foo
+  params:
+    x: 10
+    y: 20
+# instantiates an object Foo(5, 10)
+myclassmethod:
+  cls: models.Foo.class_method
+  params:
+    z: 5
+# returns 16
+mystaticmethod:
+  cls: models.Foo.static_method
+  params:
+    z: 15
+# returns 17
+myfunction:
+  cls: models.bar
+  params:
+    z: 15
+```
+Now to test these, instantiate / call them as follows:
+```python
+import hydra
+
+@hydra.main(config_path="config.yaml")
+def app(cfg):
+  myobject = hydra.utils.call(cfg.myobject)
+  myclassmethod = hydra.utils.call(cfg.myclassmethod)
+  mystaticmethod = hydra.utils.call(cfg.mystaticmethod)
+  myfunction = hydra.utils.call(cfg.myfunction)
+```
+Note, the old `instantiate` function is now an alias of the `call` function used above.

done

dhpollack

comment created time in 8 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def _instantiate_class(
     for k, v in rest.items():
         final_kwargs[k] = v
+    return final_kwargs
 
+
+def _instantiate_class(
+    clazz: Type[Any], config: PluginConf, *args: Any, **kwargs: Any
+) -> Any:
+    final_kwargs = _get_kwargs(config, **kwargs)
     return clazz(*args, **final_kwargs)
+
+
+def _call_callable(
+    fn: Callable[..., Any], config: PluginConf, *args: Any, **kwargs: Any
+) -> Any:
+    final_kwargs = _get_kwargs(config, **kwargs)
+    return fn(*args, **final_kwargs)
+
+
+# aliases
+instantiate = call
+get_static_method = get_fn_or_method
+get_method = get_fn_or_method
+_get_class_name = _get_cls_name

👍

dhpollack

comment created time in 8 days

push event dhpollack/hydra

David Pollack

commit sha d9d599c9fba8dcf8ef2be6d68cc2904230b44cfc

fixes based on comments

view details

push time in 8 days

pull request comment facebookresearch/hydra

Allow instantiation from class methods

Alright, I think that I solved the mypy issues by using _instantiate_class for classes and _call_callable for functions and methods. I also created a bunch of aliases for the old naming scheme and renamed functions as well. I also added a few extra tests for builtins and nested methods.

As for the documentation, let me know what you think. I tried to keep it as compact as possible, but may be doing too many things with the same class.

P.S. the win37 failure looks unrelated to this PR. Looks like the conda servers glitched.

dhpollack

comment created time in 8 days

push event dhpollack/hydra

David Pollack

commit sha cf80cb1e6ea59364959bb2e8c6511d600e318604

fix typo in variable name

view details

push time in 8 days

push event dhpollack/hydra

David Pollack

commit sha 6d57655f7b0c5929a0dcb2a793d1dcc836be3204

update docs and make changes based on PR comments

view details

push time in 8 days

PR opened stanfordnlp/stanza

override resources dir with environment variable

Description

See issue #227. Basically, it'd be nice to override the default resources dir, and it's super easy to do.

Fixes Issues

This fixes #227. Basically, it fixes unwanted behavior when using pip install -e . and also makes the default resources dir user-configurable.

+146 -9

0 comment

2 changed files

pr created time in 10 days

push event dhpollack/stanza

David Pollack

commit sha 43abec97676e409f7f87dbf028e633160b4793c3

update .gitignore, user configurable resources dir * this allows one to set the default resources dir with an environmental variable * this fixes a bunch of potential issues with python related files. Namely, I had an issue when I used "pip install -e .".

view details

push time in 10 days

push event dhpollack/stanza

Yuhui Zhang

commit sha c0deede0e22ef0860f93642f5d7051adf8b5b6df

Merge branch 'master' into dev

view details

Yuhui Zhang

commit sha e16609c94477e71762c4e7705ad8ff17f51a243b

Merge branch 'master' into dev

view details

Yuhui Zhang

commit sha 230e68fd62f96c1d7b2f521a54a66ea184392b00

remove code will never reach in tokenize

view details

Yuhui Zhang

commit sha 3fa83c388c91b6008b269405bf4265c9e74c4c24

bug fix for last commit: len(current_sent)!=0 if no_ssplit

view details

Yuhao Zhang

commit sha 10c8561dda9f0bc5043d60975f7041be1d4861b6

Add pytest.ini to register test markers

view details

Yuhao Zhang

commit sha e843f3867d23275851712f19c72d9d5e766886de

Use r string for regex in tokenizer

view details

Yuhao Zhang

commit sha 9f7d60d8861bf004bddb8684307a61dc68719c9e

Add summary writer to charlm training with --summary

view details

Yuhao Zhang

commit sha ed9344d96553e5c4f9a34cc0231e6f3d456d8d7a

Fix summary folder

view details

Peng Qi

commit sha 5e2d0ef7a2f6735d0ab6d2488d7433179bc44614

Fix issue in Vietnamese where assertion fails if first character is punctuation

view details

Peng Qi

commit sha c2d7228b5ebc5928db1ad347727724525daac719

Update bug report template

view details

Yuhao Zhang

commit sha 3d84725f5466e6b08afe3c769722f61bbeba6811

Move contributing

view details

Yuhao Zhang

commit sha 22cd0a08647f0310aa3eb09f82bbb0be9bea0e34

Typo fix

view details

Peng Qi

commit sha e7499050f0de7adc4ec26b9272d6f83d72379081

Create a pull request template

view details

Peng Qi

commit sha 4be97a91cec0448cb82040e013a85f38e00d83ad

Move pull request template

view details

Yuhao Zhang

commit sha ea5d2e0366211fcf108ecae05642b8b71790dab1

Fix stanza version in CoreNLP notebook example

view details

Peng Qi

commit sha adc291e69b4461efdbf6a71eb7a497bf1a764704

Fix prompt warning in README

view details

Peng Qi

commit sha 16b21de46f71d6195d98e9c99ae1e8acc5dcbe35

Update issue templates

view details

Peng Qi

commit sha ee93cf33744191ca7f9d76c6e9d777c04bc7eb06

Update issue templates

view details

David Pollack

commit sha e32b9279e92d5a43e818fb51e6cc18b4e31f2eb0

update .gitignore and allow user to override resources dir * The .gitignore is currently incomplete. I noticed this when using "pip install -e .", but these changes cover a bunch of python related things that you would want to ignore. * Added the ability to override the default resources dir with an environmental variable. This is nice when you don't want to clutter up the $HOME or if you just want to put the models somewhere else.

view details

push time in 10 days

issue opened stanfordnlp/stanza

Override default resources dir and update .gitignore

Is your feature request related to a problem? Please describe.

  1. I want to be able to specify the default dir for the stanza resources. Specifically, I kind of hate that it's a visible dir in my home folder, so I wanted to put it in the .cache folder with a bunch of other models.
  2. I wanted to install this library as a development version to make the above change, but then the .egg-info dir was not ignored when I did git add.

Describe the solution you'd like

  1. allow overriding the default location $HOME/stanza_resources by setting an environment variable called STANZA_RESOURCES_DIR (see the sketch after this list)
  2. add the GitHub default python .gitignore to the current .gitignore
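A minimal sketch of that lookup (STANZA_RESOURCES_DIR is the variable name proposed above; the exact fallback path shown is an assumption, not necessarily the PR's code):

import os

# Sketch: prefer STANZA_RESOURCES_DIR if set, otherwise fall back to the
# current default directory under $HOME.
DEFAULT_MODEL_DIR = os.getenv(
    "STANZA_RESOURCES_DIR",
    os.path.join(os.path.expanduser("~"), "stanza_resources"),
)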

created time in 10 days

create branch dhpollack/stanza

branch : dhp/override_resources_dir

created branch time in 10 days

fork dhpollack/stanza

Official Stanford NLP Python Library for Many Human Languages

https://stanfordnlp.github.io/stanza/

fork in 10 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 log = logging.getLogger(__name__)
 
 
+def _safeimport(path: str) -> Optional[ModuleType]:
+    """
+    Import a module; handle errors; return None if the module isn't found.
+    This is a typed simplified version of the `pydoc` function `safeimport`.
+    """
+    from importlib import import_module
+
+    try:
+        module = import_module(path)
+    except ImportError as e:
+        if e.name == path:
+            return None
+        else:
+            log.error(f"Error importing module: {path}")
+            raise e
+    except Exception as e:
+        log.error(f"Non-ImportError while importing module {path}: {e}")
+        raise ValueError(f"Non-ImportError while importing module {path}: {e}")
+    for part in path.replace(module.__name__, "").split("."):
+        if not hasattr(module, part):
+            break
+        module = getattr(module, part)
+    return module
+
+
+def _locate(path: str) -> ModuleType:
+    """
+    Locate an object by name or dotted path, importing as necessary.
+    This is similar to the pydoc function `locate`, except that it checks for
+    the module from the given path from back to front.
+    """
+    parts = [part for part in path.split(".") if part]
+    module = None
+    for n in reversed(range(len(parts))):
+        try:
+            module = _safeimport(".".join(parts[:n]))
+        except Exception:
+            continue
+        if module:
+            break
+    if module:
+        obj = module
+    else:
+        log.error(f"Module not found: {path}")
+        raise ValueError(f"Module not found: {path}")
+    for part in parts[n:]:
+        if not hasattr(obj, part):
+            log.error(
+                f"Error finding attribute ({part}) in class ({obj.__name__}): {path}"
+            )
+            raise ValueError(
+                f"Error finding attribute ({part}) in class ({obj.__name__}): {path}"
+            )
+        obj = getattr(obj, part)
+    return obj
+
+
 def get_method(path: str) -> type:
     return get_class(path)
 
 
 def get_class(path: str) -> type:
     try:
-        from importlib import import_module
-
-        module_path, _, class_name = path.rpartition(".")
-        mod = import_module(module_path)
-        try:
-            klass: type = getattr(mod, class_name)
-        except AttributeError:
-            raise ImportError(
-                "Class {} is not in module {}".format(class_name, module_path)
-            )
+        klass = cast(type, _locate(path))

I am in Europe so I can join a chat today (for me, probably tomorrow for you) if you're free. I just registered for the chat, so I'll be in there.

dhpollack

comment created time in 10 days

push event dhpollack/hydra

Omry Yadan

commit sha aa00a575ab48293263c402d816dd2f3d11a5d83d

Upgrade to OmegaConf==2.0.0rc16

view details

David Pollack

commit sha 29688067952664bef122f4df15191cbf635adb7d

typo in config file prevented AxSweeper from loading correctly (#502)

view details

Shagun Sodhani

commit sha cfd1e61b4b621bfc518d823e414ad4a6342aae4a

Import Ax lazily (#492) * Rename earlystopper.py as _earlystopper.py Now the file is not loaded while scanning plugins * Move the logic of Ax plugin in a separate file Now Ax is imported lazily * Skip files prefixed with _ during plugin discovery (#495) * Fix the broken example code * Update plugins.md Co-authored-by: Omry Yadan <omry@fb.com>

view details

David Pollack

commit sha eecd24af099dce38e5656f198da5687e212b0399

Allow instantiation from class methods * instantiate an object from a class method of that object * add tests for this * simplify get_static_method function Signed-off-by: David Pollack <d.pollack@solvemate.com>

view details

David Pollack

commit sha cbdc5777e5a618c55389ae6fbd7ae714fcc70448

pydoc.locate to instantiate * works with class, classmethod, or staticmethod * still requires the user to specify the module * simplifies importing of all types into a single method Signed-off-by: David Pollack <david@da3.net>

view details

David Pollack

commit sha 71b460b2723abd41e036fbc19c4ded88964fae38

Use f-strings Signed-off-by: David Pollack <david@da3.net>

view details

David Pollack

commit sha e9bc1a8f24adec4661569ea24bfef5fd263508ff

fix documentation example when instantiating from classmethod

view details

David Pollack

commit sha 6ebdefe0a6216471b4e6be2a7f4600ee89a6affb

reimplement locate and linting this is a reimplementation of the pydoc.locate function. That function would attempt to load a module from beginning to end and if any link in the chain did not load properly then it would fail. The reimplemented function searches for the module to load from the end of the path to the beginning. Additionally, this reimplementation will raise exceptions instead of returning None if the path cannot be located. Signed-off-by: David Pollack <david@da3.net>

view details

David Pollack

commit sha 600f5a87983d0daaef771d8d09c910fbe2969300

linting and formatting

view details

David Pollack

commit sha ae26554014f3151fd235a805bbccae939af5243c

update docs and make changes based on PR comments

view details

push time in 10 days

pull request comment facebookresearch/hydra

Allow instantiation from class methods

The test failures look like they are due to the upgrade in omegaconf. Do you prefer a git rebase or a git merge on master? I tested both on branches locally. I would assume rebasing, but I figured I'd ask.

dhpollack

comment created time in 10 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 log = logging.getLogger(__name__)
 
 
+def _safeimport(path: str) -> Optional[ModuleType]:
+    """
+    Import a module; handle errors; return None if the module isn't found.
+    This is a typed simplified version of the `pydoc` function `safeimport`.
+    """
+    from importlib import import_module
+
+    try:
+        module = import_module(path)
+    except ImportError as e:
+        if e.name == path:
+            return None

This was a relic from the pydoc method, which checks the path from front to back; that is required if one uses __import__. I removed it now.

dhpollack

comment created time in 10 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 log = logging.getLogger(__name__)
 
 
+def _safeimport(path: str) -> Optional[ModuleType]:
+    """
+    Import a module; handle errors; return None if the module isn't found.
+    This is a typed simplified version of the `pydoc` function `safeimport`.
+    """
+    from importlib import import_module
+
+    try:
+        module = import_module(path)
+    except ImportError as e:
+        if e.name == path:
+            return None
+        else:
+            log.error(f"Error importing module: {path}")
+            raise e
+    except Exception as e:
+        log.error(f"Non-ImportError while importing module {path}: {e}")
+        raise ValueError(f"Non-ImportError while importing module {path}: {e}")
+    for part in path.replace(module.__name__, "").split("."):
+        if not hasattr(module, part):
+            break
+        module = getattr(module, part)
+    return module
+
+
+def _locate(path: str) -> ModuleType:
+    """
+    Locate an object by name or dotted path, importing as necessary.
+    This is similar to the pydoc function `locate`, except that it checks for
+    the module from the given path from back to front.
+    """
+    parts = [part for part in path.split(".") if part]
+    module = None
+    for n in reversed(range(len(parts))):
+        try:
+            module = _safeimport(".".join(parts[:n]))
+        except Exception:
+            continue
+        if module:
+            break
+    if module:
+        obj = module
+    else:
+        log.error(f"Module not found: {path}")
+        raise ValueError(f"Module not found: {path}")
+    for part in parts[n:]:
+        if not hasattr(obj, part):
+            log.error(
+                f"Error finding attribute ({part}) in class ({obj.__name__}): {path}"
+            )
+            raise ValueError(
+                f"Error finding attribute ({part}) in class ({obj.__name__}): {path}"
+            )
+        obj = getattr(obj, part)
+    return obj
+
+
 def get_method(path: str) -> type:
     return get_class(path)
 
 
 def get_class(path: str) -> type:
     try:
-        from importlib import import_module
-
-        module_path, _, class_name = path.rpartition(".")
-        mod = import_module(module_path)
-        try:
-            klass: type = getattr(mod, class_name)
-        except AttributeError:
-            raise ImportError(
-                "Class {} is not in module {}".format(class_name, module_path)
-            )
+        klass = cast(type, _locate(path))

I tried this but it didn't work. The return types of static methods and class methods were different from those of class objects. I tried explicitly typing t with t: type, but the assert statement still failed. I also tried changing the assert statement to Callable instead of type, but then I received a failure from the mypy check. In that case, all the tests passed.

I also tried changing all of the return types to ModuleType and adjusting the function signature of _instantiate_class accordingly, but this caused more mypy errors and some of the tests started failing as well.

dhpollack

comment created time in 10 days

push event dhpollack/hydra

David Pollack

commit sha cbe1a656137b31e529dc0c87bf477edd941d6c81

update docs and make changes based on PR comments

view details

push time in 10 days

push event dhpollack/vimrc

David Pollack

commit sha de46e6e7168a6968bdfb350e4520971463e1abc0

relative line numbers

view details

push time in 12 days

pull request comment facebookresearch/hydra

typo in config file prevented AxSweeper from loading correctly

Also, it wasn't completely obvious to me, but you could remove that line entirely as long as you have the defaults line loading the sweeper. It makes sense to override some of the default settings, but it doesn't really ever make sense to override the class itself.

dhpollack

comment created time in 12 days

PR opened facebookresearch/hydra

typo in config file prevented AxSweeper from loading correctly

This is a super mini PR. The config file points to the wrong location. I bashed my head against the wall for a while trying to figure out what was going wrong.

+1 -1

0 comment

1 changed file

pr created time in 12 days

push event dhpollack/hydra

Omry Yadan

commit sha aa00a575ab48293263c402d816dd2f3d11a5d83d

Upgrade to OmegaConf==2.0.0rc16

view details

David Pollack

commit sha 281029d5febd448528e8c69bded14c683fe7d74c

typo in config file prevented AxSweeper from loading correctly

view details

push time in 12 days

push event dhpollack/hydra

push time in 12 days

push event dhpollack/hydra

Shagun Sodhani

commit sha 67a525b84188ea48cf2e22ff9eac9b648eaf5efd

Skip files prefixed with _ during plugin discovery (#495)

view details

Omry Yadan

commit sha aa00a575ab48293263c402d816dd2f3d11a5d83d

Upgrade to OmegaConf==2.0.0rc16

view details

David Pollack

commit sha 311e0ff2220116252776026f1ee86664b9b26b11

Merge branch 'master' of https://github.com/facebookresearch/hydra

view details

push time in 12 days

create branch dhpollack/hydra

branch : dhp/ax_sweeper_config_typo

created branch time in 13 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 log = logging.getLogger(__name__)
 
 
+def _safeimport(path: str) -> Optional[ModuleType]:
+    """
+    Import a module; handle errors; return None if the module isn't found.
+    This is a typed simplified version of the `pydoc` function `safeimport`.
+    """
+    from importlib import import_module
+
+    try:
+        module = import_module(path)
+    except ImportError as e:
+        if e.name == path:
+            return None
+        else:
+            log.error(f"Error importing module: {path}")
+            raise e
+    except Exception as e:
+        log.error(f"Non-ImportError while importing module {path}: {e}")
+        raise ValueError(f"Non-ImportError while importing module {path}: {e}")

You're right. I took it from the original file, but I noticed you only use the log for BaseException errors that don't have a message attached.

dhpollack

comment created time in 13 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def __eq__(self, other: Any) -> Any:
         return False
 
 
+class Baz(Foo):
+    @classmethod

done

dhpollack

comment created time in 13 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def get_method(path: str) -> type:
 
 
 def get_class(path: str) -> type:
     try:
-        from importlib import import_module
-
-        module_path, _, class_name = path.rpartition(".")
-        mod = import_module(module_path)
-        try:
-            klass: type = getattr(mod, class_name)
-        except AttributeError:
-            raise ImportError(
-                "Class {} is not in module {}".format(class_name, module_path)
-            )
+        from pydoc import locate
+
+        klass = locate(path)

I ended up re-implementing and simplifying the pydoc function because there was some strange behavior if a submodule in the chain doesn't load.

dhpollack

comment created time in 13 days

push event dhpollack/hydra

David Pollack

commit sha 22a0e6c8ae4d511dd41c53e917ff9bacad289615

linting and formatting

view details

push time in 13 days

push event dhpollack/hydra

David Pollack

commit sha 52e99ccc59078af038f23b5668bc47923b8b8bf0

reimplement locate and linting this is a reimplementation of the pydoc.locate function. That function would attempt to load a module from beginning to end and if any link in the chain did not load properly then it would fail. The reimplemented function searches for the module to load from the end of the path to the beginning. Additionally, this reimplementation will raise exceptions instead of returning None if the path cannot be located. Signed-off-by: David Pollack <david@da3.net>

view details

push time in 13 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def get_method(path: str) -> type:
 
 
 def get_class(path: str) -> type:
     try:
-        from importlib import import_module
-
-        module_path, _, class_name = path.rpartition(".")
-        mod = import_module(module_path)
-        try:
-            klass: type = getattr(mod, class_name)
-        except AttributeError:
-            raise ImportError(
-                "Class {} is not in module {}".format(class_name, module_path)
-            )
+        from pydoc import locate
+
+        klass = locate(path)

Under the hood it is implemented with __import__, but with a bunch of logic to get the class methods and to recursively import the correct module. I believe importlib is just a friendly wrapper around __import__. You can see the code for locate and safeimport (the underlying importer). One could implement this with importlib, but it's not going to be as good as this implementation.
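A small sketch of the difference (both snippets resolve the same object; the names used are just for illustration):

from importlib import import_module
from pydoc import locate

# pydoc.locate walks the dotted path itself, importing modules as needed
obj_a = locate("collections.OrderedDict")

# the rough importlib equivalent, done by hand
mod = import_module("collections")
obj_b = getattr(mod, "OrderedDict")

assert obj_a is obj_b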

dhpollack

comment created time in 14 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 conf/
 ├── config.yaml
 └── db
     ├── mysql.yaml
-    └── postgresql.yaml
+    ├── postgresql.yaml
+    └── postgresql_default.yaml

Ok, where in the main doc is this? I can add that to this PR.

dhpollack

comment created time in 14 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def test_get_static_method(path: str, return_value: Any) -> None:
             {"a": 10, "d": 40},
             Bar(10, 200, 200, 40),
         ),
+        (
+            {
+                "cls": "tests.test_utils.Baz",
+                "method": "class_method",

@omry alright, it now works. I used the locate from pydoc, which basically does all the work. I also simplified the other parts of the code that were duplicating this functionality, such as your static method function.

dhpollack

comment created time in 16 days

push event dhpollack/hydra

David Pollack

commit sha 5d0de5073811151fa604929f02b26ef7ea185156

fix documentation example when instantiating from classmethod

view details

push time in 16 days

push event dhpollack/hydra

David Pollack

commit sha a8fcde0ee12c351a5fbcf13db56dc1010a58c6ae

pydoc.locate to instantiate * works with class, classmethod, or staticmethod * still requires the user to specify the module * simplifies importing of all types into a single method Signed-off-by: David Pollack <david@da3.net>

view details

David Pollack

commit sha 1439acc225a1cdf97b48755aa689fc11c0b3de6e

Use f-strings Signed-off-by: David Pollack <david@da3.net>

view details

push time in 16 days

create branch dhpollack/hydra

branch : dhp/instantiate_from_class_method_fix

created branch time in 16 days

push event dhpollack/hydra

David Pollack

commit sha bd36485f00051b54868a6d5c6bc2f56c8fee3be9

Use f-strings Signed-off-by: David Pollack <david@da3.net>

view details

push time in 16 days

push event dhpollack/hydra

Shagun Sodhani

commit sha 67a525b84188ea48cf2e22ff9eac9b648eaf5efd

Skip files prefixed with _ during plugin discovery (#495)

view details

David Pollack

commit sha ac21687761a86ed7d4332cb5abd9084e20fd68ab

Merge branch 'dhp/instantiate_from_class_method' of github.com:dhpollack/hydra

view details

David Pollack

commit sha 495e0c51a327e5df11de99609c9c26c3d39d59a9

pydoc.locate to instantiate * works with class, classmethod, or staticmethod * still requires the user to specify the module * simplifies importing of all types into a single method Signed-off-by: David Pollack <david@da3.net>

view details

push time in 16 days

push event dhpollack/vimrc

David Pollack

commit sha 848bc2e75f59bae3d98279abe278f374224afdfa

update modules and change theme

view details

push time in 18 days

Pull request review comment facebookresearch/hydra

Allow instantiation from class methods

 def test_get_static_method(path: str, return_value: Any) -> None:
             {"a": 10, "d": 40},
             Bar(10, 200, 200, 40),
         ),
+        (
+            {
+                "cls": "tests.test_utils.Baz",
+                "method": "class_method",

Yeah, I wanted that too, but it didn't seem easy if you use the instantiate method, because you would always have to check the last and second-to-last items to see whether the last item in the string was a class or a method of the class in the second-to-last item. I thought it would create extra logic that made things more confusing. My other idea was to create an instantiate_from_class_method function (or whatever you'd want it to be called) which assumes that form. It should work either way; it just depends on where you want to put the logic.

I'll take a look at it and see if I can put it all into the cls key neatly.

dhpollack

comment created time in 19 days

PR opened facebookresearch/hydra

Allow instantiation from class methods
  • instantiate an object from a class method of that object
  • add tests for this
  • simplify get_static_method function

Signed-off-by: David Pollack d.pollack@solvemate.com

Motivation

I wanted to use hydra with the huggingface transformers library, and they extensively use a class method named from_pretrained.

Currently one could do this by using get_static_method and then manually converting the params key and passing it to the retrieved method. However, that's not very straightforward or intuitive, and this doesn't add much logic to the utility function.
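For illustration, the kind of usage this enables (a hedged sketch of the PR's cls/params convention; the model name and the inline config are placeholders, and in practice the config would live in a yaml file):

import hydra.utils
from omegaconf import OmegaConf

# point `cls` at a class method and let hydra.utils.call invoke it
cfg = OmegaConf.create(
    {
        "cls": "transformers.AutoModel.from_pretrained",
        "params": {"pretrained_model_name_or_path": "bert-base-uncased"},
    }
)
model = hydra.utils.call(cfg)  # ~ AutoModel.from_pretrained("bert-base-uncased")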

Have you read the Contributing Guidelines on pull requests?

Yes - passed tests in test_utils.py and flake8

Test Plan

I added a test that tests the basic case.

+55 -4

0 comment

3 changed files

pr created time in 19 days

create branch dhpollack/hydra

branch : dhp/instantiate_from_class_method

created branch time in 19 days

fork dhpollack/hydra

Hydra is a framework for elegantly configuring complex applications

https://hydra.cc

fork in 19 days

issue comment nteract/commuter

Syntax Error for @font-face

I no longer receive this error, but now I have a new error. Using yarn dev on the master branch, I am able to get a listing of the notebooks, but then it fails with an error about a missing module d3-contour (see nteract/vega-embed-v3#3).

However, that's a new issue.

yummydum

comment created time in a month

issue comment nteract/commuter

Failed to compile

@captainsafia I also built from master and get the following error:

ModuleNotFoundError: Module not found: Error: Can't resolve 'd3-contour'

I didn't see d3-contour in your PR on the vega-embed-v3 PR, although it could be a subdependency of something that you've already added.

MycoBean

comment created time in a month

push event dhpollack/spaCy

David Pollack

commit sha 80004930ed098ec5b6bf9ecd081b96b1e7e7080f

fix typo in svg file

view details

push time in a month

PR opened explosion/spaCy

fix typo in svg file - caused documentation build error

fixed a typo (the space was supposed to be outside the quotes, but it was inside)

Description

This error caused a Gatsby build error when I tried to build the documentation locally

Types of change

bug fix

Checklist

  • [ ] I have submitted the spaCy Contributor Agreement.
  • [X] I ran the tests, and all new and existing tests passed.
  • [X] My changes don't require a change to the documentation, or if they do, I've added all required information.

Can I sign the contributor agreement electronically somehow?

+1 -1

0 comment

1 changed file

pr created time in a month

create branch dhpollack/spaCy

branch : dhp/fix-minor-svg-error

created branch time in a month

fork dhpollack/spaCy

💫 Industrial-strength Natural Language Processing (NLP) with Python and Cython

https://spacy.io

fork in a month

pull request comment facebookresearch/fastText

Add build dir to gitignore

@Celebio if you want a more extensive list, i.e. the GitHub C++ and Python defaults, I could also do that. This seemed less invasive in case there was a particular reason the .gitignore is so barebones.

dhpollack

comment created time in a month

PR opened facebookresearch/fastText

Add build dir to gitignore

add build/, build_*/, and some binary files to the gitignore that you wouldn't want in your git repo

Signed-off-by: David Pollack david@da3.net

+10 -1

0 comment

1 changed file

pr created time in a month

create branch dhpollack/fastText

branch : dhp/fix-gitignore

created branch time in a month

push event dhpollack/vimrc

David Pollack

commit sha 56c0a952471db686175ee424ce9702960d38e480

neovim fixes

view details

push time in a month

issue comment nteract/commuter

Syntax Error for @font-face

@captainsafia could you provide a minimal example of adding a css-loader to the configuration?

yummydum

comment created time in a month

push event dhpollack/resume

David Pollack

commit sha 35ccae83b5b8f240003ce7af76cdf171fd5c8cb0

Site updated at 2020-02-05 11:00:44 UTC

view details

push time in 2 months

push event dhpollack/resume

David Pollack

commit sha 471c2c16cec67ef5a1ad6319ba4b6734be212799

fix date for degree

view details

David Pollack

commit sha 04d52f4d281a12a89394a8cf1f0bae530fb213c3

update pdfs

view details

David Pollack

commit sha 73e21f9d6735271fb3347407f140ce8c2c8f941d

changed email address

view details

David Pollack

commit sha 4981d0bbb2d096a90eba3e796dac2e82d39e5595

torchaudio and fellowship.ai

view details

David Pollack

commit sha 549eb3caf187f90dcc426b1ef28cd1db82d54487

solvemate

view details

push time in 2 months

push event dhpollack/resume

David Pollack

commit sha 9cab461cd5a67634b3852f606e173e988774c925

Site updated at 2020-02-05 10:55:39 UTC

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 8707b94b596e58cc3f0e9554a6c16eba85fdf6ab

weekend checkpoint

view details

push time in 2 months

delete branch dhpollack/libtorch_custom_dataset

delete branch : initial_commit

delete time in 2 months

push event dhpollack/libtorch_custom_dataset

David Pollack

commit sha 45259526c13cc8329a0d90dc8cf5ff4ac31b9fd9

added documentation and minor updates

view details

David Pollack

commit sha 2c0255229def59ffd37cb3036d51c40d84343a7e

Merge branch 'initial_commit'

view details

push time in 2 months

PR opened pytorch/tutorials

Add custom dataset and dataloader tutorial for C++

I was struggling with creating a custom dataset and dataloader for the C++ frontend partially because I couldn't find a good minimal example and ran into a few gotchas when trying to model something after the MNIST dataset. This tutorial might help others avoid the pitfalls that I ran into.

Signed-off-by: David Pollack david@da3.net

+411 -0

0 comment

2 changed files

pr created time in 2 months

create branch dhpollack/tutorials

branch : dhp/cpp_dataset_tutorial

created branch time in 2 months

PR opened dhpollack/huggingface_libtorch

Datasets and processors

checkpoint for datasets and processors

  • working on the squad dataset

    • thinking of a smart way of preprocessing vs online processing
  • renamed a bunch of files (and will probably do some more renaming before I merge this)

+806 -262

0 comment

31 changed files

pr created time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha f61914c6e7867191d0fa108ac56aef51ff9cfb83

Merge pull request #1 from dhpollack/install_scripts Install scripts

view details

David Pollack

commit sha 5c5a17eebc452802b884becc40696ebf281d9f47

update setup scripts * download data * clean up after download * remove symlink that I was using locally before

view details

David Pollack

commit sha 00f0eaaee9daa8081af9bd86786f91550afebc72

working example gets better than random (but not as good as expected)

view details

David Pollack

commit sha 970c9937c89ad822739b37879535044cbf0a2869

update README.md to reflect new changes

view details

David Pollack

commit sha 19379c5126a9cc2625a5047f3a4f907f31e3aa28

fixed tokenization forgot to put the BOS and EOS tokens onto each example. Added these and boosted accuracy from 61% to 83%. I was also calculating the length of the example incorrectly which was causing the attention mask to not be properly sized. Also created a script to trace the model instead of putting the instructions in the README. Lastly, I removed the absolute paths and made them relative paths.

view details

David Pollack

commit sha e70927802873de35185765e3da92f04badf5682b

add cli arguments

view details

David Pollack

commit sha d1872cd6f3194332ff6b48236985b2bbf518aff5

importing header files more correctly

view details

David Pollack

commit sha 318b8457b514116dc8081b77899f6dabbbba51a8

clang-format

view details

David Pollack

commit sha 6d0ba0e35f1d71d0c8875716e5add54420c57064

enabling CUDA

view details

David Pollack

commit sha b65181840127cae7aebe2427dc407a4692034fcf

fixes for CUDA models

view details

David Pollack

commit sha 464c0a9cd79aa6837deae2d265240672374a0273

more CUDA fixes

view details

David Pollack

commit sha 5db41e89a2fe3a52f1558273115b6e17c1e4761e

removed some extraneous code related to working on CUDA devices

view details

David Pollack

commit sha 0977fcdfbdc6804d4d1688b2218a6d2444c5f2b7

accidently deleted the trained model so I had to reupload it

view details

David Pollack

commit sha d6256c5e17b2ead3120481105af5a811633bd729

update README with Google Colab notebook with GPU example

view details

David Pollack

commit sha a75fd01a9b35b88c02173b6023618221896e3bbc

adding travis ci

view details

David Pollack

commit sha 04adfd1b8fcd33ca6efc3f71b9e99af3a2fd4770

add env file to .travis.yml

view details

David Pollack

commit sha 4212085783f5f9ea3e4fd22ade760b0b4d43c7e7

psuedo test in travis

view details

David Pollack

commit sha 0a5bc73ccb317a82834844cdb7ff19f6499d0354

fix to travis

view details

David Pollack

commit sha 186805e1bf535a75a4bd1e22b7d42c26b6d4a99b

adding travis build icon. super important

view details

David Pollack

commit sha c1a2e6c4351ea92483136e8df88e535e5c609ae2

add tests and ignore first line of csv

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 032b14ce164608cf8694bbe4621eef75d15a77bc

different dataset types * creating a dataset type for classification problems (currently only for multi-class problems) and question answering (based on SQuAD). * continuing to try to make these datasets more generic without becoming too difficult to use/understand * done some speed comparisons with the original huggingface and Google implementations of SQuAD. This library is about 10x fast than huggingface and 100x faster than Google. * wrote a few more tests

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 3db5d585e087f9c68faef08bcc1aa28e2ac688a0

update travis to download models before make

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 296dfe9e86fb6319ab8ab5ac68352a8f68d09151

added processors and generic datasets ultimate goal is to make everything a bit more generic. this commit adds processors and a slightly more generic dataset type. I also made a bunch of updates related to clang-tidy. Additionally, I created a micro dataset for testing from the SST-2 dataset. I also wrote more scripts around installing the pretrained model. Finally, I updated the README.

view details

push time in 2 months

delete branch dhpollack/huggingface_libtorch

delete branch : template_dataset

delete time in 2 months

create branch dhpollack/huggingface_libtorch

branch : datasets_and_processors

created branch time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 9bf92bb8ccd5c989132128f3c9d2122c79a2d887

changes related to libtorch 1.4 began testing this with libtorch 1.4 and ran into a lot of problems with gcc-9. I was able to get this to compile with gcc-8 or clang. Some of the changes in this commit are related to clang, which gave me a few different errors than gcc.

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha e1428d939c901f5ffa4670173da5dc7e988f1e78

add .clang-format file

view details

push time in 2 months

issue opened dhpollack/huggingface_libtorch

build failure with gcc-9

I am having a build failure with gcc-9 using the downloaded version of libtorch 1.4 from pytorch.org. I have tried both the pre-cxx11 and cxx11 ABIs and run into the same problem with a minimal libtorch program.

This could be related to pytorch/pytorch#32277, as I get a similar build error and they are also using Arch. Note that I was able to build with gcc-8 and clang on the same system. Additionally, I was able to build with libtorch 1.3.1 with gcc-9. The error log itself is 5 MB of text uncompressed, and it can be obtained here.

CMakeLists.txt

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(minimal-libtorch)

find_package(Torch REQUIRED)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
add_executable(minimal-libtorch main.cpp)
target_link_libraries(minimal-libtorch "${TORCH_LIBRARIES}")
set_property(TARGET minimal-libtorch PROPERTY CXX_STANDARD 14)

main.cpp

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::Tensor t = torch::rand({2, 3});
  std::cout << t << std::endl;
}
[ 50%] Building CXX object CMakeFiles/minimal-libtorch.dir/main.cpp.o
In file included from /home/david/.local/libtorch_cxx11abi_1_4/include/ATen/core/TensorMethods.h:10,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/ATen/Tensor.h:12,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/ATen/Context.h:4,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/ATen/ATen.h:5,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/all.h:4,
                 from /home/david/.local/libtorch_cxx11abi_1_4/include/torch/csrc/api/include/torch/torch.h:3,
                 from /home/david/Programming/experiments/c++/libtorch_headers/minimal/main.cpp:1:
/home/david/.local/libtorch_cxx11abi_1_4/include/ATen/core/dispatch/Dispatcher.h: In instantiation of ‘Return c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = void; Args = {const at::Tensor&, const at::Tensor&, bool, bool}]’:
/home/david/.local/libtorch_cxx11abi_1_4/include/ATen/core/dispatch/Dispatcher.h:201:114:   required from ‘Return c10::Dispatcher::callUnboxedOnly(const c10::OperatorHandle&, Args ...) const [with Return = void; Args = {const at::Tensor&, const at::Tensor&, bool, bool}]’
/home/david/.local/libtorch_cxx11abi_1_4/include/ATen/core/TensorMethods.h:66:75:   required from here
/home/david/.local/libtorch_cxx11abi_1_4/include/ATen/core/dispatch/Dispatcher.h:211:80: error: redeclaration of ‘const at::Tensor& args#0’
  211 |     return kernel.template callUnboxedOnly<Return, Args...>(std::forward<Args>(args)...);
      |                                                                                ^~~~

...

/home/david/.local/libtorch_cxx11abi_1_4/include/c10/util/LeftRight.h:67:10: error: ‘typename std::result_of<F(const T&)>::type c10::LeftRight<T>::read(F&&) const [with F = c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>; Args = {at::Tensor&, at::Tensor&, at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, const at::Tensor&, const at::Tensor&}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>; T = ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>; typename std::result_of<F(const T&)>::type = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>]’, declared using local type ‘c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>; Args = {at::Tensor&, at::Tensor&, at::Tensor&, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, const at::Tensor&, const at::Tensor&}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>’, is used but never defined [-fpermissive]
/home/david/.local/libtorch_cxx11abi_1_4/include/c10/util/LeftRight.h:67:10: error: ‘typename std::result_of<F(const T&)>::type c10::LeftRight<T>::read(F&&) const [with F = c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = std::tuple<at::Tensor, at::Tensor, at::Tensor>; Args = {const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, const at::Tensor&, const at::Tensor&, std::array<bool, 3>}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>; T = ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>; typename std::result_of<F(const T&)>::type = std::tuple<at::Tensor, at::Tensor, at::Tensor>]’, declared using local type ‘c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = std::tuple<at::Tensor, at::Tensor, at::Tensor>; Args = {const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, c10::ArrayRef<long int>, const at::Tensor&, const at::Tensor&, std::array<bool, 3>}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>’, is used but never defined [-fpermissive]
/home/david/.local/libtorch_cxx11abi_1_4/include/c10/util/LeftRight.h:67:10: error: ‘typename std::result_of<F(const T&)>::type c10::LeftRight<T>::read(F&&) const [with F = c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = at::Tensor&; Args = {at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>; T = ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>; typename std::result_of<F(const T&)>::type = at::Tensor&]’, declared using local type ‘c10::Dispatcher::doCallUnboxedOnly(const c10::DispatchTable&, const c10::LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction> >&, Args ...) const [with Return = at::Tensor&; Args = {at::Tensor&, const at::Tensor&, const at::Tensor&, c10::ArrayRef<long int>, const at::Tensor&, c10::ArrayRef<long int>, c10::ArrayRef<long int>}]::<lambda(const ska::flat_hash_map<c10::TensorTypeId, c10::KernelFunction>&)>’, is used but never defined [-fpermissive]
[the same LeftRight.h:67:10 "-fpermissive" error repeats for 14 further instantiations of c10::Dispatcher::doCallUnboxedOnly, differing only in the Return and Args template parameters]
make[2]: *** [CMakeFiles/minimal-libtorch.dir/build.make:63: CMakeFiles/minimal-libtorch.dir/main.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/minimal-libtorch.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

created time in 2 months
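(A minimal sketch of the kind of translation unit behind this build log — hypothetical, since the actual main.cpp is not shown in this feed. Including torch/torch.h is what instantiates the c10::LeftRight/Dispatcher templates appearing in the errors above; one general caveat is that the build's `_GLIBCXX_USE_CXX11_ABI` setting must match the libtorch download, pre-cxx11 vs. cxx11 ABI, or compile/link errors follow.)

```cpp
// main.cpp -- hypothetical minimal libtorch program, not the repository's actual file.
// Any tensor op routes through the c10 dispatcher whose headers error out above,
// so this is already enough to surface header-level template problems.
#include <torch/torch.h>

#include <iostream>

int main() {
  torch::Tensor t = torch::randn({2, 3});
  std::cout << t.sum() << '\n';
  return 0;
}
```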

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 8acd8050c1248f9dab50283e36fe33716e6089cb

add .clang-format file

view details

push time in 2 months

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 0474500f2b958324e247808751a0ea929d093c55

use c++ templates for dataset and tokenizer

  • added a custom type for the features
  • dataset now uses templates to accept a custom type, `TransformerFeatures<>`, rather than the default MNIST `Example<>`
  • dataset is templated for different tokenizers
  • refactored code to expect the custom type rather than the hack of stacking the different inputs into the channels dimension of the MNIST type
  • created a Stack struct for the transform for our custom type (see the sketch after this push event)

view details

David Pollack

commit sha b7d0e75bff13f33636b2266652713e950d762a60

remove from header files

view details

David Pollack

commit sha 8ff68a96a6209ddd64213ee47bdc3227ca5b8086

Merge pull request #3 from dhpollack/template_dataset: use c++ template dataset

view details

push time in 2 months
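(The commit above names `TransformerFeatures<>` and a Stack struct; below is a minimal sketch of that pattern against the libtorch dataset API. The field names, tokenizer parameter, and placeholder tensors are invented for illustration, not taken from the repository.)

```cpp
#include <torch/torch.h>

#include <utility>
#include <vector>

// Custom feature type replacing the default MNIST-style torch::data::Example<>.
template <typename Tensor = torch::Tensor>
struct TransformerFeatures {
  Tensor input_ids;
  Tensor attention_mask;
  Tensor label;
};

// Dataset templated on the tokenizer and yielding TransformerFeatures<>.
template <typename Tokenizer>
class TransformerDataset
    : public torch::data::datasets::Dataset<TransformerDataset<Tokenizer>,
                                            TransformerFeatures<>> {
 public:
  explicit TransformerDataset(Tokenizer tokenizer)
      : tokenizer_(std::move(tokenizer)) {}

  TransformerFeatures<> get(size_t /*index*/) override {
    // Placeholder tensors; a real implementation would tokenize the raw text.
    return {torch::zeros({128}, torch::kLong),
            torch::ones({128}, torch::kLong),
            torch::zeros({1}, torch::kLong)};
  }

  torch::optional<size_t> size() const override { return 0; }

 private:
  Tokenizer tokenizer_;
};

// Stack-style collation for the custom type; the built-in transforms::Stack
// only knows torch::data::Example<>, hence a custom struct.
struct StackFeatures
    : torch::data::transforms::Collation<TransformerFeatures<>> {
  TransformerFeatures<> apply_batch(
      std::vector<TransformerFeatures<>> batch) override {
    std::vector<torch::Tensor> ids, masks, labels;
    for (auto& f : batch) {
      ids.push_back(std::move(f.input_ids));
      masks.push_back(std::move(f.attention_mask));
      labels.push_back(std::move(f.label));
    }
    return {torch::stack(ids), torch::stack(masks), torch::stack(labels)};
  }
};
```

(A dataset instance would then be mapped with `.map(StackFeatures{})` before being handed to a data loader.)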

PR merged dhpollack/huggingface_libtorch

use c++ template dataset

use C++ templates for the dataset class and remove `using namespace ...` from header files

+327 -283

0 comments

17 changed files

dhpollack

pr closed time in 2 months

PR opened dhpollack/huggingface_libtorch

use c++ template dataset

use C++ templates for the dataset class and remove `using namespace ...` from header files

+327 -283

0 comments

17 changed files

pr created time in 2 months

create branch dhpollack/huggingface_libtorch

branch : template_dataset

created branch time in 2 months

push event dhpollack/libtorch_custom_dataset

David Pollack

commit sha fb2dec4332222bb043c402e4453c7a0f919d0f0d

initial commit

view details

David Pollack

commit sha daee0c1aa3fd44edda01b6e683897530f6d921cf

linting

view details

David Pollack

commit sha 7267d26d01639351ded9025116cf2cc618a68894

Merge pull request #1 from dhpollack/initial_commit: Initial commit

view details

push time in 2 months

push event dhpollack/libtorch_custom_dataset

David Pollack

commit sha daee0c1aa3fd44edda01b6e683897530f6d921cf

linting

view details

push time in 2 months

push event dhpollack/libtorch_custom_dataset

David Pollack

commit sha fb2dec4332222bb043c402e4453c7a0f919d0f0d

initial commit

view details

push time in 2 months

create branch dhpollack/libtorch_custom_dataset

branch : initial_commit

created branch time in 2 months

create branch dhpollack/libtorch_custom_dataset

branch : master

created branch time in 2 months

created repository dhpollack/libtorch_custom_dataset

A dataset with custom inputs using the pytorch c++ frontend, libtorch

created time in 2 months

issue comment tensorflow/tensorflow

Dataset padded_batch does not work as documented

Ok, still seems unintuitive but that's a very helpful answer. I'll give it a shot. Thanks for clearing that up for me. 👍🏼

dhpollack

comment created time in 2 months

issue comment tensorflow/tensorflow

Dataset padded_batch does not work as documented

@jsimsa the documentation is at best unclear. If I want to pad my tensor in all dimensions with -1, why can't I use a simple integer? This seems like the most practical case, and it isn't clear to me how to do it. I'm willing to admit this might not be a bug, but in that case it's a feature request to allow simple types to be propagated logically.

dhpollack

comment created time in 2 months

delete branch dhpollack/huggingface_libtorch

delete branch : add_tests

delete time in 3 months

PR merged dhpollack/huggingface_libtorch

Major Refactoring

Major refactoring of the code base. The following were done:

  • create transformers config file readers (a sketch follows this entry)
  • add tests with google tests
  • break program into library and executable
  • split tokenizer from dataset
  • reconfigure cmake files
  • more
+815 -180

0 comments

28 changed files

dhpollack

pr closed time in 3 months
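(Below, a minimal sketch of what the "transformers config file readers" item could look like using nlohmann::json, the dependency the travis commits further down pull in. The field names follow the huggingface config.json convention; the struct and function are otherwise assumptions, not the PR's actual code.)

```cpp
#include <nlohmann/json.hpp>

#include <cstdint>
#include <fstream>
#include <string>

// A few of the fields found in a huggingface-style config.json.
struct TransformerConfig {
  int64_t hidden_size = 0;
  int64_t num_attention_heads = 0;
  int64_t num_hidden_layers = 0;
  int64_t vocab_size = 0;
};

// Parse the config file from disk; json::at throws on missing keys.
TransformerConfig read_config(const std::string& path) {
  std::ifstream in(path);
  nlohmann::json j;
  in >> j;
  TransformerConfig cfg;
  cfg.hidden_size = j.at("hidden_size").get<int64_t>();
  cfg.num_attention_heads = j.at("num_attention_heads").get<int64_t>();
  cfg.num_hidden_layers = j.at("num_hidden_layers").get<int64_t>();
  cfg.vocab_size = j.at("vocab_size").get<int64_t>();
  return cfg;
}
```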

push event dhpollack/huggingface_libtorch

David Pollack

commit sha 979b5233d49c3a7a725ed0c129a81d90764d5798

add tests and ignore first line of csv

view details

David Pollack

commit sha 927b17f9dd4c67cd5765609e8f36862ad657a638

add nlohmann dependency to travis

view details

David Pollack

commit sha d4a851c6911399b8b447eb75f430d43571e59f12

download nlohmann

view details

David Pollack

commit sha 594ed398254e684fa5667928f051f0a66e84484a

fix cmake file

view details

David Pollack

commit sha 6c335131a5c2348e82857f33277899619959130e

update bash envs

view details

David Pollack

commit sha c8c2ac35de372298b7682dba2e2a12e398e4a05d

remove apt nlohmann because it's old

view details

David Pollack

commit sha 2db326de8186d42fac34eeb6b536b2083fecfd98

troubleshooting cmake

view details

David Pollack

commit sha 69b2857e034c706656899b3b207f84dbacdf3ee5

lots of refactoring in cmake

view details

David Pollack

commit sha ea6380dc3a9435a420b76a4bab261b89345abfa0

forgot to add the new run_model.h file with last commit

view details

David Pollack

commit sha a90cc6fc313a4ac040494710ec9ce3dbc9a0224d

build googletest as shared library; see [this Stack Overflow question](https://stackoverflow.com/questions/21116622/undefined-reference-to-pthread-key-create-linker-error) for more details

view details

David Pollack

commit sha 4788b46cecc3ef62fa9b6948fc57a8c079d1e030

broke path to gtest in last commit

view details

David Pollack

commit sha 948be31fd92598adfc610a3ffa55e240e7f8978f

add rebuild of ld config

view details

David Pollack

commit sha f1b6beac77469401aa52eac65a84806fc73a3f69

typo in cmake file

view details

David Pollack

commit sha 3263cdef0247c2afa810edaf39f2236ab949e514

another typo in travis

view details

David Pollack

commit sha ea72535045616a8ba164b0d70c6d2afd890fdb15

another travis fix

view details

David Pollack

commit sha b9221efa457c3b8ad0843b101bc6ff211d0bf2f6

gtests not working but shared library seems to work correctly

view details

David Pollack

commit sha d029d1028e4d35a57ad79aac8b099a4b0f0e6dfd

switched to the cxx11 ABI version of libtorch. The non-cxx11 version doesn't seem to work with googletest. This is the "second" link on the pytorch downloads page. (See the ABI guard sketch after this push event.)

view details

David Pollack

commit sha c796a54e9d1e9e303af966ccdae648522a023ce9

travis conda updates

view details

David Pollack

commit sha 684cd9eec4759bc612396a86326ec858e55702d9

major refactor checkpoint.

view details

David Pollack

commit sha 82a73172048569614f58792e92ea6d555357ff19

created an albert tokenizer class

view details

push time in 3 months
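(Regarding the ABI switch in the commit above: a hypothetical compile-time guard, not from the repository, that fails fast when the project's std::string ABI doesn't match the cxx11-ABI libtorch download.)

```cpp
// abi_check.h -- hypothetical guard. libstdc++ defines _GLIBCXX_USE_CXX11_ABI
// in its headers; its value at compile time must match the ABI of the libtorch
// binaries being linked (the pre-cxx11 vs. cxx11 download links).
#include <string>

#if defined(_GLIBCXX_USE_CXX11_ABI) && !_GLIBCXX_USE_CXX11_ABI
#error "Built with the pre-cxx11 ABI; use the matching libtorch download or -D_GLIBCXX_USE_CXX11_ABI=1"
#endif
```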
