Kashif Rasul (kashif), @zalando, Berlin, Germany. Research Scientist at @zalando Research working on Deep Learning, Reinforcement Learning and HPC.

hannes-brt/cudnn-python-wrappers 128

Python wrappers for the NVIDIA cuDNN libraries

kashif/cuda-workshop 25

Code examples for the CUDA workshop

kashif/ceres-solver 19

A Nonlinear Least Squares Minimizer

kashif/deep-learning-models 2

Keras code and weights files for popular deep learning models.

kashif/convnetjs 1

Deep Learning in Javascript. Train Convolutional Neural Networks (or ordinary ones) in your browser.

kashif/cudnn-python-wrappers 1

Python wrappers for the NVIDIA cuDNN libraries

kashif/deep-learning-time-series 1

List of papers, code and experiments using deep learning for time series forecasting

kashif/activerecord-postgis-adapter 0

ActiveRecord connection adapter for PostGIS, based on postgresql and rgeo

kashif/AdamW-and-SGDW 0

Fixing Weight Decay Regularization in Adam

created tag zalandoresearch/pytorch-ts

tag v0.1.1

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in 5 hours

release zalandoresearch/pytorch-ts

v0.1.1

released time in 5 hours

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 66c81482b64b7324cfe69665d8fe1b62bbe4eef2

update v0.1.1

view details

push time in 5 hours

delete tag zalandoresearch/pytorch-ts

delete tag : v0.1.1

delete time in 5 hours

created tag zalandoresearch/pytorch-ts

tag v0.1.1

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in 5 hours

release zalandoresearch/pytorch-ts

v0.1.1

released time in 5 hours

delete tag zalandoresearch/pytorch-ts

delete tag : v0.1.1

delete time in 5 hours

created tag zalandoresearch/pytorch-ts

tag v0.1.1

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in 5 hours

release zalandoresearch/pytorch-ts

v0.1.1

released time in 5 hours

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 746d6d97b559f0158f474fedf78407fa2c3d398e

update link to images

view details

push time in 5 hours

created tag zalandoresearch/pytorch-ts

tag v0.1.0

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in 6 hours

release zalandoresearch/pytorch-ts

v0.1.0

released time in 6 hours

delete tag zalandoresearch/pytorch-ts

delete tag : v0.1.0

delete time in 6 hours

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 1d341ac515a230a6d85886a880e8f4c4936bf00d

fix NB test

view details

push time in 6 hours

pull request comment awslabs/gluon-ts

Scale the negative binomial's gamma()

# Imports below are an editorial addition; the pts paths are assumed from the
# pytorch-ts style API (suggested by Trainer(device=...)) and may differ by version.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from pts import Trainer
from pts.dataset import ListDataset
from pts.model.simple_feedforward import SimpleFeedForwardEstimator
from pts.modules import NegativeBinomialOutput

data = np.random.negative_binomial(n=10, p=0.9, size=(200,))

data_series = pd.Series(
    data=data,
    index=pd.date_range(
        start='2014-01-05 00:00:00',
        periods=len(data),
        freq='w'
    )
)

dataset = ListDataset(
    data_iter=[
        {
            "start": '2014-01-05 00:00:00',
            "target": list(data)
        }
    ],
    freq="w"
)

estimator = SimpleFeedForwardEstimator(
    freq="w",
    prediction_length=20,
    distr_output=NegativeBinomialOutput(),
    trainer=Trainer(epochs=20, device='cuda'),
)

predictor = estimator.train(dataset)

forecast = next(iter(predictor.predict(dataset)))
data_series.plot()
forecast.plot()
plt.show()

[attached image: forecast plot]

kashif

comment created time in 6 hours

issue closed zalandoresearch/pytorch-ts

pip3 install failing

Outputs

ERROR: Could not find a version that satisfies the requirement pytorchts (from versions: none)
ERROR: No matching distribution found for pytorchts

closed time in 11 hours

nsankar

issue comment zalandoresearch/pytorch-ts

pip3 install failing

It's on PyPI now.

nsankar

comment created time in 11 hours

created tag zalandoresearch/pytorch-ts

tag v0.1.0

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in 11 hours

release zalandoresearch/pytorch-ts

v0.1.0

released time in 11 hours

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha d39c569d47a1bb5f34f4e0c37e8766ad29f1b447

publish to test-pypi with user token updated url added author only sdist make bdist_wheel fix typo publish to pypi moved to 1 workflow remove test workflow add to test github.event_name run on relase created

view details

push time in 11 hours

delete tag zalandoresearch/pytorch-ts

delete tag : v0.1.0

delete time in 11 hours

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 70bc32ee19f41385bbad7dc328a1f3d3491315c8

run on relase created

view details

push time in 11 hours

created tag zalandoresearch/pytorch-ts

tag v0.1.0

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in a day

release zalandoresearch/pytorch-ts

v0.1.0

released time in a day

delete tag zalandoresearch/pytorch-ts

delete tag : v0.1.0

delete time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 1e7239e743fe6c1435252416a7eb9abe86eef091

github.event_name

view details

push time in a day

created tag zalandoresearch/pytorch-ts

tag v0.1.0

PyTorch based Probabilistic Time Series forecasting framework based on GluonTS backend

created time in a day

release zalandoresearch/pytorch-ts

v0.1.0

released time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha be4451c94a83fcc8b3bd5df34a96ab6744f12077

moved to 1 workflow

view details

Kashif Rasul

commit sha 8a27d433ff6b82677682e1e4149bf84197839057

remove test workflow

view details

Kashif Rasul

commit sha 7d05a22ff73e075b9a7f028025a6312d31554ebb

add to test

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 594cb3710dc740aed689e8228090687e27a6c223

fix typo

view details

Kashif Rasul

commit sha fbf1e11ef46b435ce5bc97d14ded2eddc9b228d1

publish to pypi

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 66056752cfc77beb69b2ba74997adb194f9eea5f

make bdist_wheel

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 55b8b8da538933c01b5e30105539e0c9a9d9a206

added author

view details

Kashif Rasul

commit sha 27acae1f9ff0edd14b8ae450e9c6e7dad4112f55

only sdist

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha e9b03912fa10c0ed8fdaaccdfd83c477b2076088

updated url

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 13c53de28740e14569aa8c93433f05a28dd60b47

with user token

view details

push time in a day

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 90a425337de24e54833e781cfe3e082eefbc4c1c

publish to test-pypi

view details

push time in a day

PR opened awslabs/gluon-ts

Scale the negative binomial's gamma()

For issue #906

The `theta` parameter of the Gamma gets scaled via `scale * theta = alpha * mu * scale`, so we only need to scale `mu`.

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
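The claim in the description can be sanity-checked with a tiny standalone sketch (the helper below is hypothetical, built only from the relation quoted above, `theta = alpha * mu`): scaling `mu` by a factor `s` scales `theta` by the same factor, so scaling `mu` alone suffices.

```python
# Hypothetical helper based on the relation quoted in the PR description:
# theta = alpha * mu. Scaling mu by s scales theta by s.
def theta(alpha: float, mu: float) -> float:
    return alpha * mu

alpha, mu, s = 0.5, 4.0, 2.5
print(theta(alpha, s * mu) == s * theta(alpha, mu))  # True
```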

+0 -2

0 comment

1 changed file

pr created time in 3 days

push event kashif/gluon-ts

Kashif Rasul

commit sha cc09b0268a15d9df9fb9db9494d025080487b284

Scale the negative binomial's gamma() For issue #906 The `theta` param. of Gamma gets scale by `scale *theta = alpha * mu * scale`. So only need to scale the `mu` .

view details

push time in 3 days

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha f0d64126c649fb9879e83f68d14afbf82ad7d4ab

remove initialization

view details

push time in 3 days

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha ac1e89e2cba609426aaeb908aa1d8cdf7bcff5a7

formatting

view details

Kashif Rasul

commit sha ead4609f919bebea8e7bd7816771d977ca002371

use logits for neg. bin.

view details

push time in 3 days

issue comment awslabs/gluon-ts

DeepAR + NegativeBinomialOutput performance degradation after upgrading from v0.4.2 to 0.5.0

OK, so an alternative way, which I think is easier to reason about, is that the NB is the discrete analogue of the Gamma(alpha, beta): the NB is just a Poisson whose rate is sampled from a Gamma(alpha, beta), so a scaled rate c * rate needs to come from a Gamma(alpha, beta / c). Thus, if I use the total_count and logit parametrization of the NB, as I have it in PyTorch, then scaling only means logit += log(scale). This formulation is simpler to code as well, and then the above works:

[attached image]

I can send a PR with this method.
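A standalone sanity check (not library code) that, under the total_count/logit parametrization, `logit += log(c)` scales the NB mean by exactly c: with p / (1 - p) = exp(logit), the mean is total_count * p / (1 - p) = total_count * exp(logit).

```python
import math

def nb_mean(total_count: float, logit: float) -> float:
    # Mean of a negative binomial in the total_count/logit parametrization:
    # total_count * p / (1 - p), where p / (1 - p) = exp(logit).
    return total_count * math.exp(logit)

base = nb_mean(total_count=10.0, logit=-1.2)
scaled = nb_mean(total_count=10.0, logit=-1.2 + math.log(3.0))
print(scaled / base)  # 3.0 up to floating-point rounding
```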

jsirol

comment created time in 3 days

issue comment awslabs/gluon-ts

DeepAR + NegativeBinomialOutput performance degradation after upgrading from v0.4.2 to 0.5.0

Thanks @jgasthaus, so I am checking with relu etc. I need to make food for the kids and will look again by tonight.

jsirol

comment created time in 3 days

issue comment awslabs/gluon-ts

DeepAR + NegativeBinomialOutput performance degradation after upgrading from v0.4.2 to 0.5.0

@lostella one minor issue is that when scale=1.0, the transformation https://github.com/awslabs/gluon-ts/blob/master/src/gluonts/mx/distribution/neg_binomial.py#L136

scale = 1.0 + softplus(scale - 1.0) maps the scale to about 1.69 (since softplus(0) = log 2), which ends up changing the alpha and mu
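The transformed value at scale = 1.0 can be reproduced with a tiny standalone sketch (softplus reimplemented here for illustration): 1 + softplus(0) = 1 + log 2 ≈ 1.69, not the identity.

```python
import math

def softplus(x: float) -> float:
    # Numerically-naive softplus; fine for this small check.
    return math.log1p(math.exp(x))

scale = 1.0
transformed = 1.0 + softplus(scale - 1.0)
print(transformed)  # 1 + log(2), roughly 1.693
```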

jsirol

comment created time in 3 days

started idiap/fast-transformers

started time in 7 days

push event kashif/tensorflow

Thomas O'Malley

commit sha 86a6d44352a7919d4b2417ca042a016c9fc6c3ca

Fix to add_metric to not create duplicate metrics when a Model is called on new inputs. PiperOrigin-RevId: 243094192

view details

Ruoxin Sang

commit sha 36e10587b556c5f61d7c7d2c9728cc076fbdac79

Make `tf.reshape(tensor, shape)` work with the case that `tensor` has 0 dimension and `shape` has unknown dimension. PiperOrigin-RevId: 243094911

view details

Dimitris Vardoulakis

commit sha 90a4b1ecfea75482eee20c3d051757517b5c8001

[TF:XLA] Print the layouts of the shapes because when the check fails, the difference may be in the layouts. PiperOrigin-RevId: 243099426

view details

Scott Zhu

commit sha 9d724a8e6034d321e97cdc9972d4d6e7adb3e3ca

Fix #27431. The layer class was incorrectly tracked by layer._layer, which should only track layer instance. This should be mitigated once b/110718070 is fixed. PiperOrigin-RevId: 243103748

view details

Pengchong Jin

commit sha 35e39cd3f102d663d198074e95ff77197d2e0a1d

Add an option to clip/not clip box outputs in CombinedNonMaxSuppression. PiperOrigin-RevId: 243104621

view details

Akshay Modi

commit sha fb8f00c0b35d25511bbd22f38e2ce340ec229cf5

Return failed status when client not found. PiperOrigin-RevId: 243105084

view details

A. Unique TensorFlower

commit sha 1942fea1b8a2be3e9eaad93cc9bd6af5a285a71f

Updates versions for Apple and Swift Bazel rules. PiperOrigin-RevId: 243106606

view details

Brian Lee

commit sha 80a47e09c97f8b7d4b45d3ed86c09b20ffca3ea4

Switch default for autograph.(to_graph|to_code|convert) to optional_features=None to better align with tf.function PiperOrigin-RevId: 243108206

view details

Saurabh Saxena

commit sha 826a2450d11bfdcefb4809ac8d3cad1345edb45b

Use the element_shape input when doing shape inference for TensorListGetItem, TensorListStack and TensorListGather. This avoids the need to manually set the shape of the output tensor in TensorArray.read/stack/gather. This is necessary to make shape inference correctly work in a deserialized graph containing v2 TensorArray ops, e.g. when building the gradient of tf.cond/while_loop. PiperOrigin-RevId: 243109816

view details

Greg Billock

commit sha 1b206c1a31cd530b55817e48e0f54ccb8ba112b7

Add benchmark for unicode_script op. PiperOrigin-RevId: 243111734

view details

Eugene Zhulenev

commit sha ceeea9c91647e21cb846599c0d326f22de98c534

[Grappler] Add IsAnyMatMul and restrict IsMatMul to just 'MatMul' op type PiperOrigin-RevId: 243115109

view details

A. Unique TensorFlower

commit sha 9a4141c306bf2a13bf4d83e685086a5fe91a3b95

Added timestamp flag in TensorFlow TPU profiler for monitoring. PiperOrigin-RevId: 243116546

view details

A. Unique TensorFlower

commit sha f8c7522bb4d72d82f483f5aaf21008dc051c4bd0

Update ops-related pbtxt files. PiperOrigin-RevId: 243117391

view details

Allen Lavoie

commit sha 48cb1ae64059ff52bdd9db643a2b9cb0f5e42cc5

Switch to wrapt for trackable dict data structures instead of subclassing collections.Mapping Adds a dependency on wrapt to TensorFlow's pip package. It's not a very large dependency, and this is a usability win when subclassing TensorFlow types (Model, Module, Layer, etc.). wrapt fixes issues with CPython's direct access to dictionaries by inheriting the memory layout of the wrapped object, so we can now pass isinstance(obj, dict) checks while staying correct (i.e. {}.update(obj) won't look at the possibly-stale wrapper object's memory). There are several new oddities with the wrapt approach, but overall this is much better. We're already doing dict wrapping, just not in a way that passes isinstance(obj, dict) checks. We need it to support restore-on-create for variables added to objects while executing eagerly. I tried switching _ListWrapper to wrapt too. It works in most Python versions (with some tweaks to TF to accommodate the change), but apparently in Python 3.4 isinstance(obj, (list, tuple)) is not functioning properly. There are no correctness issues with actually subclassing list, and it means type(obj) shows up as a subclass of list, so this isn't too bad. Since ObjectProxy copies the memory layout of the wrapped object we can't do both of these at the same time. PiperOrigin-RevId: 243118363

view details

Dan Moldovan

commit sha fce41a232064112ea3d6ad6c8cb24bba2379e529

Whitelist functions that don't expose a __code__ object. This includes things like native bindings (such as NumPy), as well as harder-to-identify entities like method-wrapper objects. It also allows the converter to step into some of the TF op implementations without error. PiperOrigin-RevId: 243121306

view details

A. Unique TensorFlower

commit sha 9cc1ff6768c14d2d0a01267c9c792055d382234f

Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 243122003

view details

TensorFlower Gardener

commit sha c43ab94a05c0c9aedec20bf2a4b4a94733c7596c

Merge pull request #27588 from neighlyd:patch-1 PiperOrigin-RevId: 243122021

view details

A. Unique TensorFlower

commit sha 8be9158c7a701d933bbe532f5d54df17f47a4284

Remove UD730 Udacity course material, replace with pointer. PiperOrigin-RevId: 243126614

view details

TensorFlower Gardener

commit sha 02c111ab4269ab73a506164e4b54ba996d28a8cf

Merge pull request #25384 from Intel-tensorflow:optimize_filter_dataset_with_random_uniform PiperOrigin-RevId: 243130029

view details

Eugene Zhulenev

commit sha 076f6e4d3ddc9dc4aed2e59f199defb36ba454c2

[Grappler] Add support for FusedMatMul to remapper optimizer PiperOrigin-RevId: 243131510

view details

push time in 7 days

pull request comment awslabs/gluon-ts

Update bibliographic references in README

@lostella may I kindly suggest creating a reference that also acknowledges the work of the open-source contributors?

For example, of my many open-source contributions to various frameworks, Lasagne was the only project that officially acknowledged this, e.g. https://zenodo.org/record/27878#.XvWlqJMzZpI. In other projects we have been involved in, we have tried to add the open-source contributors to the official paper as well, e.g. FlairNLP.

Perhaps one could adopt a similar reference setup here?

lostella

comment created time in 10 days

started google-research/realworldrl_suite

started time in 11 days

push event kashif/firedup

Kashif Rasul

commit sha 9e6605dbb6f564c8e1e35121681581e103c476b5

added support for wandb logging

view details

push time in 13 days

started loeweX/AmortizedCausalDiscovery

started time in 13 days

started ajayjain/lmconv

started time in 13 days

started hojonathanho/diffusion

started time in 14 days

push event kashif/pytorch

ShawnZhong

commit sha c8c53c802e3ee7c1b39d8d8ca45a2e502b08c465

Add `generator=` kwarg for DataLoader & random samplers (#39737) Summary: Fix https://github.com/pytorch/pytorch/issues/39572 Add `generator=` kwarg for DataLoader & random samplers cc: SsnL, deeppatel4557, albanD, mitar Pull Request resolved: https://github.com/pytorch/pytorch/pull/39737 Differential Revision: D22019132 Pulled By: albanD fbshipit-source-id: 835e08b86c5396bc0b0e41057661306b15394d6e

view details

Nikita Shulga

commit sha c6b69a4e4ddde8a0b309638e7f7ef684a45ad683

Delete Python <= 3.5 specific checks from the code (#39879) Summary: Remove PY3 and PY34 checks from `torch/testing/_internal/common_utils.py` Remove PY35 global var from `torch.jit.annotations` Always call `try_get_real_signature` in `torch/jit/annotations.py` Use `map` instead of `imap`, since Python-2 is no longer support, so map is always lazy. Remove all pre Python-3.6 checks from `torch/_six.py` and `torch/_appdirs.py` Pull Request resolved: https://github.com/pytorch/pytorch/pull/39879 Differential Revision: D22037811 Pulled By: malfet fbshipit-source-id: af0c79f976569c2059d39ecb49c6b8285161734f

view details

Jeff Daily

commit sha ac8d63a52f93367dd714f81b84bdaba3dac44c1a

Update jenkins caffe2 scripts for ROCm circleci images. (#39908) Summary: Remove work-around to install conda locally for older ROCm jenkins images. Remove use of sudo to install pip packages. Install missing packages for caffe2 test.sh needs on ROCm. CC ezyang xw285cornell sunway513 Pull Request resolved: https://github.com/pytorch/pytorch/pull/39908 Differential Revision: D22044404 Pulled By: ezyang fbshipit-source-id: da6b5a45dcf68432339ad6f1c4af2d8a96df73f1

view details

peter

commit sha 1d642e2adf69f78e4e7e70be0b8e3b8013200d5c

Improve cuda error message for MSVC (#39987) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39987 Differential Revision: D22039408 Pulled By: ezyang fbshipit-source-id: b15f6eced0aaee1087c77564126aa304623cbed1

view details

Tongzhou Wang

commit sha 019eeb3183e5787553a32bfdc61fbe32e9513b0f

Kill DataLoader worker when we can't join (#39869) Summary: There still are occasional reports of DataLoader workers not exiting (e.g., https://github.com/pytorch/pytorch/issues/39570). Before we figure out why, we should just kill them if the join timesout to prevent hanging. Pull Request resolved: https://github.com/pytorch/pytorch/pull/39869 Differential Revision: D22018501 Pulled By: ezyang fbshipit-source-id: 66a00d0f5b3e303b6106b336949176b3ff8ac8ae

view details

Jeremy Lilley

commit sha 569c85b45deb5bec0517d650ad15bd1ba040315b

[futures] Add assert to Future constValue() accessor, add hasValue(). (#39950) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39950 Per the comment in the code, constValue() should only be used in the case where the future was complete and value was not an error. Add an assert to enforce this. Also, add hasValue() accessor for completeness. ghstack-source-id: 105815597 Test Plan: buck test mode/dev-nosan caffe2/test/cpp/jit: Differential Revision: D22021776 fbshipit-source-id: b59b6c775eab344068a76f4cd8c3a9dc1f2a174e

view details

generatedunixname89002005287564

commit sha 42f0ea49cad151aa44f94df5d26d6e850b8ebb4c

[Codemod][GleanFbcode] Remove dead includes in caffe2/binaries Reviewed By: ilia-cher Differential Revision: D21949969 fbshipit-source-id: 80336f82e9507dd001d079644cba5012bc5c8eed

view details

Mikhail Zolotukhin

commit sha 79450edad32bde56b8fb6f5ce95b8cebc8254fe7

[JIT] IRParser: properly parse negative numbers. (#39981) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39981 Test Plan: Imported from OSS Reviewed By: jamesr66a Differential Revision: D22032786 Pulled By: ZolotukhinM fbshipit-source-id: b6c5237ac5c1c331d5053a620eb9a37a4c698125

view details

Sebastian Messmer

commit sha 4c3436838f75e5f60f9aa6c08a626c2a5b539181

Show which type was the wrong one when a signature is invalid (#39491) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39491 - ghstack-source-id: 105820787 Test Plan: waitforsandcastle Differential Revision: D21872519 fbshipit-source-id: 18f030c2b4283d6e6833d9b5164e7484137ca0fb

view details

Hongzheng Shi

commit sha f6b0fbe2c5f355190113c362e9e263df8903c88e

topk tensor k support (#39407) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39407 - support passing a single element tensor as k for topk module - support passing a single element tensor to constant fill output Test Plan: buck test dper3/dper3/modules/tests:core_modules_test -- test_topk_gating_without_split_examples_tensor_k buck test caffe2/caffe2/python:hypothesis_test -- test_constant_fill_from_tensor Reviewed By: huayuli00 Differential Revision: D21843739 fbshipit-source-id: 0c5f5c03e9f57eeba40c0068784625164c2527ec

view details

Ilia Cherniavskii

commit sha cc3fc786b7ba04a0918f0e817a896a09f74f7e78

[resubmit] [pytorch][PR] Fix for num_threads==1 in OpenMP "parallel for" (#39533) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39533 Test Plan: CI Reviewed By: ngimel Differential Revision: D21889269 fbshipit-source-id: 5ba13a0a3ec11edd0b6a7c3fdb35396b847a3d9e

view details

Jeremy Lilley

commit sha 0c2542859754a160aa7b6e946f2011e54c710781

[futures] Reland: Add torch.futures.collect_all()/wait_all() python api. (#39964) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39964 The "[fut.wait() for fut in futs]" idiom can introduce up to O(len(futs)) thread switches, which may be excessive for large N. This plumbs through the new c++ c10::collectAll() to Python space so that we only employ a single jit-side wait. Test Plan: buck test mode/dev-nosan caffe2/test/distributed/rpc:rpc_spawn Differential Revision: D22027412 fbshipit-source-id: 4e344a19a09638ee46e7fc478df80a41941b84ce

view details

Xiong Wei

commit sha 51e341df4fda6b9b1d7ca101394eb447dcf327a3

[bernoulli_kernel] Replace CPU_tensor_apply functions with cpu_serial_kernel (#39711) Summary: Resolve https://github.com/pytorch/pytorch/issues/39556 Related https://github.com/pytorch/pytorch/issues/38558 Replace CPU_tensor_apply functions with cpu_serial_kernel in bernoulli_kernel, unifying bernoulli_kernel with all other kernels in `cpu/DistributionTemplates.h`. Signed-off-by: Xiong Wei <xiongw.fnst@cn.fujitsu.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/39711 Differential Revision: D22052374 Pulled By: pbelevich fbshipit-source-id: 416334da50195b67f05a18a98971f370cba4fb0d

view details

Luca Wehrstedt

commit sha ecfe0c9a2569f50b358d178dd7c24b59fde2ec8a

[TensorPipe] Use registry for transports and channels (#39957) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39957 In order to provide a pluggable and extendable way to add new transports and channels to the TensorPipe agent, we use two registries. This allows us to separate the specific details of each backend (e.g., how it determines what address to use) from the generic logic of setting up TensorPipe. Test Plan: Built `//caffe2:ifbpy` on two devservers, one in CLN and the other in PRN, and ran RPC across them. Differential Revision: D22017614 fbshipit-source-id: 4ea7e6ed004a69187666f41bf59858e8174fde0d

view details

Shihao Xu

commit sha d602950cb430e0a0e0e2147f2ebe049537e39b6e

[torch.distributed.rpc] Add WorkerInfo python repr magic method (#40004) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40004 close https://github.com/pytorch/pytorch/issues/39965 ghstack-source-id: 105891281 Test Plan: buck test mode/opt-asan //caffe2/test:jit -- 'test_vae_quantized \(jit\.test_models\.TestModels\)' buck test mode/dev-nosan //caffe2/test/distributed/rpc/:rpc_fork Differential Revision: D5696583 fbshipit-source-id: 19570414dc833c38fcd1ad38d2f0a816dbf51743

view details

Ivan Kobzarev

commit sha 84d8a42fdb4460a991499d16b6c3e4688d7594ef

[android] Remove android fbjni subproject (#39691) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39691 After switching on using fbjni-java-only dependency, we do not need to have gradle subproject fbjni. Test Plan: Imported from OSS Differential Revision: D22054575 Pulled By: IvanKobzarev fbshipit-source-id: 331478a57dd0d0aa06a5ce96278b6c897cb0ac78

view details

Xiang Gao

commit sha eb358f49c2283e1b6683734f5ac4791766e5a245

Overload complex math functions on both :: and std:: (#39829) Summary: Because ROCm has bug on std:: functions. Pull Request resolved: https://github.com/pytorch/pytorch/pull/39829 Differential Revision: D22018430 Pulled By: anjali411 fbshipit-source-id: 671e158d3e3342394d1deaebd7ff011cce94c31a

view details

Jerry Zhang

commit sha f37b8e73f46dc46e19117f41beaf6b022897e34f

[quant][graphmode] Support prim:TupleUnpack and prim::TupleConstruct (#39895) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39895 Test Plan: Imported from OSS Differential Revision: D22009854 fbshipit-source-id: a5dab2b4f943e5e047ba9e8573088adf66f5da6b

view details

Shihao Xu

commit sha 00651b8c93354da13ab30c3fefd8cd529dc883e7

[distribtued.nn] Implement TorchScript-compatible RemoteModule API (#37139) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37139 See design doc in https://github.com/pytorch/pytorch/issues/37136 ghstack-source-id: 105926270 Test Plan: TODO: - Make the generated Interface usable. https://github.com/pytorch/pytorch/pull/37139#discussion_r434190978 - - Avoid generating the same template instances for Module that is not scriptable. - Remove "infer_module_interface_cls". - Use Python format instead of a CodeTemplate - Use Python tempfile to track and delete file. Does it work if there is crash. ``` buck test mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \ buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_scripted_remote_module_template buck build mode/dev-nosan //caffe2/test/distributed/nn/jit:test_instantiator && \ buck-out/gen/caffe2/test/distributed/nn/jit/test_instantiator\#binary.par -r test_instantiate_non_scripted_remote_module_template ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_spawn ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \ buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \ buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_async_script buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \ buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_sync_script buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \ 
buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_forward_with_kwargs buck build mode/dev-nosan //caffe2/test/distributed/nn/api:remote_module_fork && \ buck-out/gen/caffe2/test/distributed/nn/api/remote_module_fork\#binary.par -r test_user_provided_global_unique_name ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork ``` buck test mode/opt-asan //caffe2/test:jit -- 'test_script_forward_method_replacement buck build mode/dev-nosan //caffe2/test:jit && \ buck-out/gen/caffe2/test/jit\#binary.par -r 'test_script_forward_method_replacement' buck build mode/dev-nosan //caffe2/test:jit && \ buck-out/gen/caffe2/test/jit\#binary.par -r 'test_imported_classes' Differential Revision: D20499658 fbshipit-source-id: dd9383ae4eb2343366c11127664f845b91ca3b0a

view details

Pavel Belevich

commit sha f13be5fde120d0266a6af143544f2c665a0e5e78

Check if generator has next normal sample cache methods in normal_distribution (#39816) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39816 This change replaces [`#if !defined(__CUDACC__) && !defined(__HIPCC__)`](https://github.com/pytorch/pytorch/blob/856215509d89c935cd1768ce4b496d4fc0e919a6/aten/src/ATen/core/DistributionsHelper.h#L147) with SFINAE expression that checks if RNG typename has next_double_normal_sample, set_next_double_normal_sample, next_float_normal_sample, set_next_float_normal_sample methods It is required by (and manually tested with) https://github.com/pytorch/csprng/pull/28 Fixes #39618 Test Plan: Imported from OSS Differential Revision: D22002599 Pulled By: pbelevich fbshipit-source-id: e33d42a7e88c5729b077b9cdbf1437158dab48bc

view details

push time in 14 days

started tancik/fourier-feature-networks

started time in 16 days

started dalmia/siren

started time in 16 days

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha bab3716819f85cf1b10bc632768a99815feeeef4

formatting

view details

push time in 18 days

push event zalandoresearch/pytorch-ts

Ingmar Schuster

commit sha 5a06d3406f3f008ce195d965290c1420bc989a1e

First go at IndependentDistributionOutput (#16) * First go at IndependentDistributionOutput, subclassed by NormalOutput and NegativeBinomialOutput for now * Multivariate test for new implementation of NormalOutput * adding scaling parameter to NormalOutput * IndependentNormalOutput now is an alias of NormalOutput with a DeprecatedWarning. Some more univariate distributions now inherit from IndependentDistributionOutput * IndependentNormalOutput now is an alias of NormalOutput with a DeprecatedWarning. Some more univariate distributions now inherit from IndependentDistributionOutput

view details

push time in 18 days

issue closed zalandoresearch/pytorch-ts

DeepVAR: name 'SetField' is not defined

Description

I am trying to use DeepVAREstimator from the issue-3 branch, which throws NameError: name 'SetField' is not defined.

To Reproduce

from pts.transform import SetField
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
trainer = Trainer(device = device, epochs = 10) 

estimator = DeepVAREstimator(input_size = 401,
                             freq = "1M", 
                             prediction_length = pred_h,
                             context_length = pred_h*2,
                             target_dim = target_dim,
                             use_feat_static_cat = True,
                             cardinality = card_static,
                             trainer = trainer)                              
predictor = estimator.train(training_data = train_ds)

Error message output

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-27-375c015eb18b> in <module>
     20                              # time_features = feat_dynamic_real_train,
     21                              trainer = trainer)                              
---> 22 predictor = estimator.train(training_data = train_ds)
     23 predictor.__dict__["prediction_net"]

~/miniconda3/envs/pytorchts/lib/python3.7/site-packages/pts/model/estimator.py in train(self, training_data)
    132 
    133     def train(self, training_data: Dataset) -> Predictor:
--> 134         return self.train_model(training_data).predictor

~/miniconda3/envs/pytorchts/lib/python3.7/site-packages/pts/model/estimator.py in train_model(self, training_data)
     98 
     99     def train_model(self, training_data: Dataset) -> TrainOutput:
--> 100         transformation = self.create_transformation()
    101         transformation.estimate(iter(training_data))
    102 

~/miniconda3/envs/pytorchts/lib/python3.7/site-packages/pts/model/deepvar/deepvar_estimator.py in create_transformation(self)
    154                 else []
    155             )
--> 156             + [
    157                 AsNumpyArray(
    158                     field=FieldName.FEAT_STATIC_CAT, expected_ndim=1, dtype=np.long,

NameError: name 'SetField' is not defined

Potential Solution

Include the following import in deepvar_estimator.py:

from pts.transform import (
...
    SetField
)
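A minimal, self-contained illustration (hypothetical function, not the actual pts code) of why the missing import only surfaces when estimator.train() runs: a name referenced inside a function body raises NameError at call time, not at module import time.

```python
# SetField is never imported in this module; defining the function is fine,
# because Python resolves names inside a function body only when it executes.
def create_transformation():
    return [SetField(output_field="feat_static_cat", value=[0])]

try:
    create_transformation()  # the lookup of SetField happens here
except NameError as err:
    print(err)  # name 'SetField' is not defined
```

This matches the traceback above: the error appears deep inside create_transformation() during training, even though deepvar_estimator.py imported cleanly.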

closed time in 18 days

StatMixedML

issue comment zalandoresearch/pytorch-ts

DeepVAR: name 'SetField' is not defined

fixed by 8e9f31eae120d42f7e09ef738542a4d7e73f60e8

StatMixedML

comment created time in 18 days

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 859595d5557be68300c66c5412a6c10163662983

fixed test

view details

push time in 18 days

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 2649874d2a40fe5c9d1a6b54ce23c51baec9655d

fixed typo

view details

push time in 18 days

PR closed zalandoresearch/pytorch-ts

Issue 3

for issue #3

+94 -27

1 comment

2 changed files

kashif

pr closed time in 18 days

pull request comment zalandoresearch/pytorch-ts

Issue 3

fixed by 8e9f31e

kashif

comment created time in 18 days

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha 8e9f31eae120d42f7e09ef738542a4d7e73f60e8

Deepvar cleanup (#15) * initial age feature * added support for feat_static_cat in network * added feature_static_real also concat the log(scale) to the network * fixed typo * FEAT_STATIC_CAT are np.long

view details

push time in 18 days

issue comment awslabs/gluon-ts

Framework agnostic sub-packages

Feel free @jaheba to assign me some todos for this. I would love to help.

jaheba

comment created time in 19 days

started asteroidhouse/INN-exploding-inverses

started time in 19 days

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha e063a64ccc1d24ec380291ec205276d0f435c1c6

CustomDateFeatureSet returns summed array from dates

view details

push time in 20 days

push event kashif/pytorch

Vasiliy Kuznetsov

commit sha 26bc27279314dae81a3bc5f1f3a2e1b4cfb61bc7

quant: clean up normalization channels_last handling (#37802) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37802 * adds test coverage for channels_last input format for quantized normalization ops * fixes quantized group_norm and instance_norm to always return contiguous tensors Test Plan: ``` python test/test_quantization.py TestQuantizedOps.test_group_norm python test/test_quantization.py TestQuantizedOps.test_qlayer_norm python test/test_quantization.py TestQuantizedOps.test_instance_norm ``` Imported from OSS Differential Revision: D21395196 fbshipit-source-id: df55e842fe93ae594a336f1b115faea9ba3c88c1

view details

Vasiliy Kuznetsov

commit sha f9b675f7b6d0c0ebae68b61874256648fa96586b

groupnorm: eager static quant support (#39090) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39090 Makes quantized GroupNorm work in eager mode post training static quant. Test Plan: ``` python test/test_quantization.py TestPostTrainingStatic.test_normalization python test/test_quantization.py TestStaticQuantizedModule.test_group_norm ``` Imported from OSS Differential Revision: D21885262 fbshipit-source-id: 58b0ffb59c601fcb4c79f711c7c98a667ffc6170

view details

Vasiliy Kuznetsov

commit sha 2140874228f7b64a50de5e0d1cb317e4ff051c4e

instancenorm: eager static quant support (#39091) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39091 Adds eager mode static quant support for instancenorm. Test Plan: ``` python test/test_quantization.py TestPostTrainingStatic.test_normalization python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm ``` Imported from OSS Differential Revision: D21885265 fbshipit-source-id: 277506faf108f3561867cd8449a2390b7a44c462

view details

Vasiliy Kuznetsov

commit sha 202625ba9ec16ee80b59ee50bae1aacaf50c4178

groupnorm: eager mode QAT support (#39092) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39092 Adds eager mode QAT support for GroupNorm. Test Plan: ``` python test/test_quantization.py TestQuantizationAwareTraining.test_normalization ``` Imported from OSS Differential Revision: D21885261 fbshipit-source-id: 0352e6a830e6384e7ad747067f8bf8ad64ab7fa8

view details

Vasiliy Kuznetsov

commit sha b530176d10c18e2191a2e72b1a1acbe3c79a9cf3

instancenorm: eager mode QAT support (#39093) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39093 Adds eager mode QAT support for instancenorm Test Plan: ``` python test/test_quantization.py TestQuantizationAwareTraining.test_normalization ``` Imported from OSS Differential Revision: D21885264 fbshipit-source-id: 7753995eed895bad26f713a857c6b0d194ea99d9

view details

Vasiliy Kuznetsov

commit sha 952deba828bfd1941429ca28cceb6471974314bb

layernorm: eager mode qat support (#39094) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39094 Adds eager mode QAT handling for LayerNorm Test Plan: ``` python test/test_quantization.py TestQuantizationAwareTraining.test_normalization ``` Imported from OSS Differential Revision: D21885260 fbshipit-source-id: 4f4c84a8bb8ba15dd78494f92569ed3a30d89169

view details

Vasiliy Kuznetsov

commit sha b443ca26c551c40928a4bb47202b3f19b320493b

groupnorm: graph mode static quant support (#39095) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39095 Hooks up groupnorm to graph mode static quant Test Plan: ``` python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm ``` Imported from OSS Differential Revision: D21885257 fbshipit-source-id: 3415c4de76181b026d2f5bfebab130fea29e1d1e

view details

Vasiliy Kuznetsov

commit sha ebdff07d4910eea464c0808e84861c14b7a3e270

instancenorm: static quant graph mode support (#39096) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39096 Hooks up instancenorm for graph mode static quant Test Plan: ``` python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm ``` Imported from OSS Differential Revision: D21885258 fbshipit-source-id: 650cc5b162dda044866176fea6c345082d9788ed

view details

Xiaomeng Yang

commit sha 614dd03272c23693d0b720d6374c0be19beca313

Optimize GroupNorm on CUDA (#28204) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28204 Optimize GroupNorm on CUDA ghstack-source-id: 105388365 Test Plan: buck test mode/dev-nosan caffe2/test:nn -- "GroupNorm" Reviewed By: houseroad Differential Revision: D17923732 fbshipit-source-id: 9afaf01288bd9d273eed89909bff77243df89e9f

view details

Linbin Yu

commit sha 8177637374bd57d007dc18c73f3d0a6eb1b2d289

remove duplicated op schema for aten::pow (#39606) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39606 Removed duplicated schema for aten::pow Test Plan: Previously there are many duplicated aten::pow ``` aten::pow.int(int a, int b) -> (float) aten::pow.float(float a, float b) -> (float) aten::pow.int_float(int a, float b) -> (float) aten::pow.float_int(float a, int b) -> (float) aten::pow(Scalar a, Scalar b) -> (float) aten::pow.int(int a, int b) -> (int) // duplicated name! aten::pow.float(float a, float b) -> (float) // duplicated schema! aten::pow.int_float(int a, float b) -> (float) // duplicated schema! aten::pow.float_int(float a, int b) -> (float) // duplicated schema! aten::pow(Scalar a, Scalar b) -> (Scalar) // duplicated name! ``` After this diff, there are only 7 ops with different overload name: ``` aten::pow.int(int a, int b) -> (float) aten::pow.float(float a, float b) -> (float) aten::pow.int_float(int a, float b) -> (float) aten::pow.float_int(float a, int b) -> (float) aten::pow(Scalar a, Scalar b) -> (float) aten::pow.Scalar(Scalar a, Scalar b) -> (Scalar) aten::pow.int_to_int(int a, int b) -> (int) ``` Reviewed By: iseeyuan Differential Revision: D21914441 fbshipit-source-id: 1e82c83c77d22206046276bbb52a65088c58ed33

view details

Linbin Yu

commit sha b06b792bbd3bcd1dff66d9708f8dcdbc7db83899

remove double registered ops (#39609) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39609 These two ops are registered twice in the same file duplicated op: aten::_infer_size(int[] a, int[] b) -> (int[]) duplicated op: aten::_no_grad_embedding_renorm_(Tensor weight, Tensor input, float max_norm, float norm_type) -> (Tensor) Test Plan: compile Reviewed By: iseeyuan Differential Revision: D21915104 fbshipit-source-id: e0147c76e3c84c02952927a7e158ccb92449640c

view details

peter

commit sha ee2bc13f446ea817b97a9022edc7c112a65cf079

Fix smoke test jobs (#39638) Summary: Fixes https://github.com/pytorch/pytorch/issues/39626. Pull Request resolved: https://github.com/pytorch/pytorch/pull/39638 Differential Revision: D21924224 Pulled By: ezyang fbshipit-source-id: 8da75e401bfbff5e11ceeccefd77d0fad81356e4

view details

Zafar Takhirov

commit sha 1db4a31d92431d1ea91d883dcb4559583f321b91

[quant] QNNPACK deconvolution packing (#37405) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37405 Test Plan: Imported from OSS Differential Revision: D21301246 fbshipit-source-id: be72e777a211d414d40e2912dbc2e0ec640c6b32

view details

Wanchao Liang

commit sha 6c56671fd96ac71346c818d0a5ea8087575e9cba

[jit] avoid pre-convert tensor to cpu in pickling (#38898) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38898 Pickling will pickle the tensor meta info, and its up to the jit exporter or other upstream who use the pickler to decide how to write the actual tensor data. This PR make we call getWritableTensorData in upper level so that rpc and TensorPipe can leverge it with only pickling tensor meta data without converting the tensor from GPU to CPU. Test Plan: Imported from OSS Differential Revision: D21879866 Pulled By: wanchaol fbshipit-source-id: 75f7ff4073e4ad15b6588973dcbdc48f97a8329f

view details

Zafar Takhirov

commit sha 172f31171a3395cc299044e06a9665fec676ddd6

[quant] QNNPACK deconv kernel and tests (#36790) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36790 Test Plan: Imported from OSS Differential Revision: D21110111 fbshipit-source-id: 548df3a9853ad33d21d279393b91d1691050d4c4

view details

Jeremy Lilley

commit sha b83fed8d4cc6c716629021e70a5df19e675d1517

[futures] Add c++ ivalue::Future collectAll() helper (#39119) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39119 Add some base c++ unittest coverage for ivalue::Future, and in the process, add a basic collectAll() primitive, per 38937. In the process, I realized that List<Future> is effectively impossible to construct (since the Future's type is not templated, but rather passed in, the getTypePtr_<T>::call() isn't defined), so added a workaround in List to make it possible. ghstack-source-id: 105309650 Test Plan: buck test mode/dev-nosan caffe2/test/cpp/jit/... Differential Revision: D21756884 fbshipit-source-id: 5d40c8d1c55098de5497655c7b887f4f56508a37

view details

Linbin Yu

commit sha 820e81ba09df46ed89f803f587b53a220a8a4757

add overload name for min/max with list input (#39614) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39614 add overload name to differentiate prim::min.int(int a, int b) -> (int) prim::min.int(int[] l, int[] r) -> (int[]) Test Plan: verified op names for aten::min and aten::max are different before ``` prim::min.int(int a, int b) -> (int) prim::min.float(float a, float b) -> (float) prim::min.int_float(int a, float b) -> (float) prim::min.float_int(float a, int b) -> (float) prim::min(Scalar a, Scalar b) -> (Scalar) prim::max.int(int a, int b) -> (int) prim::max.float(float a, float b) -> (float) prim::max.int_float(int a, float b) -> (float) prim::max.float_int(float a, int b) -> (float) prim::max(Scalar a, Scalar b) -> (Scalar) prim::min.int(int[] l, int[] r) -> (int[]) prim::max.int(int[] l, int[] r) -> (int[]) prim::min.self_int(int[] self) -> (int) prim::max.self_int(int[] self) -> (int) prim::min.float(float[] l, float[] r) -> (float[]) prim::max.float(float[] l, float[] r) -> (float[]) prim::min.self_float(float[] self) -> (float) prim::max.self_float(float[] self) -> (float) prim::min.bool(bool[] l, bool[] r) -> (bool[]) prim::max.bool(bool[] l, bool[] r) -> (bool[]) prim::min.self_bool(bool[] self) -> (bool) prim::max.self_bool(bool[] self) -> (bool) ``` after ``` prim::min.int(int a, int b) -> (int) prim::min.float(float a, float b) -> (float) prim::min.int_float(int a, float b) -> (float) prim::min.float_int(float a, int b) -> (float) prim::min(Scalar a, Scalar b) -> (Scalar) prim::max.int(int a, int b) -> (int) prim::max.float(float a, float b) -> (float) prim::max.int_float(int a, float b) -> (float) prim::max.float_int(float a, int b) -> (float) prim::max(Scalar a, Scalar b) -> (Scalar) prim::min.int_list(int[] l, int[] r) -> (int[]) prim::max.int_list(int[] l, int[] r) -> (int[]) prim::min.self_int(int[] self) -> (int) prim::max.self_int(int[] self) -> (int) prim::min.float_list(float[] l, float[] r) -> (float[]) 
prim::max.float_list(float[] l, float[] r) -> (float[]) prim::min.self_float(float[] self) -> (float) prim::max.self_float(float[] self) -> (float) prim::min.bool_list(bool[] l, bool[] r) -> (bool[]) prim::max.bool_list(bool[] l, bool[] r) -> (bool[]) prim::min.self_bool(bool[] self) -> (bool) prim::max.self_bool(bool[] self) -> (bool) ``` Reviewed By: iseeyuan Differential Revision: D21914844 fbshipit-source-id: f1792a8c3b3ed6d1a4ba9705c4504f15e3665126

view details

William Gan

commit sha e41fe6086745387b3e1aa1ae6e05fe6037a06a57

Add error message when negative stride is passed to as_strided (#39508) Summary: Fixes this issue https://github.com/pytorch/pytorch/issues/33290. Builds upon this PR https://github.com/pytorch/pytorch/pull/33392. Pull Request resolved: https://github.com/pytorch/pytorch/pull/39508 Differential Revision: D21890557 Pulled By: zou3519 fbshipit-source-id: 8e1a9afb064a6e19551bf3ede3103dd3f023c660

view details

Edward Yang

commit sha a83f7a1d705d78ae205099d31c7f3d7227108777

Revert D17923732: Optimize GroupNorm on CUDA Test Plan: revert-hammer Differential Revision: D17923732 Original commit changeset: 9afaf01288bd fbshipit-source-id: 53bd5db7d61e5eda8d7953d7f6321e54321d7ac2

view details

Gregory Chanan

commit sha e3e8f24cbe30c1cc129f3add5122825d0296ab1c

Remove duplicate 'with_gil' declaration. (#39540) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39540 This gets picked up by mypy as an error in 1.5.1, not sure if it's a different version or setting, but might as well fix. Test Plan: Imported from OSS Differential Revision: D21891772 Pulled By: gchanan fbshipit-source-id: 6f95bcd0652007323cd0c79070425b64e0b71c55

view details

push time in 21 days

started vwxyzjn/cleanrl

started time in 24 days

push event kashif/firedup

Kashif Rasul

commit sha f15e8f5189a9359035f522444360e042af61eccb

fix typos

view details

push time in a month

issue comment awslabs/gluon-ts

Enable different tensor types in batches

Thanks for looking into this. On the PyTorch side, though, it would be ideal to keep using the DataLoader, which returns batches from a map-style or iterable-style Dataset. The DataLoader also supports worker processes etc., and other libraries like PyTorch Lightning work with this abstraction. On the TensorFlow side I would need to check what tf.data expects and returns.
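The DataLoader pattern mentioned above can be sketched without torch at all. Here is a stdlib-only stand-in (all names hypothetical) for what torch.utils.data.DataLoader does with an iterable-style dataset: pull items from a possibly infinite stream and group them into fixed-size batches via a collate function.

```python
from itertools import islice

def time_series_stream():
    # stand-in for a GluonTS-style dataset iterator (made-up data)
    i = 0
    while True:
        yield {"target": [float(i), float(i + 1)], "start": i}
        i += 1

def collate(items):
    # stack per-item fields into batch-level lists; a real DataLoader
    # collate_fn would stack them into tensors instead
    return {"target": [it["target"] for it in items],
            "start": [it["start"] for it in items]}

def batch_iter(stream, batch_size):
    # group the stream into batches of batch_size, like a DataLoader
    while True:
        chunk = list(islice(stream, batch_size))
        if not chunk:
            return
        yield collate(chunk)

batches = batch_iter(time_series_stream(), batch_size=4)
first = next(batches)
print(len(first["target"]))  # 4
```

With torch, the same shape of code falls out of subclassing torch.utils.data.IterableDataset and handing it to DataLoader, which adds worker processes and pinned-memory transfer on top.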

lostella

comment created time in a month

started uber/neuropod

started time in a month

started hmdolatabadi/LRS_NF

started time in a month

push event kashif/pytorch

mattip

commit sha f10fbcc820d1507121ed466f3dffc728ef559c5c

Split up documentation into subpages and clean up some warnings (#37419) Summary: xref gh-32838, gh-34032 This is a major refactor of parts of the documentation to split it up using sphinx's `autosummary` feature which will build out `autofuction` and `autoclass` stub files and link to them. The end result is that the top module pages like torch.nn.rst and torch.rst are now more like table-of-contents to the actual single-class or single-function documentations pages. Along the way, I modified many of the docstrings to eliminate sphinx warnings when building. I think the only thing I changed from a non-documentation perspective is to add names to `__all__` when adding them to `globals()` in `torch.__init__.py` I do not know the CI system: are the documentation build artifacts available after the build, so reviewers can preview before merging? Pull Request resolved: https://github.com/pytorch/pytorch/pull/37419 Differential Revision: D21337640 Pulled By: ezyang fbshipit-source-id: d4ad198780c3ae7a96a9f22651e00ff2d31a0c0f

view details

Grigory Arutyunov

commit sha aa54f58041159b07563c90792f299835c19b76f2

LoopOptions::gpu_block_index(): bool -> int (#37578) Summary: Small change to allow MSVC build pass. The error is ``` D:\pytorch-scripts\caffe2_builders\v141\pytorch\torch/csrc/jit/tensorexpr/stmt.h(370): error C4805: '!=': unsafe mix of type 'bool' and type 'int' in operation (compiling source file D:\pytorch-scripts\caffe2_builders\v141\pytorch\torch \csrc\jit\passes\tensorexpr_fuser.cpp) [D:\pytorch-scripts\caffe2_builders\v141\pytorch\build\RelWithDebInfo\caffe2\tor ch_cpu.vcxproj] ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/37578 Differential Revision: D21348964 Pulled By: ezyang fbshipit-source-id: 2c5f995e0adbeb681c18625b59250d7ee3e958ef

view details

Jongsoo Park

commit sha 52169170227230c5742f2ecb9970e8e95184526b

[caffe2/dnnlowp] documentation for pack operator arguments (#37719) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37719 As title Test Plan: Just updating doc Reviewed By: hyuen Differential Revision: D21369227 fbshipit-source-id: a45e5d0fa34aea8046eb4bb83e6c4df4d2654252

view details

Michael Suo

commit sha b7f258bbd3a480d38511ead0bf42413b23d476ae

add fmt to libtorch_python.so (#37560) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37560 Test Plan: Imported from OSS Differential Revision: D21320059 Pulled By: suo fbshipit-source-id: 95cfe7cf26c515fdfcb4621cc58266d838a38a3e

view details

Shen Li

commit sha dbcfd62a1cfdb7603576a7adafc2255c9241a809

Remove unnecessary pickle and unpickle invocation in PyRRef __setstate__/__getstate__ methods (#37638) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37638 Test Plan: Imported from OSS Differential Revision: D21343280 Pulled By: mrshenli fbshipit-source-id: da462fee5815dc74c7f2dc3161699e461bc7d7d3

view details

Gao, Xiang

commit sha e6221f4ca16e7d9fa9179db625d8dea7ea6af02b

Remove std::complex from TypeMeta (#37632) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37632 Differential Revision: D21362056 Pulled By: anjali411 fbshipit-source-id: b20506a36594ad8485ba8ef31d2d8a83ff0862f2

view details

Pavel Belevich

commit sha 812a3fa03d1f0399e3a49ba42892a52feae92aea

Show warning if Tensor.random_()'s from and to are not in [-(2^digits), 2^digits] bounds for floating-point types (#37537) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37537 The documentation states that `random_()` samples "from the discrete uniform distribution". Floating-point types can support _discrete_ _uniform_ distribution only within range [-(2^digits), 2^digits], where `digits = std::numeric_limits<fp_type>::digits`, or - [-(2^53), 2^53] for double - [-(2^24), 2^24] for double - [-(2^11), 2^11] for half - [-(2^8), 2^8] for bfloat16 The worst scenario is when the floating-point type can not represent numbers between `from` and `to`. E.g. ``` torch.empty(10, dtype=torch.float).random_(16777217, 16777218) tensor([16777216., 16777216., 16777216., 16777216., 16777216., 16777216., 16777216., 16777216., 16777216., 16777216.]) ``` Because 16777217 can not be represented in float Test Plan: Imported from OSS Differential Revision: D21380387 Pulled By: pbelevich fbshipit-source-id: 80d77a5b592fff9ab35155a63045b71dcc8db2fd

view details
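The float32 representability issue described in the commit above is easy to demonstrate from plain Python via a round-trip through a 32-bit float (this sketch is ours, not part of the patch): float32 has a 24-bit significand, so not every integer above 2**24 = 16777216 is representable.

```python
import struct

def to_float32(x):
    # round-trip a Python float (double) through IEEE 754 binary32
    return struct.unpack("f", struct.pack("f", x))[0]

print(to_float32(16777216.0))  # 16777216.0
print(to_float32(16777217.0))  # 16777216.0 (2**24 + 1 is not representable)
```

This is why random_(16777217, 16777218) on a float tensor can only ever produce 16777216.0, motivating the warning the commit adds.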

Jianyu Huang

commit sha fd05debbcd56ebd9017b001e276096ce28ec6db9

[TS][easy] Typo Fix (#37773) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37773 As Title says ghstack-source-id: 103385174 Test Plan: CI Reviewed By: dmudiger Differential Revision: D21374951 fbshipit-source-id: a2fc48b931f0cecbc8a995bf4b4ace30a8eb0d70

view details

Michael Suo

commit sha 20f7e62b1dc6fbfb71b4302d0c2e214403d41b44

Revert D21337640: [pytorch][PR] Split up documentation into subpages and clean up some warnings Test Plan: revert-hammer Differential Revision: D21337640 Original commit changeset: d4ad198780c3 fbshipit-source-id: fa9ba6ac542173a50bdb45bfa12f3fec0ed704fb

view details

Nikita Shulga

commit sha c0ff08577570c23694eb0e8b8750564847daccc2

[PyTorch] Modify `data_parallel` to work with small tensors (#37704) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37704 If input tensor can not be chunked, run `parallel_apply` on fewer devices Modfy input tensor dimention in `DataParallelUsesAllAvailableCUDADevices_CUDA` to be chunkable by any number of available CUDA devices Test Plan: Run `test/cpp/api/parallel` on machine with 6 GPUs Differential Revision: D21365416 fbshipit-source-id: 60fdfed4a0e6256b2c966c2ea3e8d0bfb298d9a8

view details

Xiang Gao

commit sha 1bac49f075a22705b8ecd6a4331d59fe62eca879

Migrate item() to c10::complex (#37648) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37648 Test Plan: Imported from OSS Differential Revision: D21382318 Pulled By: anjali411 fbshipit-source-id: c1d3da43f118f18739bb34906f76a5bad097c905

view details

peng

commit sha 6dd1beaaa8b8ba7854344718e97e9240c4db8120

To fix caffe2 model with Copy OP cannot export to onnx model (#37144) Summary: To fix caffe2 model with Copy OP cannot export to onnx model Pull Request resolved: https://github.com/pytorch/pytorch/pull/37144 Reviewed By: houseroad Differential Revision: D21252421 Pulled By: yinghai fbshipit-source-id: 4f1077188f36b0691d199e418880bbb27f11032d

view details

Edward Yang

commit sha efd8f70cac01122630d561670c4e40aaf3a6f208

Make msg() and msg_with_backtrace() private (#37094) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37094 Signed-off-by: Edward Z. Yang <ezyang@fb.com> Test Plan: Imported from OSS Differential Revision: D21202892 Pulled By: ezyang fbshipit-source-id: d59e6bffabd90cc734056bdce2cd1fe63262fab8

view details

Edward Yang

commit sha a058e938f944b6834b5d237735d715a79974b656

Refactor error msg stack handling, add TORCH_RETHROW (#37101) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37101 Fixes #36954. The basic concept is to streamline the process of rethrowing c10::Error with extra error information. This is in a few steps: - I completely remodeled the Error data type and the internal invariants. Instead of manually adding in newlines, the message stack formatting process is responsible for inserting newlines and spacing as necessary. Call sites are then modified to respect the new API model. - TORCH_RETHROW macro is added, which adds context to an error message and then rethrows it. New internal assert failure looks like: ``` 0 INTERNAL ASSERT FAILED at ../c10/test/util/exception_test.cpp:64, please report a bug to PyTorch. Exception raised from TestBody at ../c10/test/util/exception_test.cpp:64 (most recent call first): frame #0: <unknown function> + 0x6aab9 (0x7ff611d3aab9 in /data/users/ezyang/pytorch-tmp/build/lib/libc10.so) frame #1: ... ``` Error message with context looks like: ``` This is an error This is context 1 This is context 2 ``` Signed-off-by: Edward Z. Yang <ezyang@fb.com> Test Plan: Imported from OSS Differential Revision: D21202891 Pulled By: ezyang fbshipit-source-id: 361cadd16bc52e5886dba08e79277771ada76169

view details

Bram Wasti

commit sha 77dd00c85007cd1e86cce8437d62bc9d60e7a0ad

Permit registration of multiple triggers, but insert warning (#37772) Summary: If linking the same file multiple times, the trigger check becomes severe and crashes execution at startup. Pull Request resolved: https://github.com/pytorch/pytorch/pull/37772 Differential Revision: D21384072 Pulled By: bwasti fbshipit-source-id: 3396e69cd361f65e50517970d23497804c76023e

view details

Supriya Rao

commit sha a6aa336cc2e75ae8d3dc29084d6d3e256501ae9e

[quant][graph] Fix bug in replaceConvolutionWithConv2d (#37635) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37635 replaceConvolutionWithConv2d incorrectly assumes that the size of padding is 2. For Conv1d it is 1, in which case we cannot replace with aten::conv2d Test Plan: Imported from OSS Differential Revision: D21354930 fbshipit-source-id: a2dbad856666b4bbb2d9015ade8e1704774f20dd

view details

Supriya Rao

commit sha fe8fdb775f36a99b91c64c21eda16104a19552fa

[quant][graph] Fix bug in replicateDequant (#37637) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37637 Insert dequant op at specific offset, rather than for all inputs of user Test Plan: python test/test_quantization.py Imported from OSS Differential Revision: D21354931 fbshipit-source-id: 79a1dc63b0ed96c3d51d569116ed963106085d3b

view details

Nikolay Korovaiko

commit sha 4cdaa5956cce965648c7c9fcd9dcf9d0e8231e6c

capitalize fuseTensorExpr (#37780) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37780 Differential Revision: D21386092 Pulled By: Krovatkin fbshipit-source-id: c190f891fe25b3cee9a34b5173756c39efd49c66

view details

Gregory Chanan

commit sha e38d7591a77b3dc5d0cfe385d93a9e56620a44f3

Move broadcasting code for fmod, fmod_ from codegen layer. (#37545) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37545 Test Plan: Imported from OSS Differential Revision: D21315036 Pulled By: gchanan fbshipit-source-id: cbe82205dc71c2a704d717a5f82827fc6ff5106c

view details

Gregory Chanan

commit sha 73aa49d5293388d6b20588742fd04312fb4f8e0f

Move addr broadcasting from codegen layer to native layer. (#37546) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37546 Test Plan: Imported from OSS Differential Revision: D21315040 Pulled By: gchanan fbshipit-source-id: 1bba97bd889ec286e3e7f1d0f0450871b996c9ae

view details

push time in a month

push event kashif/pytorch

Jerry Zhang

commit sha 54aac4af1fadc55870e4fa8c638d9619025ad9d1

Update hypothesis_utils.py (#33739) Summary: A typo.. Pull Request resolved: https://github.com/pytorch/pytorch/pull/33739 Differential Revision: D20088096 Pulled By: jerryzh168 fbshipit-source-id: d8b5d263c25f8c779698607be87bf76aca1811ab

view details

Sameer Deshmukh

commit sha d6ea4be1534a679a2f0f58233c9ef19f8d5b96a6

Fix minor problems in index_put_ docs (#33689) Summary: Fix for https://github.com/pytorch/pytorch/issues/33641 Pull Request resolved: https://github.com/pytorch/pytorch/pull/33689 Differential Revision: D20086967 Pulled By: ngimel fbshipit-source-id: d9dde8edb904de1cf56b9337920cb29e008b72fb

view details

Jeong Ukjae

commit sha 3cf97bc23c741606c04f0ef9515fad1b73743485

Fix typing error of torch/nn/modules/container.pyi.in (#33686) Summary: * `Sequential` has `__iter__` method, but type stub doesn't * `ModuleList.__getitem__` returns `Module`, but type stub doesn't * Type stub says `ParameterList` has `insert` method, but actual `ParameterList` doesn't * `ParameterDict.__getitem__` should returns `Parameter` * `ParameterList` and `ParameterDict` have `extra_repr` methods --- torch/nn/modules/container.py: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/container.py torch/nn/modules/container.pyi.in: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/container.pyi.in Pull Request resolved: https://github.com/pytorch/pytorch/pull/33686 Differential Revision: D20086730 Pulled By: ngimel fbshipit-source-id: a8271489417461c67ff84a239c4cd96c3aa17b5c

view details

Natalia Gimelshein

commit sha a9cef05f5d6ca2a74f993feb191531a03e445d40

improve EmbeddingBag performance on cuda (#33589) Summary: This PR improves performance of EmbeddingBag on cuda by removing 5 kernel launches (2 of those are synchronizing memcopies). - 2 memcopies are checking values of offsets[0] and offsets[-1] to be in expected range (0 for the former, less than number of indices for the latter). It seems strange to check only those 2 values, if users are providing invalid offsets, invalid values can be anywhere in the array, not only the first and last element. After this PR, the checks are skipped on cuda, the first value is forced to 0, if the last value is larger than expected, cuda kernel will assert. It is less nice than ValueError, but then again, the kernel could have asserted if other offset values were invalid. On the cpu, the checks are moved inside the cpu implementation from functional.py, and will throw RuntimeError instead of ValueError. - 3 or 4 initializations (depending on the mode) of the output tensors with .zeros() are unnecessary, because every element of those tensors is written to, so their data can be uninitialized on the start. Pull Request resolved: https://github.com/pytorch/pytorch/pull/33589 Reviewed By: jianyuh Differential Revision: D20078011 Pulled By: ngimel fbshipit-source-id: 2fb2e2080313af64adc5cf1b9fc6ffbdc6efaf16

view details

Shihao Xu

commit sha a1862468d094a80914a6f889d9b1dde4951b7720

Add missing test launchers for JitRpcTest and JitDistAutogradTest (#32891) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32891 - Add JitDistAutoGradTest into fork/spawn test launcher - Add JitRpcTest into fork/spawn test launcher ghstack-source-id: 98900090 Test Plan: ``` buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_fork buck test mode/dev-nosan //caffe2/test/distributed/rpc:rpc_spawn ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/rpc:dist_autograd_fork buck test mode/dev-nosan //caffe2/test/distributed/rpc:dist_autograd_spawn ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork_thrift buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_spawn buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_spawn_thrift ``` ``` buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:dist_autograd_fork buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:dist_autograd_fork_thrift buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:dist_autograd_spawn buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:dist_autograd_spawn_thrift ``` Differential Revision: D5785394 fbshipit-source-id: 335a85424d22f1a83874be81a8139499c9a68ce2

view details

Jerry Zhang

commit sha 7caf3c396ba457fc1fb652d5e0bf121c2711721b

[quant][graphmode][refactor] Change signature of getModuleAccessPath (#32812) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32812 We'll error out for the case we can't handle inside the function, instead of checking each time in the callsite Test Plan: . Imported from OSS Differential Revision: D20087846 fbshipit-source-id: ae6d33a94adf29c4df86d67783e7ef8753c91f90

view details

peter

commit sha 2a4aad746623018163d943bad9503de99b599186

Don't activate vc env again for cuda with ninja on Windows (#33700) Summary: Possibly get rid of https://github.com/pytorch/pytorch/issues/28271, https://github.com/pytorch/pytorch/issues/27463 and https://github.com/pytorch/pytorch/issues/25393. Pull Request resolved: https://github.com/pytorch/pytorch/pull/33700 Differential Revision: D20089251 Pulled By: ezyang fbshipit-source-id: 0cfe62b869fb874e25f06894aa76fadc44cf6817

view details

Ashkan Aliabadi

commit sha 6aecfd1e804f2be68df6557c43adbe1f2234fe91

Mobile Backend: NHWC memory layout + XNNPACK integration. (#33722) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33722 In order to improve CPU performance on floating-point models on mobile, this PR introduces a new CPU backend for mobile that implements the most common mobile operators with NHWC memory layout support through integration with XNNPACK. XNNPACK itself, and this codepath, are currently only included in the build, but the actual integration is gated with USE_XNNPACK preprocessor guards. This preprocessor symbol is intentionally not passed on to the compiler, so as to enable this rollout in multiple stages in follow up PRs. This changeset will build XNNPACK as part of the build if the identically named USE_XNNPACK CMAKE variable, defaulted to ON, is enabled, but will not actually expose or enable this code path in any other way. Furthermore, it is worth pointing out that in order to efficiently map models to these operators, some front-end method of exposing this backend to the user is needed. The less efficient implementation would be to hook these operators into their corresponding native implementations, granted that a series of XNNPACK-specific conditions are met, much like how NNPACK is integrated with PyTorch today for instance. Having said that, while the above implementation is still expected to outperform NNPACK based on the benchmarks I ran, the above integration would be leave a considerable gap between the performance achieved and the maximum performance potential XNNPACK enables, as it does not provide a way to compute and factor out one-time operations out of the inner most forward() loop. The more optimal solution, and one we will decide on soon, would involve either providing a JIT pass that maps nn operators onto these newly introduced operators, while allowing one-time calculations to be factored out, much like quantized mobile models. 
Alternatively, new eager-mode modules can also be introduced that would directly call into these implementations either through c10 or some other mechanism, also allowing for decoupling of op creation from op execution. This PR does not include any of the front end changes mentioned above. Neither does it include the mobile threadpool unification present in the original https://github.com/pytorch/pytorch/issues/30644. Furthermore, this codepath seems to be faster than NNPACK in a good number of use cases, which can potentially allow us to remove NNPACK from aten to make the codebase a little simpler, granted that there is widespread support for such a move. Regardless, these changes will be introduced gradually and in a more controlled way in subsequent PRs. Pull Request resolved: https://github.com/pytorch/pytorch/pull/32509 Test Plan: Build: CI Functionality: Not exposed Reviewed By: dreiss Differential Revision: D20069796 Pulled By: AshkanAliabadi fbshipit-source-id: d46c1c91d4bea91979ea5bd46971ced5417d309c

view details

Will Feng

commit sha 36919278cc82cac952a196c094324c9dcb710214

C++ tensor multi-dim indexing: add index() and index_put_() overloads, simple indexing tests, merge with Python indexing path (#32841) Summary: This PR adds the following items: - **1st item**: `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads for `Tensor::index` and `Tensor::index_put_`, to be used specifically for multi-dim indexing purpose. Design rationale: * C++ `Tensor::index` and `Tensor::index_put_` are both existing tensor APIs, and they currently (before this PR) only accept a list of tensors (i.e. `ArrayRef<Tensor>`) as indices. If we change their signatures to also accept non-tensors as indices (i.e. `ArrayRef<TensorIndex>`, and `TensorIndex` is convertible from `Tensor` / `Slice` / `None` / `Ellipsis`), it would slow down the original code path (since now it has to go through more steps), which is undesirable. To get around this problem, the proposed solution is to keep the original `ArrayRef<Tensor>` overload, and add `ArrayRef<TensorIndex>` and `std::initializer_list<TensorIndex>` overloads to `Tensor::index` and `Tensor::index_put_`. This way, the original code path won’t be affected, and the tensor multi-dim indexing API is only used when the user explicitly pass an `ArrayRef<TensorIndex>` or a braced-init-list of `TensorIndex`-convertible types to `Tensor::index` and `Tensor::index_put_` . Note that the above proposed solution would still affect perf for the user’s original `Tensor::index` or `Tensor::index_put_` call sites that use a braced-init-list of tensors as input, e.g. `tensor.index({...})` or `tensor.index_put_({...}, value)`, since now such function calls would take the multi-dim indexing path instead of the original advanced indexing path. However, there are only two instances of this in our codebase (one in ATen cpp test, one in a C++ API nn init function), and they can be easily changed to explicitly use `ArrayRef<Tensor>` as input (I changed them in this PR). 
For external user’s code, since this is part of the C++ frontend which is still considered experimental, we will only talk about this change in the release note, and ask users to switch to using `ArrayRef<Tensor>` explicitly if they want to keep using the original advanced indexing code path. - **2nd item**: Mechanisms for parsing `ArrayRef<TensorIndex>` indices and performing indexing operations (mirroring the functions in `torch/csrc/autograd/python_variable_indexing.cpp`). - **3rd item**: Simple tests to demonstrate that the `Tensor::index()` and `Tensor::index_put_()` APIs work. I will add more tests after the first few PRs are reviewed. - **4th item**: Merge Python/C++ indexing code paths, for code simplicity. I tested locally and found that there is no perf regression resulting from the merge. I will get more concrete numbers for common use cases when we settle on the overall design. This PR supersedes https://github.com/pytorch/pytorch/pull/30425. Pull Request resolved: https://github.com/pytorch/pytorch/pull/32841 Differential Revision: D19919692 Pulled By: yf225 fbshipit-source-id: 7467e64f97fc0e407624809dd183c95ea16b1482

view details

peter

commit sha adbe2898706982cd04be469116db0d39e162f3f1

Update MKL to 2020.0.166 for Windows (#33690) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33690 Differential Revision: D20089300 Pulled By: ezyang fbshipit-source-id: 887c006fbdb2c837f0a1c607a196811f44f1fb35

view details

Andrey Malevich

commit sha 65864d3634d9afd16acfee4ded5d01d747f11973

[C2] Small improvement for elementwise_mul operator. (#33537) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33537 Cases of embeddings smaller than 128, we can get a bit more compute by allocating less threads per block. Test Plan: Unit-test, benchmark. Reviewed By: xianjiec Differential Revision: D19969594 fbshipit-source-id: 6cc6b14fc61302804bed9093ea3591f21e3827d8

view details

Andrey Malevich

commit sha 4460c8b034f8fd544ff9c271c4aa21698644d352

[C2] Tiny changes to adagrad to make it slightly better. (#33727) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33727 Some small changes to adagrad (tiny bit faster, though there is more interesting diff in the stack on this). Test Plan: Part of the stack Reviewed By: chocjy Differential Revision: D20029499 fbshipit-source-id: 7f4fddb9288d7881ef54673b17a0e19ef10d64c0

view details

Gerard Goossen

commit sha 7a8b6c2c6bc0057314e0483c689fc4f24fb45808

[pytorch] blas gemm fix for k=0 (#33419) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33419 These conditions are for the specific implementation; the fallback implementation works without these checks. So use that if any of these checks isn't true. ghstack-source-id: 98836075 Test Plan: Previously got an error for the special case where k=0, which is now gone. The error was in some complicated autograd, and I'm not sure how and where a simple regression test should be added. Differential Revision: D19941103 fbshipit-source-id: e1c85d1e75744b1c51ad9b71c7b3211af3c5bcc6

view details

Tongzhou Wang

commit sha 4ef854b4b4108fa03d9d5081d4bf25bdf0717252

Fix potential hang when exiting main process (#33721) Summary: The following script reproduces the hang ```py import multiprocessing, logging logger = multiprocessing.log_to_stderr() logger.setLevel(multiprocessing.SUBDEBUG) import torch class Dataset: def __len__(self): return 23425 def __getitem__(self, idx): return torch.randn(3, 128, 128), idx % 100 ds = Dataset() trdl = torch.utils.data.DataLoader(ds, batch_size=64, num_workers=300, pin_memory=True, shuffle=True) for e in range(1000): for ii, (x, y) in enumerate(trdl): print(f'tr {e: 5d} {ii: 5d} avg y={y.mean(dtype=torch.double).item()}') if ii % 2 == 0: print("="*200 + "BEFORE ERROR" + "="*200) 1/0 ``` The process will hang at joining the putting thread of `data_queue` in **main process**. The root cause is that too many things are put in the queue from the **worker processes**, and the `put` at https://github.com/pytorch/pytorch/blob/062ac6b472af43c9cf83d285e661e24244551f85/torch/utils/data/dataloader.py#L928 is blocked at background thread. The `pin_memory_thread` exits from the set `pin_memory_thread_done_event`, without getting the `(None, None)`. Hence, the main process needs the same treatment as the workers did at https://github.com/pytorch/pytorch/blob/062ac6b472af43c9cf83d285e661e24244551f85/torch/utils/data/_utils/worker.py#L198 . After the patch, the script finishes correctly. Pull Request resolved: https://github.com/pytorch/pytorch/pull/33721 Differential Revision: D20089209 Pulled By: ezyang fbshipit-source-id: e73fbfdd7631afe1ce5e1edd05dbdeb7b85ba961

view details
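The fix described in this commit comes down to a general pattern: when a producer thread may be blocked on `put()` into a bounded queue, the consumer must keep draining the queue while joining the thread, otherwise join hangs forever. A minimal sketch of that pattern with plain stdlib queues (illustrative helper names, not the DataLoader's actual code):

```python
import queue
import threading

def producer(q, n, done):
    # blocks on put() once the queue is full
    for i in range(n):
        q.put(i)
    done.set()

def drain_and_join(q, t):
    """Join thread t while draining q, so a blocked put() can always complete."""
    while t.is_alive():
        try:
            q.get(timeout=0.05)
        except queue.Empty:
            pass
        t.join(timeout=0.05)

q = queue.Queue(maxsize=2)
done = threading.Event()
t = threading.Thread(target=producer, args=(q, 100, done))
t.start()
drain_and_join(q, t)      # a plain t.join() here could deadlock
assert done.is_set() and not t.is_alive()
```

A plain `t.join()` in place of `drain_and_join` would hang once the producer fills the two-slot queue, which mirrors the DataLoader hang above.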

albanD

commit sha 6bdb59539fcfa9111e411e9dd7f958c1348ba6ac

follow-up test_torch .data removal (#33696) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33696 This changes two tests: - The batchnorm inference cannot change the memory format of the weights as they are 1D. So this is removed. - The batchnorm test now run both in affine and not affine mode. - I added back the test for type errors using .data. In particular, `.data` allows to change the type of a Tensor inplace (very bad, never do it!) but since it is possible, we should test it until .data is removed. cc Enealor who did the first version of the PR. Test Plan: Imported from OSS Differential Revision: D20069241 Pulled By: albanD fbshipit-source-id: a0348f40c44df38d654fb2a2b2b526d9d42f598a

view details

Jeong Ukjae

commit sha fd175fa8a235569bda75718e3f9573e28d889f12

fix bugs in gen_pyi.py (#33748) Summary: This loop should generate type hints for inplace binary operator methods (`binop` variable) but had been using `name` variable. That's why that wrong type hints had been generated. Resolve https://github.com/pytorch/pytorch/issues/33698 --- Current `__init__.pyi` has these type hints. ```python class Tensor: # some codes here... overload def zeros_like_(self, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like_(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like_(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like_(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like__(self, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like__(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like__(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like__(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like___(self, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like___(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like___(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like___(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like____(self, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like____(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def zeros_like____(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def zeros_like____(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... 
# some codes here... ``` But `__init__.pyi` should generate these type hints. ```python class Tensor: # some codes here... overload def add_(self, other: Union[Tensor, Number]) -> Tensor: ... overload def add_(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def add_(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def add_(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... # some codes here... overload def div_(self, other: Union[Tensor, Number]) -> Tensor: ... overload def div_(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def div_(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def div_(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... # some codes here... overload def mul_(self, other: Union[Tensor, Number]) -> Tensor: ... overload def mul_(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def mul_(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def mul_(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... # some codes here... overload def sub_(self, other: Union[Tensor, Number]) -> Tensor: ... overload def sub_(self, value: Number, other: Union[Tensor, Number]) -> Tensor: ... overload def sub_(self, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... overload def sub_(self, value: Number, other: Union[Tensor, Number], *, out: Optional[Tensor]=None) -> Tensor: ... # some codes here... ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/33748 Differential Revision: D20090444 Pulled By: ngimel fbshipit-source-id: e4a5dd08126629ec4c54b630a87ee540e669ec9a

view details
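The symptom above — every inplace binop stub named `zeros_like_`, `zeros_like__`, and so on — is what you get when a loop mutates a stale variable instead of using the loop variable. An illustrative reconstruction of the bug and its fix (hypothetical function names, not the actual `gen_pyi.py` code):

```python
def gen_inplace_stubs_buggy(binops, name):
    # BUG: appends "_" to the leftover `name` variable instead of using `binop`,
    # so underscores accumulate across iterations.
    stubs = []
    for binop in binops:
        name += "_"
        stubs.append(f"def {name}(self, other): ...")
    return stubs

def gen_inplace_stubs_fixed(binops):
    # Correct: derive each stub name from the loop variable.
    return [f"def {binop}_(self, other): ..." for binop in binops]

buggy = gen_inplace_stubs_buggy(["add", "sub"], "zeros_like")
fixed = gen_inplace_stubs_fixed(["add", "sub"])
```

The buggy version emits `zeros_like_` then `zeros_like__`, matching the broken `.pyi` shown above; the fixed version emits `add_` and `sub_` as intended.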

Jeong Ukjae

commit sha 819ca2c285d058fc5786d52d02dfc5f73433ae99

add bfloat16 conversion method in type stub (__init__.pyi) (#33747) Summary: Resolve https://github.com/pytorch/pytorch/issues/33699 `torch/__init__.pyi` will be generated like ```python # TODO: One downside of doing it this way, is direct use of # torch.tensor.Tensor doesn't get type annotations. Nobody # should really do that, so maybe this is not so bad. class Tensor: requires_grad: _bool = ... grad: Optional[Tensor] = ... # some methods here... overload def bernoulli_(self, p: _float=0.5, *, generator: Generator=None) -> Tensor: ... def bfloat16(self) -> Tensor: ... def bincount(self, weights: Optional[Tensor]=None, minlength: _int=0) -> Tensor: ... # some methods here... ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/33747 Differential Revision: D20090316 Pulled By: ngimel fbshipit-source-id: b9ce4c0d4ef720c94ccac0a0342a012e8cf3af0c

view details

Alex Cheparukhin

commit sha ee23944f46a89208ccba26c8fa0c97fe585265ac

[Caffe2] Fix shape inference for element-wise operators (#33431) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33431 Some elementwise operators don't have shape and type inference specified for the output tensor: `BitwiseOr`, `BitwiseAnd`, `BitwiseXor`, `Not`, `Sign`. This change fixes this issue: - For `Not` and `Sign` operators, the output has the same type and shape as the input, so `IdenticalTypeAndShapeOfInput` function is used to specify that. - For bitwise operators created by `CAFFE2_SCHEMA_FOR_BINARY_BITWISE_OP` macro, the type and shape inference rules should be the same as for other binary element-wise operators, so `TensorInferenceFunction(ElementwiseOpShapeInference)` is used to specify that. Also some tests were modified to ensure that the shape and type are inferred (`ensure_outputs_are_inferred` parameter) Test Plan: ``` CAFFE2_ASSERT_SHAPEINFERENCE=1 buck test caffe2/caffe2/python/operator_test:elementwise_ops_test CAFFE2_ASSERT_SHAPEINFERENCE=1 buck test caffe2/caffe2/python/operator_test:math_ops_test ``` Note that the tests have to be executed with `CAFFE2_ASSERT_SHAPEINFERENCE=1` in order to fail upon shape inference failure. Reviewed By: idning Differential Revision: D19880164 fbshipit-source-id: 5d7902e045d79e5669e5e98dfb13a39711294939

view details

Jianyu Huang

commit sha 5ef1c2c5d22560457f5c8d03b6e4fbde1af997f8

Back out "[pt][quant] RNN debug test" (#33750) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33750 Original commit changeset: 8c38d8f067e5 ghstack-source-id: 98911215 Test Plan: CI Differential Revision: D20090521 fbshipit-source-id: 73df43ad60574e44e80b36ebf6392030c3efb66e

view details

Gregory Chanan

commit sha 8196ec0115f903e4eae9a02727f3e9926662e950

Remove some dead THStorage related code. (#33734) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33734 Test Plan: Imported from OSS Differential Revision: D20084030 Pulled By: gchanan fbshipit-source-id: 29aa5459e8ecc8af8af31157797f44057d6a786e

view details

push time in a month

push event kashif/firedup

Dr. Kashif Rasul

commit sha e14a7183131dcbf054b9dcad88aeaf983a027c38

formatting

view details

push time in a month

started zykls/whynot

started time in a month

started openai/gym3

started time in a month

started zalandoresearch/CRISP

started time in a month

delete branch zalandoresearch/pytorch-ts

delete branch : g-beats

delete time in a month

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha 6945aa36ba8bb31795c852cfde12d87a142afd7d

typo

view details

push time in a month

create branch zalandoresearch/pytorch-ts

branch : g-beats

created branch time in a month

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha 9ac375a89db68d52ac74f5aa780aa1851c3946a3

log_abs_det doesn't change in the forward

view details

push time in a month

started deepmind/reverb

started time in a month

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha 6efcd6c5b98a3324452c9c1ae457c0c19440c26a

upstream fixes to evaluator and NB output

view details

push time in a month

started deepmind/acme

started time in a month

push event zalandoresearch/pytorch-ts

ssmall41

commit sha deaee145520691cd9b14fc9294f09b47e2343afb

Add imports from holiday.py in unit test module (#14) Fixes failing unit tests due to missing imports.

view details

push time in a month

PR merged zalandoresearch/pytorch-ts

Add imports from holiday.py in unit test module

Fixes failing unit tests due to missing imports.

+3 -1

0 comment

1 changed file

ssmall41

pr closed time in a month

push event zalandoresearch/pytorch-ts

Kashif Rasul

commit sha d0be718d443f8d676640b3aa75a7a154edad5dce

update torch requirements, needed for the MixtureSameFamily

view details

push time in a month

started uber-research/Map-Elites-Evolutionary

started time in a month

started uber-research/Synthetic-Petri-Dish

started time in a month

issue closed zalandoresearch/fashion-mnist

What's the mean and std of FMNIST?

Hi, I was wondering if anyone has handy the calculated mean and std of Fashion-MNIST? Thanks.

closed time in a month

kirk86

issue comment zalandoresearch/fashion-mnist

What's the mean and std of FMNIST?

mean of 73 and std of 90

kirk86

comment created time in a month
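The figures quoted in the comment are on the raw 0–255 pixel scale; dividing by 255 gives roughly 0.286 and 0.353, the form usually passed to a normalization transform. A quick numpy sketch of how such dataset statistics can be computed (illustrative, using synthetic images rather than the real Fashion-MNIST loader):

```python
import numpy as np

def pixel_stats(images):
    """Mean and std over all pixels of a stack of grayscale images."""
    arr = np.asarray(images, dtype=np.float64)
    return arr.mean(), arr.std()

# synthetic 8-bit images as a stand-in for the real dataset
images = np.random.default_rng(0).integers(0, 256, size=(16, 28, 28))
mean, std = pixel_stats(images)
# scale to [0, 1] as expected by typical normalization transforms
norm_mean, norm_std = mean / 255.0, std / 255.0
```

Running the same computation over the full Fashion-MNIST training set should reproduce values near 73 and 90 (about 0.286 and 0.353 after scaling).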

started bayartsogt-ya/albert-mongolian

started time in a month

push event zalandoresearch/pytorch-ts

Edrin Basha

commit sha 3b3b064c11adac7429deccd259c9a6ff1b622e2d

Add CustomHolidayFeatureSet (#13) * Add CustomHolidayFeatureSet Add CustomHolidayFeatureSet * Add tests for CustomHolidayFeatureSet Add tests for CustomHolidayFeatureSet * Adding to init py Adding to init py

view details

push time in a month

started didriknielsen/pixelcnn_flow

started time in a month

push event kashif/gluon-ts

Kashif Rasul

commit sha 53120d4af666e24fe92a2f5051f865399ad7a7da

Use forecast_start in RForecastPredictor (#798) * fix RForecastPredictor fix pandas Timestamp * use forecast_start helper

view details

Lorenzo Stella

commit sha a55ced0865d669442c7744ab70f736981bf723c9

Update workflows against nightly mxnet (#801)

view details

Kashif Rasul

commit sha 05d23a013d608ec3a157deb3df7428e204dddc79

LSTNet fixes (#791) * initial LSTNet fixes We need to use a Conv2D with appropriate kernel size * some fixes * Update src/gluonts/model/lstnet/_estimator.py Co-Authored-By: Lorenzo Stella <lorenzostella@gmail.com> * fix dimensions * fix typos * another typo * typo * fix rnn length * relax the metric a bit * scaler with keepdims=True * fix comment * add check for rnn length * Update test/model/lstnet/test_lstnet.py Co-authored-by: Lorenzo Stella <lorenzostella@gmail.com> Co-authored-by: Lorenzo Stella <lorenzostella@gmail.com>

view details

Leonardo Araneda Freccero

commit sha a96fed4aa75ffc73d70d76063068e752bd882b8a

Run nightly test (#797) * testing to run build on github * Latest release fetched over Github api The latest release is fetched over the Github API and is then cleaned up and use as the branch parameter in the git clone command. Thereafter the system install gluonts * merged installing of dependencies WIP * debug * debugging * Hopefully now running on windows too * Debug * Fixed typo * fix bug * debug * Splitting tag fetching into two tasks * Changed names and fixed bug * debug * Changed name and setup to run on schedule * Test running on latest release available via pip Debug run * Renamed the workflow files to better reflect what they do * Fixed bug * Fix flask dependency for Unix test * Jobs are now scheduled to run nightly and not on push Co-authored-by: Jasper <schjaspe@amazon.de> Co-authored-by: Lorenzo Stella <lorenzostella@gmail.com>

view details

Lorenzo Stella

commit sha e20c233eabc73a33193de6c1fb73360fafd08d26

Fix LSTNet observed indicator usage (#804) * add observed indicator to blocks inputs * copy test changes from #802 * reduce test cases * use observed indicator as loss weight

view details
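"Use observed indicator as loss weight" in the commit above refers to a standard masking technique: timesteps whose values were imputed rather than observed (indicator 0) should not contribute to the training loss. A generic numpy sketch of the idea (an assumed illustration, not GluonTS's actual implementation):

```python
import numpy as np

def masked_mean_loss(loss_per_step, observed):
    """Average per-timestep losses, counting only observed timesteps."""
    loss = np.asarray(loss_per_step, dtype=np.float64)
    mask = np.asarray(observed, dtype=np.float64)
    n_observed = max(mask.sum(), 1.0)   # avoid division by zero
    return float((loss * mask).sum() / n_observed)

# the missing timestep (indicator 0) does not affect the loss
result = masked_mean_loss([1.0, 100.0, 3.0], [1, 0, 1])  # -> 2.0
```

The large loss at the unobserved timestep is ignored entirely, so imputed values cannot dominate training.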

Jasper

commit sha b3763494da5169497fc8d41219eff1b80c313c90

Disable TQDM when running on SageMaker. (#810)

view details

Kashif Rasul

commit sha 41bd8a5b869c3f90e0879edb2aacd563c8e3039f

Fix negative binomial's scaling (#719)

view details

Stephan Rabanser

commit sha b42a85f6f01b40a80a425ed831482a485d8e8be8

Add logit normal distribution (#811)

view details

Lorenzo Stella

commit sha 46e22a7555a202b2d25b142d9fdbc0c326563cfd

bound negbin scale above 1 (#814)

view details

Leonardo Araneda Freccero

commit sha 0863e382ee3c2fb610e949be4673939bfaa1bb87

Fix nightly tests of release (#822) * DEBUG - Updated used requirements files for nightly release tests * DEBUG - fixed indentation error * Reenabled nightly schedule Ran the tests on my local github to verify changes. All tests pass and thus this commit reenables the nightly cron schedule that runs the job.

view details

Leonardo Araneda Freccero

commit sha 5ecf0ea772f4109cd937c5a316304e36d38012a8

Fix mxnet nightly tests (#827) * Debugging - added "" in pip install statement * Enabling schedule for MXNet nightly This version functions on local fork

view details

yx1215

commit sha 58727b3e8f3050b32b743de784847d9a8a008ba9

Add examples for forecasting COV19 cases (#809) * change the default value to a scaler instance * add example for forecasting COV19 cases * add example for forecasting COV19 cases * add example for forecasting COV19 cases * modify the example for forecasting COV19 cases Co-authored-by: Lorenzo Stella <lorenzostella@gmail.com>

view details

Konstantinos Benidis

commit sha 3aa027fcb17452d4aeb2521a6045c9749652e60b

Model averaging (#823) * model averaging Co-authored-by: Konstantinos Benidis <kbenidis@amazon.com>

view details

Bernie Wang

commit sha 0d963f7dc55ef866d86e33633a28d57dfab33adb

Mqcnn rts (#668) * Getting MQCNN up to date. Co-authored-by: Jasper Schulz <schjaspe@amazon.de> Co-authored-by: Aaron Spieler <aspiele@amazon.com> Co-authored-by: Lorenzo Stella <lorenzostella@gmail.com>

view details

Caner Turkmen

commit sha a757256fbc03ed45211305277942cc2df27ae7b6

add SampleForecast and Predictor objects for TPPs (#819)

view details

Lorenzo Stella

commit sha 44a70bd578a84bae8c553114ce33c9e39c757364

Update bug_report.md (#835)

view details

Stephan Rabanser

commit sha 3cbb5036e34bc5f3f40a3eff346cb1e2f4c50d42

Add temperature scaling to categorical distrubution (#792)

view details
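Temperature scaling of a categorical distribution divides the logits by a temperature T before the softmax; T > 1 flattens the distribution toward uniform, while T < 1 sharpens it. A minimal numpy sketch of the general technique (not GluonTS's exact code):

```python
import numpy as np

def temperature_softmax(logits, temperature=1.0):
    """Softmax over logits scaled by 1/temperature."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

sharp = temperature_softmax([2.0, 1.0, 0.0], temperature=0.5)
flat = temperature_softmax([2.0, 1.0, 0.0], temperature=100.0)
# `flat` is close to uniform; `sharp` concentrates mass on the largest logit
```

At T = 1 this reduces to the ordinary softmax, so the parameter can be tuned (or learned) without changing the model's interface.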

Stephan Rabanser

commit sha b2b7ed62e5661dd202c025dab2c8c23161f2a4c8

Representation module (#755) * Add representation module * Fix output case for binnings * Add representation tests * Add missing header * Fix docstring indent * Adapt representation concat in hybrid representation * Remove scale storing from Binning * Add dimension expansion representation * Add embedding representation * Add discrete PIT representation * Remove boolean output parameter form representation * Update representations * Update tests * Change representation spec * Update representations to new spec * Update tests to new representation spec * Fix missing return parameter in hybrid rep * Update default expansion axis * Fix hybrid representation axis * Fix dim expansion axis default value in docstring * Change -1 defaults to Optional * Fix dim concat docstring in hybrid representation * Add custom discretization operator * Enable context setting in initialize from dataset * Fix GRB test * Remove exit statement in categorical output * Make custom binning hybridization compatible * Ensure init from array is called correctly from wrappers * Enable representation chaining * Fix representation chain parameter * Update hybrid test to support chaining * Enable post transform for representation chain * Small fixes * Fix embedding docstring * Fix scale computation * Remove NOPScaling * Empty commit

view details

Kashif Rasul

commit sha 4b7c4adb940aae199967ed9e8c356edbad6b1ec7

Merge remote-tracking branch 'upstream/master'

view details

push time in a month

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha c21a353d5cb627dcbfa95122fdf4f19855b150ab

fix typo

view details

push time in a month

issue closed zalandoresearch/pytorch-ts

Working Example of TransformerTempFlowEstimator

I have been trying to get TransformerTempFlowEstimator working without success. Can you provide an example script? Issues include a RuntimeError (Sizes of tensors must match except in dimension 2. Got 1 and 32 in dimension 0) and not understanding how the data loading works for multivariate data. My example is below:

from pts.dataset import MultivariateGrouper
import pandas as pd
import torch

from pts.dataset import ListDataset
from pts.model.transformer_tempflow import TransformerTempFlowEstimator
from pts import Trainer

url = "https://raw.githubusercontent.com/numenta/NAB/master/data/realTweets/Twitter_volume_AMZN.csv"
df = pd.read_csv(url, header=0, index_col=0, parse_dates=True)

train_ds = ListDataset(
    [{"start": df.index[0], "target": df.value[:"2015-04-05 00:00:00"]+i}
        for i in range(2)],
    freq="5min"
)

grouper_train = MultivariateGrouper(max_target_dim=2)
gt = grouper_train(train_ds)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

trainer = Trainer(epochs=10)

estimator = TransformerTempFlowEstimator(input_size=1,
                                         freq="5min",
                                         prediction_length=100,
                                         context_length=4,
                                         target_dim=64,
                                         cardinality=[7, 20],
                                         trainer=trainer)

predictor = estimator.train(training_data=gt)

closed time in a month

AndreRoehrig

issue comment zalandoresearch/pytorch-ts

Working Example of TransformerTempFlowEstimator

fixed by 6749293

AndreRoehrig

comment created time in a month

push event zalandoresearch/pytorch-ts

Dr. Kashif Rasul

commit sha 674929344baa1ea38223e6767896110eb0ebc216

added multivariate Flow example for issue #12

view details

push time in a month
