AdrienCorenflos/HawkesProcesses 1

Package to simulate and fit Hawkes processes for arrival-time phenomena

AdrienCorenflos/Bayesian-Filtering-and-Smoothing 0

Companion python code for the book Bayesian Filtering and Smoothing by Simo Särkkä

AdrienCorenflos/chess_game 0

Chess Game coded in python

AdrienCorenflos/DeepBSDE 0

Deep BSDE solver in TensorFlow

AdrienCorenflos/DeepHPMs 0

Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations

AdrienCorenflos/EchoTorch 0

A Python toolkit for Reservoir Computing and Echo State Network experimentation, based on PyTorch. EchoTorch is the only Python module available to easily create Deep Reservoir Computing models.

AdrienCorenflos/fast-soft-sort 0

Fast Differentiable Sorting and Ranking

create branch AdrienCorenflos/AdrienCorenflos.github.io

branch : master

created branch time in a day

created repository AdrienCorenflos/AdrienCorenflos.github.io

Personal website

created time in a day

issue comment google/jax

pmap correct usage

Thanks, that's much better indeed. Do you think there's a similar way around irregular splitting?

AdrienCorenflos

comment created time in 4 days

issue comment google/jax

No details in tensorboard trace profile

CPU, yes. So that includes my personal tracing?

AdrienCorenflos

comment created time in 6 days

push event AdrienCorenflos/jax

AdrienCorenflos

commit sha 1d7e1d824dc49dcb21c3892b1065ffa82a1e84cd

Fix the parametrisation

view details

push time in 6 days

issue opened google/jax

No details in tensorboard trace profile

Hi,

I'm following the profiling guidelines using TensorBoard, but it really doesn't show any details in the trace: https://ibb.co/C7grDt8

Do you know what I could be doing wrong?
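For reference, here is a minimal sketch of the tracing flow I'm following (hedged: the exact jax.profiler API has shifted across JAX versions, so treat this as illustrative rather than the exact code in question):

import jax
import jax.numpy as jnp

# start a trace that TensorBoard can read: tensorboard --logdir=/tmp/jax-trace
jax.profiler.start_trace("/tmp/jax-trace")

x = jnp.ones((1000, 1000))
jnp.dot(x, x).block_until_ready()  # block so the work lands inside the trace

jax.profiler.stop_trace()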

Thanks,

Adrien

created time in 6 days

issue comment google/jax

pmap correct usage

Also the above would obviously be better with array split: here's a draft PR for it.

AdrienCorenflos

comment created time in 7 days

PR opened google/jax

Adding support for array split

Adapted the split code to reproduce array_split behaviour
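For context, the numpy behaviour being reproduced: unlike np.split, np.array_split accepts a section count that does not evenly divide the axis length and returns slightly uneven chunks. A quick illustration:

import numpy as np

# np.split would raise here because 3 does not divide 7
print(np.array_split(np.arange(7), 3))
# [array([0, 1, 2]), array([3, 4]), array([5, 6])]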

+30 -5

0 comment

2 changed files

pr created time in 7 days

push event AdrienCorenflos/jax

AdrienCorenflos

commit sha 515de54ace550273a8fdcb5d9007976c520ffe5b

Adding support for array split

view details

push time in 7 days

fork AdrienCorenflos/jax

Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

fork in 7 days

issue comment google/jax

Padding doesn't work in jitted mode

After #3627, here's the new error message even with static_argnums=(1,):

TypeError: jax.numpy.pad got an unexpected type for 'pad_width': got [[0 0]
 [0 1]] of type <class 'jax.interpreters.xla.DeviceArray'>.

Unlike numpy, jax.numpy requires the 'pad_width' argument to jax.numpy.pad to
be an int, single-element tuple/list with an int element, or tuple/list of int
pairs (each pair a tuple or list); in particular, 'pad_width' cannot be an
array.

If you need to manipulate a pad argument with NumPy, as in the original code, I recommend doing it with np rather than jnp, perhaps like this:

from jax import jit as jjit
import jax.numpy as jnp
import numpy as np

vals = np.random.randn(50, 100)

def pad_last_dim(array, pad_size):
    ndim = jnp.ndim(array)
    npad = np.zeros((ndim, 2), dtype=np.int32)
    axis = ndim - 1
    npad[axis, 1] = pad_size
    npad = list(map(list, npad))
    return jnp.pad(array, npad, 'constant', constant_values=0)

print(pad_last_dim(vals, 1))  # All good
print(jjit(pad_last_dim, static_argnums=(1,))(vals, 1))

However, it's probably better just to use lists/tuples here:

from jax import jit as jjit
import jax.numpy as jnp
import numpy as np

vals = np.random.randn(50, 100)

def pad_last_dim(array, pad_size):
    ndim = jnp.ndim(array)
    npad = [[0, pad_size]] * ndim
    return jnp.pad(array, npad, 'constant', constant_values=0)

print(pad_last_dim(vals, 1))  # All good
print(jjit(pad_last_dim, static_argnums=(1,))(vals, 1))

Your solution worked for me, thanks a lot!

AdrienCorenflos

comment created time in 7 days

issue opened google/jax

pmap correct usage

Hi,

What is the correct way of parallelising across devices for batch data that is bigger than the number of devices you have?

So far I am doing something like the following:

from jax import jit, pmap, device_count
import jax.numpy as jnp

@jit
def fun(z):
    return jnp.sum(z, -1)

__pfun = pmap(fun)

def _pfun(z, n):
    zs = jnp.stack(jnp.split(z, n))
    return __pfun(zs).flatten()

pfun = jit(_pfun, static_argnums=1)

x = jnp.ones((device_count() * 8, 5))  # example input; the leading dim must be a multiple of the device count
pfun(x, device_count())

It feels a bit wrong to have to split, then join, then split again (implicitly in the pmap), then join again.
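A reshape-based alternative avoids the explicit split/stack round-trip (a hedged sketch, assuming the batch dimension divides evenly across devices):

from jax import jit, pmap, device_count
import jax.numpy as jnp

@jit
def fun(z):
    return jnp.sum(z, -1)

def pfun_reshape(z):
    n = device_count()
    # reshape to (devices, batch // devices, ...) instead of split + stack
    zs = z.reshape((n, z.shape[0] // n) + z.shape[1:])
    return pmap(fun)(zs).reshape(-1)

x = jnp.ones((device_count() * 8, 5))
print(pfun_reshape(x))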

Thanks,

Adrien

created time in 7 days

issue comment google/jax

Padding doesn't work in jitted mode

Thanks a lot for that, Matt. It's a bit late for me, but I'll come back to it tomorrow. I think you should forbid lists and ndarrays, tbh; that would be consistent with TF, where lists are considered a collection of some kind.

In the meantime I like your solution and I'll use it (I don't need the ndarray, it was just easier for setting the padding).

AdrienCorenflos

comment created time in 7 days

issue opened google/jax

Padding doesn't work in jitted mode

Hi,

Thanks for the work on the library. For a project I need to implement padding on the last dimension of a tensor. Everything works fine when the function is not jitted, but I get a conversion error when I jit it. It seems to me like it's coming from a bug in the lax_numpy internals (this line), but maybe I'm missing something...

Adrien

The code is as follows:

from jax import jit as jjit
from jax import ops
import jax.numpy as jnp
import numpy as np

vals = np.random.randn(50, 100)

def pad_last_dim(array, pad_size):
    ndim = jnp.ndim(array)
    npad = jnp.zeros((ndim, 2), dtype=jnp.int32)
    axis = ndim - 1
    npad = ops.index_update(npad, ops.index[axis, 1], pad_size)
    return jnp.pad(array, npad, 'constant', constant_values=0)

print(pad_last_dim(vals, 1))  # All good
print(jjit(pad_last_dim)(vals, 1))  # raises

created time in 8 days

fork AdrienCorenflos/normalizing-flows

Implementation of normalizing flows in TensorFlow 2 including a small tutorial.

fork in 8 days

fork AdrienCorenflos/Kalman-and-Bayesian-Filters-in-Python

Kalman Filter book using Jupyter Notebook. Focuses on building intuition and experience, not formal proofs. Includes Kalman filters, extended Kalman filters, unscented Kalman filters, particle filters, and more. All exercises include solutions.

fork in 9 days

issue comment google/jax

Compute gradient during the forward pass

Thanks for all that, Matt. I was a bit busy lately; I'll give it a try ASAP.

AdrienCorenflos

comment created time in 14 days

push event AdrienCorenflos/particles

AdrienCorenflos

commit sha ac92a3e69e142b0fb065877a97802082942fa267

Support 1D

view details

push time in 17 days

fork AdrienCorenflos/fast-soft-sort

Fast Differentiable Sorting and Ranking

fork in 17 days

push event JTT94/filterflow

AdrienCorenflos

commit sha 94ac82875800bd30db527a0432a25580d13ea124

delete SEIR

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 083615adec06c67ddd43786b227053f91fcce2b0

Stay in log-space

view details

push time in a month

issue opened google/jax

Compute gradient during the forward pass

Hi,

I have a specific use case where I can compute the gradient of my function efficiently during the forward pass. What is the preferred way of implementing the custom gradient in this case?

Thanks

What I have tried (reproducing a similar example without the real maths):

import jax
from jax.lax import while_loop
import jax.numpy as jnp
from functools import partial


def example_loop_fwd(x, n, seed):
    return _example_loop_fwd(x, n, seed)


def _example_loop_fwd(x, n, seed):
    key = jax.random.PRNGKey(seed)
    grads = jnp.ones_like(x)
    def cond_fun(vars):
        _, _, _, i = vars
        return i < n

    def body_fun(vars):
        k, y, g, i = vars
        k, subkey = jax.random.split(k)
        uniforms = jax.random.uniform(subkey, shape=x.shape, minval=0.5, maxval=1.5)
        y = y * uniforms
        return k, y, g * uniforms, i + 1  # return the split key, not the original, so each iteration draws fresh uniforms

    _, res, g, _ = while_loop(cond_fun, body_fun, (key, x, grads, 0))
    return jnp.sum(res), g


@partial(jax.custom_vjp, nondiff_argnums=(1, 2))
def example_loop(x, n, seed=0):
    res, g = _example_loop_fwd(x, n, seed)
    return res

def example_loop_bwd(n, seed, g, dres):
    return (g * dres,)


example_loop.defvjp(example_loop_fwd, example_loop_bwd)
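A minimal usage sketch of the above (assuming the definitions in this snippet): jax.grad should then route through the gradient stored during the forward pass.

import jax
import jax.numpy as jnp

x = jnp.ones(3)
value = example_loop(x, 5)            # forward pass computes the gradient as a by-product
grads = jax.grad(example_loop)(x, 5)  # backward pass just scales the stored gradient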

created time in a month

push event AdrienCorenflos/particles

AdrienCorenflos

commit sha f00fbe460048352d0a81e3dd31c3bfaf960f27c8

Quick example of outer seeding

view details

push time in a month

issue comment nchopin/particles

multiSMC seeding does not work with scipy frozen distribution

Just to comment on what Nicolas said: when freezing a scipy distribution, you not only freeze the parameters, you also freeze the numpy random state that came with it (see this line in the scipy source code).

So how does that affect multiprocessing? Fundamentally there are two conflicting phenomena at stake here: global state and pickling. The way numpy seeding works is by modifying a global generator object (you can find it under np.random.mtrand._rand). Before forking new processes and running the function you want to parallelize, multiprocessing pickles the arguments, and at some point ends up pickling the global random state registered within the distribution. When the arguments are unpickled in their respective processes, the reference to the global state is lost and you end up with two states: one local to your distribution, and the "new" global state specific to the process. Calling np.random.seed changes the latter, not the former, hence the observed behaviour. This is probably most easily seen in the minimal example below:

import multiprocessing
import numpy as np
import scipy.stats as st

loc = 0.
scale = 1.

my_dist = st.norm(loc, scale)

class seeder(object):
    def __init__(self, dist):
        self.dist = dist  # frozen distribution, carrying its own pickled random state

    def __call__(self, seed):
        np.random.seed(seed)  # reseeds only the process-global state...
        print("global seed:", np.random.get_state()[1][0])
        print("local seed:", self.dist.random_state.get_state()[1][0])  # ...not the state frozen into dist
        return self.dist.rvs()

seeded_dist = seeder(my_dist)

np.random.seed(0)

seeds = [1, 2, 3, 4]
with multiprocessing.Pool(len(seeds)) as p:
    res = p.map(seeded_dist, seeds)

print(res)
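As a workaround (a hedged sketch, not from the original thread), you can give the frozen distribution its own seeded random_state inside each worker instead of touching the global state; local_seeder is a hypothetical name for illustration:

import numpy as np

class local_seeder(object):  # hypothetical
    def __init__(self, dist):
        self.dist = dist

    def __call__(self, seed):
        # reseed the state the frozen distribution actually uses
        self.dist.random_state = np.random.RandomState(seed)
        return self.dist.rvs()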

On Sat, 6 Jun 2020 at 15:12, Nicolas Chopin <notifications@github.com> wrote:

Frozen distributions are evil... OK, I will close this issue when this point is properly documented somewhere in the package.


hai-dang-dau

comment created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 4988f56746001937a8764cdadad96eb7b4a1c2c9

remove API key

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 297be2a3b33ff7eb93f8ce2dd7f22110f2c29682

Delete deprecated notebooks

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha aa4c0a1034edea53b80885eb4ea6fb2ee7d7e23f

remove identifying info

view details

AdrienCorenflos

commit sha 4f78de2c3c494a4ad62674bf23c3cd80adb48af1

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 1fc6af85b16e54f22dc25bd374fb1524b4072380

Update README.md

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 1ff4eaa033af5f54720e3da09da3469aeb3a2666

Update README.md

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha f57fe0afb6df6a7d573298757f053ecf93f4002b

Update README.md

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha eb1bee74ad599ac1c337fb07b054b5231a6d38ab

Update README.md

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha aafd1b72c2a258f806b052ebfb1bc3b255f95ab5

Update README.md

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha e85455517524893897c5e08328003e47eb4076c5

modif README

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 930afcf1c84fdfd71d830fc36a914a8e0315e6cb

modif README

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha a419dcf72051d7ed6ca3c8fbaf243ee2ab212378

modif README

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha b96775d7420b46222b6cac37a72d0d0c2d498ea6

modif README

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 9adf43caaf9d445e6b3873cd9927f7148aac8b59

Added a proper README

view details

AdrienCorenflos

commit sha b66b7ce9e6fe11d383e1e7d7ae5318d53b4d2e2e

Merge remote-tracking branch 'origin/master'

view details

push time in a month

fork AdrienCorenflos/smooth-ot

Python implementation of smooth optimal transport.

fork in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 75747bd81d5ee88d048d833042418d7135751caa

Fix point cloud optimization for no intermediate resampling

view details

push time in a month

issue comment tensorflow/tensorflow

tf.Module breaks gradient registration

Yes, you're right, that was me being stupid. Sorry to have made you lose time.

AdrienCorenflos

comment created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha c9e1c76c081f859b617ad41dbf5fad00b455f882

Add illustration of resampling schemes

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 3293e359007ed9785a10e421db344a962fa9f1fc

optimized imports

view details

AdrienCorenflos

commit sha 4519e889b21a3894c9af4b8cc4cf8e5486921aa1

Auxiliary not implemented

view details

AdrienCorenflos

commit sha b64e37a950ac6661cfb6c52ed138c2cbe5323d18

Clean SEIR

view details

AdrienCorenflos

commit sha fdb18241f435e9e55205525cefda2d2e7e05b1a0

deletion

view details

AdrienCorenflos

commit sha d34cf818c5cca8115635d1d1a67bed4c94d83b88

deletion

view details

AdrienCorenflos

commit sha 571ea8d4ce043314cac3430431742e144f748843

deletion

view details

AdrienCorenflos

commit sha 140432eb7b9a1348f1f0e550e1eb36965561ab2d

top level imports

view details

AdrienCorenflos

commit sha 08e4e895c1de3c6ef878d2bd1e1bc6b8be6f90e9

added a tutorial

view details

AdrienCorenflos

commit sha e900aa10341a44238df1b120b1b5dac5d3b95d13

added a tutorial

view details

AdrienCorenflos

commit sha 4cbaf5f8a125137d32991d69cfe0b62cc2363414

Merge branch 'master' into cleanup-for-submission # Conflicts: # scripts/simple_linear_common.py

view details

AdrienCorenflos

commit sha 2f42f3d2b6b1eece9507ad525b7ca59b27684da5

Add notebooks Fix bug in scaling for 1D states

view details

push time in a month

push event JTT94/filterflow

anon284

commit sha 40853980df32720788aa23e0222afbe6d6bd4d66

add flag for data

view details

anon284

commit sha 320b2d6f9207a1302ccffb305b0527dfd1a8bc11

Merge branch 'master' of https://github.com/JTT94/filterflow

view details

anon284

commit sha 77dedddf6b7f57cf170fcb61b5bed64549b76e29

update file paths

view details

AdrienCorenflos

commit sha bbd18ff6b6fd9442a5bcbcaaa5585e06fc2de79b

test for global opt

view details

AdrienCorenflos

commit sha e289d1f670bd78e43fad22b81aeb616a3b69635a

test for global opt

view details

AdrienCorenflos

commit sha 05b3640e0853edcd0dcb90db5808aecf4d9138b9

Merge remote-tracking branch 'origin/master'

view details

AdrienCorenflos

commit sha 441ef1aa4c26713a2dfa6ecc5ede3682a160244e

test for global opt

view details

AdrienCorenflos

commit sha 391263ab0e6007741dc49a1133c1d99cecf8ae98

update .sh

view details

AdrienCorenflos

commit sha 6ce9e095261e5de2ad30c229da9bd71b5ff83350

test for global opt

view details

AdrienCorenflos

commit sha 20623f2a1a8ff64828cbdb44147b70476cbf5ff0

Duplicate lines

view details

AdrienCorenflos

commit sha 084b3ec04a4fcd2b925339342be8ae2d3237d634

smaller state - changed formatting

view details

AdrienCorenflos

commit sha b9d5f31610890a7ff310545b7f476690633420f1

rollback

view details

anon284

commit sha bb0f2ac0e0e7c7b275d4b403cc89b25c350d2087

new proposal run

view details

anon284

commit sha edf2e587b6a66bb554714b41c467628c42344373

new run

view details

anon284

commit sha 47b3b8361ad66934967664e4981fb218fc91e655

Merge branch 'master' of https://github.com/JTT94/filterflow

view details

anon284

commit sha d8c8d052faffb99eae7f364dac1037f542eda684

add write csv

view details

AdrienCorenflos

commit sha 45877664ce6dd64c58310b1da8dad1ea83c476c4

fix scripts

view details

AdrienCorenflos

commit sha 1acea73e63cf0ebd33455c757a2a8f9ef880d639

Merge remote-tracking branch 'origin/master'

view details

AdrienCorenflos

commit sha 4cbaf5f8a125137d32991d69cfe0b62cc2363414

Merge branch 'master' into cleanup-for-submission # Conflicts: # scripts/simple_linear_common.py

view details

AdrienCorenflos

commit sha 2f42f3d2b6b1eece9507ad525b7ca59b27684da5

Add notebooks Fix bug in scaling for 1D states

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 45877664ce6dd64c58310b1da8dad1ea83c476c4

fix scripts

view details

AdrienCorenflos

commit sha 1acea73e63cf0ebd33455c757a2a8f9ef880d639

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha b9d5f31610890a7ff310545b7f476690633420f1

rollback

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 084b3ec04a4fcd2b925339342be8ae2d3237d634

smaller state - changed formatting

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 20623f2a1a8ff64828cbdb44147b70476cbf5ff0

Duplicate lines

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 6ce9e095261e5de2ad30c229da9bd71b5ff83350

test for global opt

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 391263ab0e6007741dc49a1133c1d99cecf8ae98

update .sh

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 441ef1aa4c26713a2dfa6ecc5ede3682a160244e

test for global opt

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha bbd18ff6b6fd9442a5bcbcaaa5585e06fc2de79b

test for global opt

view details

AdrienCorenflos

commit sha e289d1f670bd78e43fad22b81aeb616a3b69635a

test for global opt

view details

AdrienCorenflos

commit sha 05b3640e0853edcd0dcb90db5808aecf4d9138b9

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha e900aa10341a44238df1b120b1b5dac5d3b95d13

added a tutorial

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 08e4e895c1de3c6ef878d2bd1e1bc6b8be6f90e9

added a tutorial

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 571ea8d4ce043314cac3430431742e144f748843

deletion

view details

AdrienCorenflos

commit sha 140432eb7b9a1348f1f0e550e1eb36965561ab2d

top level imports

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha d34cf818c5cca8115635d1d1a67bed4c94d83b88

deletion

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha b64e37a950ac6661cfb6c52ed138c2cbe5323d18

Clean SEIR

view details

AdrienCorenflos

commit sha fdb18241f435e9e55205525cefda2d2e7e05b1a0

deletion

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 4519e889b21a3894c9af4b8cc4cf8e5486921aa1

Auxiliary not implemented

view details

push time in a month

create branch JTT94/filterflow

branch : cleanup-for-submission

created branch time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 10ffb4e77d129f0bb6e1cb1822212903c71e23bb

Stochastic volatility model

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 95a429a6311b867d379a0a117905afc8a6a4a097

Illustration on simple 2D example

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha dd183587609399c69a649eb07eaafdf87125544d

Stochastic volatility model

view details

AdrienCorenflos

commit sha b1cd3fe5e64be72470caa18e866a8418882b39b1

Merge remote-tracking branch 'origin/master'

view details

push time in a month

PR opened nchopin/particles

Bug fix

Undefined variable in MV Stoch vol

+1 -1

0 comment

1 changed file

pr created time in a month

push event AdrienCorenflos/particles

Nicolas Chopin

commit sha cfa30723808d83e25edf828318ebd21a799a59c9

distributions: added DiscreteUniform, documented Categorical

view details

Nicolas Chopin

commit sha faf9047a988fb9f1c2a0faf67c0a143c2d37fad1

distributions: added DiscreteUniform, Prod, improved doc at places

view details

Nicolas Chopin

commit sha 4bbad401c907fbf0990c334deb2d389a99eb7c70

Merge branch 'master' into experimental

view details

Nicolas Chopin

commit sha f621d7640a207d1e904c6026c18fb89019df0253

Merge branch 'master' into experimental

view details

Nicolas Chopin

commit sha f85c3adf4099462981db7dd6d8cc13de5d69abd7

* Fixed issue #14: book/mle/mle_neuro.py: added thaldata.csv dataset * added missing plots in book/smc_samplers/logistic_reg.py * tentative implementation of conditional distributions in Prob

view details

Nicolas Chopin

commit sha e201825a3fb2bdd498c68e0168f69d24aedab111

Merge remote-tracking branch 'adrien/master' into extra

view details

Nicolas Chopin

commit sha 0cc7f2cdcb736f67c9ece4ce7cb9d6fdf4c5e332

Small changes after Adrien's commit: + utils.py constant max_int32 now called MAX_INT32 + put back original version of searchsorted, needs further investigation

view details

Nicolas Chopin

commit sha 7c41a47d07fe4aedd548f175a556869da110bd07

Merge branch 'extra' (Adrien's modifications) into experimental

view details

Nicolas Chopin

commit sha 79498bfdde18a7d4610b9375f078dfc2bd216418

* distributions: added IID , removed dim keword from loc-scale distributions * book/smc_samplers: cosmetic changes

view details

Nicolas Chopin

commit sha 5f9106d7e69b66d60fa47b6e8bead8ca867ad364

book/pmcmc: cosmetic change for some plots

view details

Nicolas Chopin

commit sha 0ec81ea047b707833d4bad3e709ea9e48a4ef16f

book/smc_samplers: added missing plots

view details

Nicolas Chopin

commit sha 8245a69b4f3a7ce0453adfab2091d3add59e3bfa

costemic changes for book plots: * mle / neuro * pmcmc / ecological

view details

Nicolas Chopin

commit sha e0c16f6ac4ca013ca27e503f2a096f04da2389d4

book plots: a few more cosmetic changes

view details

Nicolas Chopin

commit sha 2c60d46ebc0bafd7c67b89b5e5e96eafaf057a5c

book/smoothing: more cosmetic changes

view details

Nicolas Chopin

commit sha 65066b890fbd371c929e474d82f0bd5c1aed2f1c

book/sqmc: fix plot Gordon

view details

Nicolas Chopin

commit sha d3cff217cb54a0e121b7926f932d28c2fa589325

book/sqmc: fix plot Gordon

view details

Nicolas Chopin

commit sha db7ae39332b8d40e2cb2227af7a2887b2f73ded7

Merge branch 'experimental' of https://github.com/nchopin/particles into experimental

view details

Nicolas Chopin

commit sha 538209bd868237b230bb5b490ce8e1c99e99963e

Fix #16 (smoothing, backward_sampling_qmc)

view details

Nicolas Chopin

commit sha 713662c7f095f3e92cc3e6fedf693a9f469f203f

mcmc plot

view details

Nicolas Chopin

commit sha e46b3a75a8bcfd16571b447a755bde3a1a62dab6

book/smoothing/offline_smoothing.py: increase number of runs, and N, smal adjustments

view details

push time in a month

push event AdrienCorenflos/particles

AdrienCorenflos

commit sha 5b5484e3f816bc3e888953d40daf66ee0986e9c8

undefined variable

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha c7077853b042da7d6571da2fa6d09b042d42c027

change latex to CVS

view details

AdrienCorenflos

commit sha 4793fe23c4d67c32c768c18f20caf4123a102a24

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 5e2db666ee693e3e7ce6a6159436f2f806b261b4

change latex to CVS

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 66e1a8d426b9ffb2efe1db05bf63c0a4b4e68b9c

optimal proposal example - functional form of locally optimal proposal to be learned

view details

AdrienCorenflos

commit sha 965f4625d460e3df27a235945f14208af675f2dd

script

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 60d904a1f8579f851f200929d98dab7a36a8abcf

add examples

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 8828fa433e3071ec7eb7fe3c863b7f9f6d65b557

scale by dimension

view details

push time in a month

issue comment tensorflow/tensorflow

tf.range + for x,y in dataset issue

Hi,

This is probably not monitored actively, but I am facing the same issue and I managed to hack my way through it:

import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(([1., 2, 3], [4., 5, 6]))

with tf.device('/GPU:0'):
    var = tf.ones([500, 50])

@tf.function
def f():
    with tf.device('/CPU:0'):
        iterator = iter(dataset)
    for e in tf.range(3):
        for x, y in iterator:
            z = tf.multiply(x, y, name='mul')
            tf.print(tf.reduce_sum(z * var))

f()

SSSxCCC

comment created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha aed0c8fdd41877aff9ef48ac892f31bc4bd38c57

adding a batched smooth MLE

view details

push time in a month

issue opened tensorflow/tensorflow

tf.Module breaks gradient registration

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab default

Describe the current behavior: when using operations within a tf.Module's __init__, the gradient is broken.

Describe the expected behavior: the gradient should still be registered properly.

Standalone code to reproduce the issue https://colab.research.google.com/drive/1X3DTc5-E-WadufSHUbVfBnVFEPQ0iOSR?usp=sharing

created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 7cc22fdc41f0262453fea13e0e106211c4b5deaf

finalzing examples

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 7c969eef9ec3af03de5d143698b29372be2ffb79

change seed to center chart

view details

AdrienCorenflos

commit sha 56ad0e1b44990326d07dacf49c282bead6ddad81

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 5e0eabd30eb1b17a192278e323c0325a3dd61945

change seed to center chart

view details

AdrienCorenflos

commit sha 68a25bf8051d679a68e61cb9162b72588e8a5c52

Merge remote-tracking branch 'origin/master' # Conflicts: # scripts/run_smoothness.sh

view details

push time in a month

push event JTT94/filterflow

JTT94

commit sha 9b93283329e177fce06286bf8a20cf7044ce0830

revert to simple starting epsilon

view details

JTT94

commit sha b4c79d81047f9e4ca1898c56e12c7e5a2ff42c0c

try fix sinkhorn diameter

view details

JTT94

commit sha 89f39aaf86f4c67b49eedc7fad516d5b4acd7405

add tqdm

view details

JTT94

commit sha 5836f063105678542dff890681d0378a28cf1a2c

no diameter

view details

JTT94

commit sha ab942bde31d80fce3347e59f0ccfda8afa9f690c

add epsilon tempering

view details

anon284

commit sha 9094cadd03235126c634bb56d472f7d309f51dd4

updatre bash

view details

JTT94

commit sha 723532552fcf2ba2967067dfd276bc8c9d0a0b78

revert to when it was working

view details

JTT94

commit sha 9f2f6665550270dc862be29953c38ef72f044272

revert to when it was working

view details

JTT94

commit sha 2629c67bc6e4d929b7e819f68cc9531ea210a120

add seed to proposal

view details

JTT94

commit sha 13183354600dab5175209a56453c4a6474508b17

add seed to proposal

view details

JTT94

commit sha f1a69244b9313506a01a6456de3b605d03240a51

add seed to proposal

view details

JTT94

commit sha a3f375247788fbf8b9c3d27022be047ac3cb338a

check smoothness script

view details

JTT94

commit sha 625de5b5668e1aafb86689bf65863f6ed38fb04f

check smoothness script

view details

AdrienCorenflos

commit sha 0263f5917eee1e24fb8e0c101c2c1eb017cfaae9

adapt examples a bit

view details

AdrienCorenflos

commit sha bb4a884e9508b6b486e79aa2433fb7274d38c168

put scaling back

view details

AdrienCorenflos

commit sha dfebce6d51c1e5c82a9447adc8d88d56a539d227

add XLA, refactor to use flags

view details

AdrienCorenflos

commit sha 9695f1f0d15f0059859f415824095fcedea91ab2

variational scripts refactoring

view details

AdrienCorenflos

commit sha 50fc30014558830963746064d7ae44ad4faec45d

changing the example a bit

view details

AdrienCorenflos

commit sha f8fcd8e4ab336a5bdb43866f753b162c638ca635

Modifying examples as per Arnaud's comments

view details

AdrienCorenflos

commit sha a5db588666bda73b5997937139499bede0b402e7

Modifying examples as per Arnaud's comments

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha bc1d8617757443e078338a80f5c8c801074d3efd

adding ess profile

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha b9bc8cea1c7ed71a94ce92791bcf3b3df588e0af

Add average ESS stat on the state

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha fe85d7b0456b67c0f0b04ae332cd0ad9f40b78f0

variance adjusted with bigger reg

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 1d7f4ef4b76c4cbd6c4b26503a554cf055bb1546

MLE theta errors table

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 932176c58d064eec365ad0334ac078c47c8ea142

changing mesh size

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha a5db588666bda73b5997937139499bede0b402e7

Modifying examples as per Arnaud's comments

view details

push time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha f8fcd8e4ab336a5bdb43866f753b162c638ca635

Modifying examples as per Arnaud's comments

view details

push time in a month

issue comment tensorflow/tensorflow

Sampling uniforms is slow

Hi,

As an FYI, this behaviour persists when asking for larger samples. The table below gives the log-time (log10 seconds) it took to sample a uniform tensor of a given log-size. For most practical purposes (<1M samples) tensorflow is severely underperforming. Note that this is not improved by decorating the call to random, nor by using stateless sampling. I tried using THREEFRY to see if the overhead was coming from PHILOX, but it raises.

log-size torch tensorflow
0 -5.62487 -3.92051
1 -5.59297 -3.92344
1.30103 -5.57734 -3.92785
1.62325 -5.52077 -3.9329
1.94448 -5.51241 -3.91702
2.26245 -5.43938 -3.92665
2.57864 -5.29286 -3.92046
2.89432 -5.12171 -3.85816
3.21032 -4.90156 -3.8801
3.52621 -4.64741 -3.80525
3.84205 -4.35693 -3.75417
4.15788 -4.04589 -3.64094
4.47368 -3.7245 -3.37539
4.78947 -3.41472 -3.21007
5.10526 -3.08295 -2.8371
5.42105 -2.70882 -2.63069
5.73684 -2.43453 -2.39153
6.05263 -2.13015 -2.09891
6.36842 -1.8209 -1.799
6.68421 -1.44797 -1.45077
7 -1.14071 -1.10838
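For reference, a minimal sketch of how such a table could be produced (hedged: this is not the author's exact benchmark, just an illustration of the measurement, assuming torch and tensorflow are installed):

import timeit
import numpy as np
import tensorflow as tf
import torch

def log_time(fn, number=100):
    fn()  # warm-up call
    return np.log10(timeit.timeit(fn, number=number) / number)

for size in [1, 10, 1_000, 100_000, 10_000_000]:
    t_torch = log_time(lambda: torch.rand(size))
    t_tf = log_time(lambda: tf.random.uniform([size]))
    print(np.log10(size), t_torch, t_tf)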
AdrienCorenflos

comment created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 50fc30014558830963746064d7ae44ad4faec45d

changing the example a bit

view details

push time in a month

started JTT94/filterflow

started time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 9695f1f0d15f0059859f415824095fcedea91ab2

variational scripts refactoring

view details

push time in a month

issue opened tensorflow/tensorflow

Mixing XLA and non-XLA autograph triggers retracing

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): True
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux (Colab; I'm not sure what the OS is)
  • TensorFlow installed from (source or binary): Colab
  • TensorFlow version (use command below): 2.2.0

Describe the current behavior: mixing XLA (experimental_compile) and non-XLA functions results in constant retracing.

Describe the expected behavior: functions inside an XLA function should inherit the option, and XLA functions inside non-XLA ones shouldn't retrace.

This is particularly important when relying on third-party libraries making use of the functionality.

Standalone code to reproduce the issue https://colab.research.google.com/drive/1PrKsKSiKyGjjmX8Itub_BKe-SEihuWMM?usp=sharing
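For readers without access to the notebook, a hedged sketch of the kind of mixing described (assuming TF 2.2's experimental_compile flag; the retracing itself is what the linked Colab demonstrates):

import tensorflow as tf

@tf.function(experimental_compile=True)
def inner(x):
    return x * x  # compiled with XLA

@tf.function
def outer(x):
    return inner(x) + 1.0  # XLA function called from a non-XLA one

for i in range(3):
    outer(tf.constant(float(i)))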

created time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha dfebce6d51c1e5c82a9447adc8d88d56a539d227

add XLA, refactor to use flags

view details

push time in a month

create branch JTT94/filterflow

branch : stateless_sampling

created branch time in a month

create branch JTT94/filterflow

branch : bug-fix

created branch time in a month

push event JTT94/filterflow

AdrienCorenflos

commit sha 11491ada84389111690065d4ab214e7bdf91debf

Revert "Deprecated argument in while_loop" This reverts commit 6410d276

view details

AdrienCorenflos

commit sha 0ccdc723a8f3f80c9715acdc32130c194c05a114

Merge remote-tracking branch 'origin/master'

view details

push time in a month

push event JTT94/filterflow

anon284

commit sha 1364e1cd7d34615fcf607c0d5a66126e748a74eb

plots

view details

anon284

commit sha cae7cf5f154f3dfc0a57420ca34a2f88421b4eb6

add reqs

view details

JTT94

commit sha 4c25319945a2200b01d12bb2d05420669f6337be

Merge branch 'master' into experiments

view details

JTT94

commit sha 2f245e339286387dc6e96ec9ea06e051616771a7

add mle

view details

JTT94

commit sha 289869820424e32ca849ce511d04645986964f1f

add seir in models

view details

JTT94

commit sha e1c8938b13bdceac221a5e5b0699bc29155cac26

save fig on

view details

JTT94

commit sha 16b1a2ff6aefd76be0551ea559981dd02c93570b

save params

view details

AdrienCorenflos

commit sha 40393bf2ba44e60f9cf8660518cc2e761a7d8c2c

savefig with supported name

view details

AdrienCorenflos

commit sha 6c6827ff2f6f5059c5bb8ccfc3a15fcf65946700

Merge remote-tracking branch 'origin/experiments' # Conflicts: # scripts/simple_linear_smoothness.py

view details

AdrienCorenflos

commit sha 5b88be90a7d433b0e8a2b62abe1d17a01732f4f3

scaling the particles to have epsilon mean the same thing no matter the inputs

view details

AdrienCorenflos

commit sha e2369be634be85099873c1555bc3b20ed59f87bd

scaling wrt stddev

view details

AdrienCorenflos

commit sha ea6d209d88231a2e02c53c3f3b11af1750bd2ca2

Merge remote-tracking branch 'origin/master' # Conflicts: # scripts/simple_linear_smoothness.py

view details

push time in a month

issue comment tensorflow/probability

Outdated documentation

FYI, I have tested this on Colab (with the global and local seeds as either the first or second element), but it doesn't work:

import tensorflow as tf
import tensorflow_probability as tfp

dist = tfp.distributions.Normal(0., 1.)

def random_fun(global_seed):
    local_seed = tf.Variable(0)
    @tf.function
    def random_fun_inner(global_seed):
        seed = [local_seed.assign_add(1), global_seed]
        x = dist.sample(seed=seed)
        seed = [local_seed.assign_add(1), global_seed]
        y = dist.sample(seed=seed)
        return x + y
    reset_seed = local_seed.assign(0)
    with tf.control_dependencies([reset_seed]):
        return random_fun_inner(global_seed)

AdrienCorenflos

comment created time in 2 months

issue comment tensorflow/probability

Outdated documentation

To be honest, ideally I would like to be able to pass a random state, à la numpy. Is tfp going to support that interface?

It feels a bit artificial to construct something like this (if I understand what you are suggesting):

def random_fun(global_seed):
    local_seed = tf.Variable(0)
    @tf.function
    def random_fun_inner(global_seed):
        x = dist.sample(seed=(global_seed, local_seed.assign_add(1)))  # assign_add, with balanced parentheses
        y = dist.sample(seed=(global_seed, local_seed.assign_add(1)))
        return x + y
    reset_seed = local_seed.assign(0)
    with tf.control_dependencies([reset_seed]):
        return random_fun_inner(global_seed)

Something like this would feel much more natural as a user interface:

@tf.function
def random_fun(generator):
    x = dist.sample(generator=generator)
    y = dist.sample(generator=generator)
    return x + y

And it would be the job of the generator to increment itself.
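For what it's worth, core TF does ship a stateful generator along these lines (tf.random.Generator, assuming TF 2.x); a minimal sketch, capturing the generator rather than passing it, though tfp distributions don't accept one directly:

import tensorflow as tf

gen = tf.random.Generator.from_seed(42)

@tf.function
def random_fun():
    x = gen.normal([])  # the generator advances its own state between draws
    y = gen.normal([])
    return x + y

print(random_fun())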

AdrienCorenflos

comment created time in 2 months

issue comment tensorflow/probability

Outdated documentation

tf.random_gamma has not been part of the interface for a while; you should replace it with the snippet I provided.

Also, you may want to add an autograph decorator to the example; otherwise it's a bit misleading.

I am aware of the random seed thingy; it's actually hurting me badly (the tf.random.set_seed operation is super slow and scales linearly with the number of operations in the graph). I was looking at your stuff to try and hack my way around the issue by feeding the seed as a variable, but it's not working (gen_random_ops needs a genuine int as a seed and won't take a tensor). That's how I came across this outdated doc.

AdrienCorenflos

comment created time in 2 months

issue opened tensorflow/probability

Outdated documentation

Hi,

This doc page is outdated: https://www.tensorflow.org/probability/api_docs/python/tfp/util/SeedStream. By the way, after correcting the code in your example, it does not return 0.5 all the time in eager mode.

def broken_beta(shape, alpha, beta, seed):
  x = tf.random.gamma(shape, alpha, seed=seed)
  y = tf.random.gamma(shape, beta, seed=seed)
  return x / (x + y)
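For contrast, a hedged sketch of the SeedStream-based fix the doc page is meant to describe: draw a fresh seed per sampling call instead of reusing one.

import tensorflow as tf
import tensorflow_probability as tfp

def fixed_beta(shape, alpha, beta, seed):
    stream = tfp.util.SeedStream(seed, salt='beta')
    x = tf.random.gamma(shape, alpha, seed=stream())  # each stream() call yields a new seed
    y = tf.random.gamma(shape, beta, seed=stream())
    return x / (x + y)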

created time in 2 months

issue comment google/edward2

Cannot pass a random variable to TransformedDistribution

Ah yes, that sounds tricky if you don't want to break the open-closed principle.

Also, even doing it on a case-by-case basis may prove complicated. For example, someone coded a distribution by subclassing TransformedDistribution instead of composing it (I might send a PR for that one...), so you wouldn't be able to check the bases of the classes nor their attributes either...

AdrienCorenflos

comment created time in 2 months

pull request comment nchopin/particles

A few bug fixing

I squashed the commit history; it should be more readable now.

AdrienCorenflos

comment created time in 2 months
