dendisuhubdy/ccw1_trainstats 4

Theano and PyTorch - Training Statistics and Graph Visualization

mila-iqia/COVI-AgentSim 4

Covid-19 spread simulator with human mobility and intervention modeling.

mila-iqia/COVI-ML 2

Risk model training code for Covid-19 tracing application.

madarez/Asynchronous-Programming-in-Python 1

Cheatsheet for Asynchronous Programming in Python delivered by Deven Parekh

deekshaarya4/Post_training 0

Reproducible figures for "Post Training in Deep Learning"

HugoCote/Assignment-1-Part-2 0

For the course IFT6135

HugoCote/Assignment-1-Part-3 0

For the course IFT6135

joedsilva/AIDA 0

Source code repository for AIDA

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha 769e97403888ad90c201ca9ef554f3cdbe9ccc67

add missing imports

Rez GodarzvandChegini

commit sha d4a89ec9aa795584d5383632aac0218a5b5ebda3

follow upstream's removal of Event and log_static_info

Rez GodarzvandChegini

commit sha da91dea3705c76a147af9e6dd0bc53d840e635a2

re-associate city attributes to district

Rez GodarzvandChegini

commit sha 9ef369d097d0056618b9461629559e9fc76cbd87

add missing next_activity to track_mobility

Rez GodarzvandChegini

commit sha 7a99c49dd5591bf0b0a200e2311f459a84ba5902

allow having human_id and human's name separately

Rez GodarzvandChegini

commit sha 698110a2aa1acecd36487ec1f5cb1076bf1e71c8

implement the logic for detecting if location is in current district

Rez GodarzvandChegini

commit sha 87a09afc56138141c19a1ea5f2df2481114f43d7

make a universal location-to-district mapper and rewrite the city splitter to leverage this method to associate each location with its district

Rez GodarzvandChegini

commit sha aa6bdefabcdd7ca46c7cdca526d79ce4d92d1c86

creating static incrementing id for classes

Rez GodarzvandChegini

commit sha 58fc01a19056253927fe485b272789cfc1f270ab

re-associate city attributes to district

Rez GodarzvandChegini

commit sha 14be7b4d89492d8fdc74e9d1137c45066fd26acc

call simpy's step() directly instead of super

Rez GodarzvandChegini

commit sha 5f0d2024a14ff36bc8cc0eb5b9e6c651b90e0ae8

passing integer based keys as sorted map's keys

Rez GodarzvandChegini

commit sha de1da27b5fd99095ea7cb6a1c420037c114f29e8

add missing sleep import

Rez GodarzvandChegini

commit sha ae21d1398db41fac9101c5332ced8e627ce4b7e6

reimplement temp dir for backed file to avoid multiple processes removing the same location and throwing errors

Rez GodarzvandChegini

commit sha 4c83e01fedf3e6007892877aef720e5e45336241

substitute cur's syntax with txn's

Rez GodarzvandChegini

commit sha 2a504af76909bf1a21c39623605cb859e6ba9e47

add missing Location import for typing

Rez GodarzvandChegini

commit sha 84253552825c53b5f1aeb6400dad81c8bd4a8790

add missing district_queues for queuing humans to districts; also allow no inbound humans to be queued for the district

Rez GodarzvandChegini

commit sha 936e849929fdc6334bcc5c5b0a2f951d0ac58d8d

reimplement simpy's timeout with a relative wait rather than an absolute time

Rez GodarzvandChegini

commit sha 9306d3dd810a7db86910d9b42c503408a37ef1d5

allow nullable covid spread or intervention start time

Rez GodarzvandChegini

commit sha 66935029fd146a14c4e40cd5b9dbd9313f830741

fixed unused district argument on human.run

push time in 15 hours

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha 3da3ad9e36bed31e9d9f326c7f4d70fb54291011

remove city's transformer engine by default

push time in 8 days

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha f9fd8e3525a65661a9f943bd106b6bfad4cd8880

Revert multiprocessing config updates and add them to core.yaml. This reverts yaml updates of commit 2886f39df1939c4cacf5ee2dd84f40bce1dc62a9.

push time in 8 days

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha 59efb5f25e77d5981f87ac8474333f3f7b133ce6

comment out collection_server code parts

push time in 8 days

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha bc68da79a3304ad0e47fd9efc410040e19cb778e

comment out DummyMemManager and InferenceClient initialization

push time in 8 days

push event mila-iqia/COVI-AgentSim

Rez GodarzvandChegini

commit sha f52565b2591f1e539d0522e006a1a355cb6d6eed

merged updated upstream and stashed changes

push time in 8 days

push event mila-iqia/COVI-AgentSim

Akshay Patel

commit sha 7e65d62de71126c979a1454b06bab163ca125946

Merge remote-tracking branch 'origin' into akptl

Akshay Patel

commit sha 864c59ee3a6278c73fca4e603bbd764f2f029dfa

plotting again

Akshay Patel

commit sha 7fccf8a9622a2ec454d7bc74783fdbbd8fc0a869

data

Martin Weiss

commit sha ccf62f4361b084e564f1789f4e9d80eca8bc3015

improved comparison

Akshay Patel

commit sha be53c8de0f65027750fd0252c72cf8abf9ce4aa8

Merge pull request #72 from mila-iqia/validation: improved comparison

Martin Weiss

commit sha a81eff73f2419374f64f85863c54a8e3135d8ab6

Merge branch 'akptl' of https://github.com/mila-iqia/covi-simulator into akptl

Akshay Patel

commit sha 3392b29bfb34f8012d602f1a75c3f25346012c55

plotting quebec and multiple sims

Akshay Patel

commit sha d601eecfbe84ab203652c0a843fff2c07466feae

comments

Akshay Patel

commit sha d209425b5179ba463229a3784b53acfb6e7ace6b

hospital usage

Martin Weiss

commit sha df5592c0e3d8059e1ec22ced590fbcab5bb64a6c

Merge branch 'akptl' of https://github.com/mila-iqia/covi-simulator into akptl

Akshay Patel

commit sha 7f7fe7c25b8d343b0b429ab5e36f1c944159805c

hospital utilization changes

Akshay Patel

commit sha 96915a4a7aa3cc9a1c14aabff56c34218b521d93

hospital n_patients

Akshay Patel

commit sha 50da9cdba4181d625f4743ae232dcfdd2efb1d5c

plotting

Martin Weiss

commit sha 16055e3ccfe0db821e0a30fa7709c8804ddcb813

Merge branch 'akptl' of https://github.com/mila-iqia/covi-simulator into akptl

nrahaman

commit sha 4862b36e97fafed487aa78d3559d22b84a812f3b

- config for iter2 dataset printing

Martin Weiss

commit sha 4b97558b00a2565aca7d06484cc9db225f927267

progress on validation

Akshay Patel

commit sha 57707bdb6a5d2d054ae78219333a2df48c62d763

plotting updates continuous error with smoothing

Akshay Patel

commit sha b0cf9234ad96d3af0aba863248d2ca61a1032588

plotting again

Akshay Patel

commit sha ad6857daac90acd6d6e8b17ba90eabaa4f533a33

data

Martin Weiss

commit sha bff944b95e18a78d82e231c122ac5b31bc227a35

improved comparison

push time in 8 days

push event madarez/madarez.github.io

madarez

commit sha 4894d2378d7f2d48537d9b0771facd6a9d10e876

Sourced from Cayman template

push time in 9 days

push event madarez/madarez.github.io

madarez

commit sha fe21817472c0893335d86b25fde601f432969dfb

Set theme jekyll-theme-cayman

push time in 9 days

issue opened jnwatson/py-lmdb

Documentation Enhancement Suggestion for the overwrite flag of transaction's put

Affected Operating Systems

  • MacOS

Affected py-lmdb Version

1.0.0

py-lmdb Installation Method

pip install lmdb

Distribution name and LMDB library version

(0, 9, 24)

Describe Your Problem

Reading the documentation on the transaction's put, for a db opened with dupsort=True and two puts with the same key, say k:v1 and k:v2, I got the impression that the mapped value(s) would change according to the following table:

                overwrite=True   overwrite=False
dupdata=True    v1,v2            v1
dupdata=False   v2               v1

However, I am empirically getting the following behaviour:

                overwrite=True   overwrite=False
dupdata=True    v1,v2            v1
dupdata=False   v1,v2            v1

import tempfile
import lmdb
import struct

over=True
dup=False

fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=1)
db = env.open_db(b'scratch', dupsort=True)
pack_item = lambda x: x.to_bytes(4, 'big')

def append(key, value):
  with env.begin(db=db, write=True) as txn:
    print(txn.put(key=pack_item(key), value=pack_item(value), dupdata=dup, overwrite=over, db=db))
    print(txn.id(), txn.stat(db))

append(8,1)
append(8,2)

with env.begin(db=db, write=True) as txn:
  print(txn.id(), txn.stat(db)) # shows two stored entries
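As a side check (my addition, reusing env, db and pack_item from the snippet above), the duplicates stored under key 8 can be listed with a cursor:

with env.begin(db=db) as txn:
  cur = txn.cursor(db)
  if cur.set_key(pack_item(8)):  # position the cursor on key 8, if present
    stored = [int.from_bytes(bytes(v), 'big') for v in cur.iternext_dup()]
    print(stored)  # prints every kept duplicate, e.g. [1, 2] when both values are stored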

Describe What You Expected To Happen

I was expecting the documentation to describe the behaviour of overwrite=True combined with dupdata=False as a separate case under the description of the overwrite flag of the transaction's put.

created time in a month

issue comment jnwatson/py-lmdb

dupfixed affecting append to go through in non-dupsort dbs

Thank you for your quick reply, @jnwatson. I neglected to acknowledge that I understood that defining pack_item with 'big' endianness makes the put return True. What I was trying to communicate is that, regardless of the order of the put-appends, if pack_item is:

  1. x.to_bytes(4, 'little'): the second put-append always returns False.
  2. x.to_bytes(4, 'big'): the second put-append always returns True.

I expected the lexicographical order of the bytes objects for 7 and 8 to make one of the two larger and the other smaller, so the sorted-keys property should not have been violated in at least one of the two runs above. Strangely, I'm seeing that either order succeeds with 'big' endian, while no order is valid for 'little' endian. I hope I'm not missing something trivial here; if so, I apologize in advance.
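To make the byte-order point concrete, here is a standalone comparison sketch (my addition, no LMDB involved) showing what a lexicographic comparison of the packed keys sees:

seven_big, eight_big = (7).to_bytes(4, 'big'), (8).to_bytes(4, 'big')
seven_little, eight_little = (7).to_bytes(4, 'little'), (8).to_bytes(4, 'little')
print(seven_big < eight_big)        # True: b'\x00\x00\x00\x07' < b'\x00\x00\x00\x08'
print(seven_little < eight_little)  # also True for 7 vs 8, since only the low byte differs
print((256).to_bytes(4, 'little') < (255).to_bytes(4, 'little'))  # True, even though 256 > 255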

P.S.: Just to further clarify: I'm not really trying to append in the sense of extending the values of an existing key. (Side note: I am interested in using the append property on dupkeys later on, but I suppose that's not a problem, as dupsort keys put their values into their own sub-db; I just need to get the b'scratch' db's key order right. I understand that's a different discussion.)

madarez

comment created time in a month

issue comment jnwatson/py-lmdb

dupfixed affecting append to go through in non-dupsort dbs

@jnwatson, I tried all the permutations of appending the key-value pairs above. In fact, a much simpler experiment fails:

import tempfile
import lmdb
import struct
fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=3)
db = env.open_db(b'scratch', integerkey=True, integerdup=True, dupfixed=True) # here

def append(key, value):
  pack_item = lambda x: struct.pack('I', x)
  with env.begin(db=db, write=True) as txn:
    print(txn.id(), txn.stat(db))
    return txn.put(key=pack_item(txn.id()), value=pack_item(value), append=True, db=db)

append(8,88) # returns True
append(7,77) # returns False

This seems to set an unpredictable order between 7 and 8 as integer keys.

I understand that when integerkey is False, the sorting is lexicographical. But I fail to see how I can make my incremental integer keys qualify as appendable, increasing keys in their byte representation. I see that the upstream LMDB docs indicate:

MDB_INTEGERKEY Keys are binary integers in native byte order, either unsigned int or size_t, and will be sorted as such. The keys must all be of the same size

I tried to reconstruct how the order is determined via lambda x: bin(int(struct.pack('>I', x).hex(), 16))[2:]. This passes another test I've been working with, but it obviously can't explain comparisons like the one above, and I haven't tested it extensively. So I was hoping you could give me an implementation pointer I could follow to get the integerkey ordering right.

If worst comes to worst, I can move away from integer keys to regular ones to avoid these ordering issues. Strangely though, when I remove the integerkey flag, the experiments above pass however I run them, with no put returning False.
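In case it helps, this is the fallback I have in mind; it is only a sketch under my own assumptions rather than a confirmed fix: skip integerkey entirely and pack the keys big-endian, so that lexicographic byte order matches numeric order and append=True stays valid for monotonically increasing keys.

import tempfile
import lmdb
import struct
fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=1)
db = env.open_db(b'scratch')               # plain lexicographic key order, no integerkey
pack_key = lambda x: struct.pack('>I', x)  # big-endian: byte order matches numeric order

with env.begin(db=db, write=True) as txn:
  print(txn.put(pack_key(7), struct.pack('I', 77), append=True, db=db))  # expected True
  print(txn.put(pack_key(8), struct.pack('I', 88), append=True, db=db))  # expected True

fp.cleanup()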

madarez

comment created time in a month

push event rllabmcgill/final-project-rez_godarzvandchegini6

madarez

commit sha 59690153fee8987e8748bfa31a8ec4de66acd637

Update README.md

push time in a month

started lorenzodifuccia/safaribooks

started time in 2 months

issue opened jnwatson/py-lmdb

'integerkey' switch in main open_db doesn't resolve sub-db names

Affected Operating Systems

  • MacOS

Affected py-lmdb Version

  • 1.0.0

py-lmdb Installation Method

  • pip install lmdb

Using bundled or distribution-provided LMDB library?

  • Bundled

Distribution name and LMDB library version

  • (0, 9, 24)

Machine "free -m" output

Memory: 5.47G of 16.38G free

vm_stat
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free:                                3610.
Pages active:                           1180342.
Pages inactive:                         1156361.
Pages speculative:                        23751.
Pages throttled:                              0.
Pages wired down:                       1006480.
Pages purgeable:                         109014.
"Translation faults":                1076335062.
Pages copy-on-write:                   41865122.
Pages zero filled:                    560193302.
Pages reactivated:                     75828700.
Pages purged:                          79323676.
File-backed pages:                       244541.
Anonymous pages:                        2115913.
Pages stored in compressor:             8871248.
Pages occupied by compressor:            823393.
Decompressions:                       118306660.
Compressions:                         156423612.
Pageins:                               42526337.
Pageouts:                                669951.
Swapins:                              160882156.
Swapouts:                             164856494.

Other important machine info

N/A

Describe Your Problem

I tried to define integer-based keys for the main db in order to use integer-based names for the sub-dbs. LMDB sometimes works without any issues, and it sometimes runs into segmentation faults. It's as if there is some information about integer-based keys used as sub-db names that I cannot locate in the docs.

Errors/exceptions Encountered

Take the following snippet for example:

import tempfile
import lmdb
import struct
fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=3)
pack_uint = lambda x: struct.pack('I', x)
db_name = pack_uint(7)
main_db = env.open_db(None , integerkey=True) # env.open_db(None)
db = env.open_db(db_name, integerkey=True)

with env.begin(db=db, write=True) as txn:
    txn.put(key=pack_uint(8), value=pack_uint(9), append=True, db=db)
    txn.put(key=pack_uint(88), value=pack_uint(99), append=True, db=db)

fp.cleanup()

Running the above snippet in a loop causes some of the Python runs to fail, e.g.:

for i in {1..10}; do python3 -q -X faulthandler above_snippet.py; done
Fatal Python error: Segmentation fault

Current thread 0x000000010b6f95c0 (most recent call first):
  File "kk.py", line 13 in <module>
Segmentation fault: 11
Fatal Python error: Segmentation fault

Current thread 0x000000010f7a85c0 (most recent call first):
  File "kk.py", line 13 in <module>
Segmentation fault: 11

In the case above, the snippet failed only twice out of the five runs. I also noticed that all the failures happen on the second append, while the first one always returns True.

Let's take a similar example:

import tempfile
import lmdb
import struct
fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=6)
pack_item = lambda x: struct.pack('I', x)
db_name = pack_item(7)
db = env.open_db(db_name, integerkey=True)

def append(value):
  with env.begin(db=db, write=True) as txn:
    return txn.put(key=pack_item(txn.id()), value=pack_item(value), append=True, db=db)

assert append(8) == True
assert append(9) == True

main_db = env.open_db(None , integerkey=True) # env.open_db(None)
with env.begin(db=main_db, buffers=True) as txn:
  assert txn.stat(main_db)['entries'] == 1

db = env.open_db(db_name, integerkey=True)
with env.begin(db=db, buffers=True) as txn:
  assert txn.stat(db)['entries'] == 2

fp.cleanup()

In this snippet, I don't encounter a segmentation fault, but the second assert fails to see the appended 8. On inspection, in the failed cases txn.stat(db) is empty and its 'entries' count is actually zero. This time around, the case fails regardless of the second append in the second snippet above.

Now, if you change that line to main_db = env.open_db(None) in either snippet, there is no problem anymore. That's what led me to believe this has something to do with the integerkey switch.
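For reference, the only change I make to avoid the problem in both snippets is that one open call (whether this is the intended usage, I don't know):

main_db = env.open_db(None)  # no integerkey on the unnamed main db; the integer-named sub-db then resolves fine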

Describe What You Expected To Happen

I expected both snippets to succeed consistently across multiple runs.

Describe What Happened Instead

The Python process crashed with Segmentation fault: 11 in the first snippet, and the second assertion failed in the second example.

created time in 2 months

issue opened jnwatson/py-lmdb

dupfixed affecting append to go through in non-dupsort dbs

Affected Operating Systems

  • MacOS

Affected py-lmdb Version

1.0.0 and 0.98; please note that I don't believe this is related to #242, as the following case fails for lmdb==0.98 as well.

py-lmdb Installation Method

pip install lmdb

Distribution name and LMDB library version

(0, 9, 24)

Describe Your Problem

I have a db that does not use duplicate keys. When I run the following script, the last two puts do not go through. However, when I remove the dupfixed=True flag from the db definition, all of the puts go through.

import tempfile
import lmdb
import struct
fp = tempfile.TemporaryDirectory()
env = lmdb.Environment(path=fp.name, max_dbs=3)
db = env.open_db(b'scratch', integerkey=True, integerdup=True, dupfixed=True) # here

def append(key, value):
  pack_item = lambda x: struct.pack('I', x)
  with env.begin(db=db, write=True) as txn:
    print(txn.id(), txn.stat(db))
    return txn.put(key=pack_item(txn.id()), value=pack_item(value), append=True, db=db)

append(7,77) # returns True
append(8,88) # returns False
append(9,99) # returns False

Describe What You Expected To Happen

I expected the db to ignore the dup options since it is not a dupsort db, similar to how setting integerdup or not doesn't affect the operation of a non-dupsort db. It's as if passing dupfixed=True enables and disables the append flag.
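For comparison, the variant of the open call without the dupfixed flag, which (as described above) lets all three puts go through:

db = env.open_db(b'scratch', integerkey=True, integerdup=True)  # no dupfixed=True: every append-put returns True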

I know it's not the way a user is expected to use the db, but I just thought I should report it as a minor bug.

created time in 2 months
