Fei Hu (feihugis) · Microsoft · California, USA

CODAIT/deep-histopath 99

A deep learning approach to predicting breast tumor proliferation scores for the TUPAC16 challenge

CODAIT/graph_def_editor 15

GraphDef Editor: A port of the TensorFlow contrib.graph_editor package that operates over serialized graphs

feihugis/CF-2 1

Group Repo

feihugis/academicpages.github.io 0

Github Pages template for academic personal websites, forked from mmistakes/minimal-mistakes

feihugis/addons 0

Useful extra functionality for TensorFlow 2.0 maintained by SIG-addons

feihugis/awesome-deep-vision 0

A curated list of deep learning resources for computer vision

feihugis/awesome-nlp 0

:book: A curated list of resources dedicated to Natural Language Processing (NLP)

fork feihugis/mesh

Mesh TensorFlow: Model Parallelism Made Easier

fork in 8 days

started tensorflow/mesh

started time in 8 days

push event feihugis/tensorflow

Chen Lei

commit sha 13d59c2d1fccecc6343965cef89464229d00db21

Update stale.yml

view details

Tare Gaskin

commit sha e0b43845d711e9dc520b9b6716ff89c3b4cd631f

Sign compare warning fixes batch 2

view details

Andrew Audibert

commit sha 6166444602c0ddab5e5e7ff129113341a99bd98c

[tf.data] Add prefetch benchmark. This CL adds a benchmark to test the performance of the prefetch dataset transformation. PiperOrigin-RevId: 316605096 Change-Id: Iaf5d8f8c3afba6e51a53805afe5bc978916ff01e

view details

Tare Gaskin

commit sha 52e1dba6b14da82ddd30344526e13557cf33cc32

getting rid of pesky lingering commits

view details

A. Unique TensorFlower

commit sha 7ee9571a8e127b39a4a8a01016e40105b7613bbf

Integrate LLVM at https://github.com/llvm/llvm-project/commit/1a7f115dce22 PiperOrigin-RevId: 316608084 Change-Id: I6df721c782e850371267e6a2fba7eda96d3b1610

view details

A. Unique TensorFlower

commit sha e74010b4e803f5f47f6c013a2932172e4beae72a

Clarify the documentation for PermuteDimensions PiperOrigin-RevId: 316616603 Change-Id: Iccfbd986276688bdc25b6757cdee6806f0b587d6

view details

A. Unique TensorFlower

commit sha 42a734170dae2942fcf553ccf5480fd48840795a

tf.numpy: Change a bunch of ops to handle unknown shapes. Fix logic in sort_ops to handle 1D values. PiperOrigin-RevId: 316620637 Change-Id: Iedc2ba8aad7673bbe210661bb741bf0660f047aa

view details

Mehdi Amini

commit sha 3fa71a90282e984050d25f4ea61e4bbe8b75bd12

Bump open source llvm revision to 1a7f115dce22b2c09fdd4f7f79d24da5de6eaef8 PiperOrigin-RevId: 316622447 Change-Id: I4f0e46be7b6a3e2624034cab02e06a1f8e86a1a0

view details

A. Unique TensorFlower

commit sha b03a4dbbd6a679d02437708e210f115525ed20f1

tf.numpy: Add module comments. PiperOrigin-RevId: 316626561 Change-Id: I2a750832ccb5461f01c5bf0a948bc15190f4fd39

view details

Anjali Sridhar

commit sha 5107743c47cff6980ebd68d61931bc8b5c3c6a87

Add VariablePolicy field to the DistributedVariable class as part of an internal refactor that will allow us to attach different policies to a DistributedVariable. PiperOrigin-RevId: 316629999 Change-Id: I20160480b0678657198112adaa61ad7a47823cbd

view details

Thai Nguyen

commit sha e887f933e4fd4965742944668f94255e9a603533

Add the missing versions to runtime version This cl also add a new unit test to ensure new version got reflected in runtime version. PiperOrigin-RevId: 316630739 Change-Id: I139ecf3077b2eec9bcc13ea5bb9030199d29203a

view details

Thai Nguyen

commit sha ec0e105c6fe537969a736ddb546c277ae18b9282

Fix build failure of list_flex_ops_main in OSS The cc_binary required --config=monolithic which can't be passed into a native.genrule. Using tf_cc_binary solves the build failure. PiperOrigin-RevId: 316631689 Change-Id: Ia706d532578ccbf5bc8f172f6344f166d05531fb

view details

Jing Pu

commit sha d43a0150f891b938dfa4247744e4d18e2e696e06

Add a pattern to legalize hlo.reduce to tf.Min. PiperOrigin-RevId: 316632939 Change-Id: I7fbc90c1a75e5cc8bafb6b87475284de6ebe91a7

view details

YoungSeok Yoon

commit sha 44db81e3241f98e61d386aeb8b1ee0dee33e04b6

Fix broken hyperlinks in guide docs PiperOrigin-RevId: 316635046 Change-Id: I604b94075e2e10520bbfb2885089e534f3a649cd

view details

Terry Heo

commit sha aa99cf218c8bf13aeb15e64ec4c62ea14ecb5753

Enable flex delegate on tensorflow.lite.Interpreter Python package Usually, flex delegate is enabled by symbol override of AcquireFlexDelegate() function. But this approach doesn't work well with shared library. Since pywrap_tensorflow_internal.so is available for tensorflow PIP, I've made the following changes to enable flex delegate. - Included flex delegate module to the pywrap_tensorflow_internal.so. This file already contains most TF internal logic and having TFLite flex delegate impacts about 72K to the output. - Added new function of TF_AcquireFlexDelegate() in the delegate module. - Updated logic in AcquireFlexDelegate() of interpreter_builder.cc to check the availability of pywrap_tensorflow_internal.so and lookup the TF_AcquireFlexDelegate() symbol to enable flex delegate. Also updated python/lite_flex_test.py since flex delegate is supported with Python API PiperOrigin-RevId: 316636275 Change-Id: I13a3246f27860ac0551fb04d81a84d4e82997ebc

view details

Pavithra Vijay

commit sha 8950c470bb11a9b94c0dd08d73156008dfac60c9

Remove automatic control dep wrapping from layers in v2. PiperOrigin-RevId: 316638920 Change-Id: Iad14b1a4b0b14052f34784401b375a14b49a7641

view details

A. Unique TensorFlower

commit sha e2b5397f126ba9cbc76a840ea0a46331e0f10897

Update GraphDef version to 434. PiperOrigin-RevId: 316639748 Change-Id: I2f62575a1ffdf72dbbafd5a2d6a10ae2a64d4b7c

view details

A. Unique TensorFlower

commit sha d2cba310e80fc545cb0f8075d32335c170d547f2

compat: Update forward compatibility horizon to 2020-06-16 PiperOrigin-RevId: 316639760 Change-Id: I5bfbc17f255457595771a2a4636abd59ee03feb1

view details

Yong Tang

commit sha 83f19c6a9e84fc6971ad0a7df5874603237a595f

Fix unknown output shape issue in autograph for tf.equal This PR tries to address the issue raised in 40471 where the output shape of an autograph consisting of tf.equal could not be inferred correctly. Specifically `x.shape == [None, 10, 1]` and `y.shape == [None, 1, 4]` only yield `shape == None` (should be `shape == [None, 10, 4]`). The reason was that the shape inference function for equal didn't capture the cases where both x and y's dims are None. This PR fixes the issue. This PR fixes 40471. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

view details
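The shape rule this commit fixes can be illustrated with a small Python sketch of broadcast shape inference over partially known shapes, where `None` marks an unknown dimension (a simplification for illustration, not TensorFlow's actual C++ shape function):

```python
def broadcast_dim(a, b):
    """Merge two dims under broadcasting, where None means unknown."""
    if a == 1:
        return b
    if b == 1:
        return a
    if a is None:
        return b  # b != 1 here, so if the shapes are compatible the result equals b
    if b is None:
        return a
    if a != b:
        raise ValueError(f"incompatible dims {a} and {b}")
    return a

def broadcast_shape(x, y):
    # Right-align the shapes, pad the shorter one with 1s, then merge dim by dim.
    n = max(len(x), len(y))
    x = [1] * (n - len(x)) + list(x)
    y = [1] * (n - len(y)) + list(y)
    return [broadcast_dim(a, b) for a, b in zip(x, y)]
```

With this rule, `[None, 10, 1]` against `[None, 1, 4]` yields `[None, 10, 4]` rather than a fully unknown shape, which is exactly the case from the commit message.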

A. Unique TensorFlower

commit sha d52f3465f56882ad169759a942448843d1b4b589

Remove automatic control dep wrapping from layers in v2. PiperOrigin-RevId: 316652071 Change-Id: I90d3568fa727c8370de1f20e35742efbd9d615ac

view details

push time in 8 days

issue comment tensorflow/runtime

Build on MacOS

Received some emails about how to make the build work on MacOS. Here are the changes I made: https://github.com/feihugis/runtime/commit/bd1a3a4aef141145c9093f795635b4d6e88d3539. Hope it helps macOS users. I would be glad to submit a PR to make the build work on macOS if needed.

feihugis

comment created time in 9 days

started google/sentencepiece

started time in 10 days

fork feihugis/fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

fork in 11 days

started pytorch/fairseq

started time in 11 days

fork feihugis/text-to-text-transfer-transformer

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"

https://arxiv.org/abs/1910.10683

fork in 11 days

started google-research/text-to-text-transfer-transformer

started time in 11 days

started microsoft/DirectML

started time in 12 days

started NVIDIA/spark-rapids

started time in 13 days

push event feihugis/dgl

Chao Ma

commit sha f8ae6350b1af26951f10026b80abbddde664cd9c

[DGL-KE] Add METIS preprocessing pipeline (#1365) * update metis * update * update dataloader * update dataloader * new script * update * update * update * update * update * update * update * update dataloader * update * update * update * update * update

view details

Chao Ma

commit sha 635dfb4a591c23adda9019cb79b081773feb558b

[DGL-KE] Add license to every file header (#1368) * update metis * update * update dataloader * update dataloader * new script * update * update * update * update * update * update * update * update dataloader * update * update * update * update * update * update * update * Add license to every filer header

view details

Quan (Andy) Gan

commit sha 0a51dc5435bc49075605bffbaa20ca2eaec7782c

[Bug] Fix dsttype in GraphSAGE minibatch model (#1371) * fix for new ntype API for blocks * adding two new interfaces

view details

Quan (Andy) Gan

commit sha 4af02022b21d80a70defc21c947f5883e590403c

[Bug] Multiple fixes (#1374) * multiple fixes * lint * lint x2

view details

xiang song(charlie.song)

commit sha 1b9bc16b1a57c6e7957c3fd74d89d1b206fc5bb8

[KG]Update README and config (#1375) * Update README and config * upd Co-authored-by: Ubuntu <ubuntu@ip-172-31-60-78.ec2.internal>

view details

Mufei Li

commit sha 0f40c6e49c48834d8bebfaaaeea7a489f1ae9899

[Hetero] Replace card with num_nodes Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

view details

Minjie Wang

commit sha 2cdc4d3c1d17e1ac8be1c44ca54d772084c76f18

[Doc] Patch tutorial (#1380) * patched 1_first * done 2_basics * done 4_batch * done 1_gcn, 9_gat, 2_capsule * 4_rgcn.py * revert * more fix

view details

Minjie Wang

commit sha 856de790e86aa1e02ffdea599de2473883e7bb89

Update README Update README with both latest and stable doc links

view details

Da Zheng

commit sha 10253a5c0764f8f9c767ae00fbb362ff3cb21e0f

[BUGFIX] Fix is_multigraph in the construction from scipy coo matrix (#1357) * fix is_multigraph in from_coo. * add tests for partition. * fix. * Revert "add tests for partition." This reverts commit cb8c8555da3e0c70a482c2d639adce2943475bfc. * fix everywhere from_scipy_sparse_matrix is used.

view details

Da Zheng

commit sha a0721405cf4d069c31c00382bc6361c7fc2933d2

[BUGFIX] don’t import dgl in the package. (#1382) * fix dgl data. * remove more. * fix. * fix. Co-authored-by: Ubuntu <ubuntu@ip-172-31-16-150.us-west-2.compute.internal>

view details

Quan (Andy) Gan

commit sha d27b4859127f4a45afd52fea70b1b5bc60d1affd

[BUG] Fix remove_edges crashing with empty edge ID tensors (#1384)

view details

Quan (Andy) Gan

commit sha b9c65e91ef78b6adcc50250b64dcd91b459d67f6

[BUG] Another fix on remove edges when all edges are removed (#1386) * [BUG] Another fix on remove edges when all edges are removed * fix mxnet

view details

Adam J. Stewart

commit sha 08fcda325b18bf8abe232885feff88044b85b306

[Doc] Fix link to backends docs (#1376) Co-authored-by: Tong He <hetong007@gmail.com>

view details

Jinjing Zhou

commit sha 1e27e90e4e542dd566a4e1461583faa09d482545

[MXNet] Patch mxnet unittest (#1395) * patch * turn off tf test * skip test * fix

view details

Jinjing Zhou

commit sha ebda932d2f0a3d55f1127410b695ba1245bcd277

[CI] Tensorflow ci (#1397) * fix * fix

view details

Quan (Andy) Gan

commit sha bbfff8ce76a74f101787253ff3cb085eeb4d38ac

[Feature] Casting between DGLGraph and DGLHeteroGraph (#1391) * [Feature] Casting between DGLGraph and DGLHeteroGraph * lint * address comments Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com> Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com>

view details

Minjie Wang

commit sha 3efb5d8ecf7d748655e2199d120a40888ece2282

[NN] Add HeteroGraphConv module for cleaner module definition (#1385) * Add HeteroGraphConv * add custom aggregator; some docstring * debugging * rm print * fix some acc bugs * fix initialization problem in weight basis * passed tests * lint * fix graphconv flag; add error message * add mxnet heteroconv * more fix for mx * lint * fix torch cuda test * fix mx test_nn * add exhaust test for graphconv * add tf heteroconv * fix comment

view details

xiang song(charlie.song)

commit sha 67cb7a43a064d2d2017d793660e426ac9b37013e

[Feature] Deprecate multigraph (#1389) * Deprecate multi-graph * Handle heterograph and edge_ids * lint * Fix * Remove multigraph in C++ end * Fix lint * Add some test and fix something * Fix * Fix * upd * Fix some test case * Fix * Fix Co-authored-by: Ubuntu <ubuntu@ip-172-31-51-214.ec2.internal> Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

view details

Zihao Ye

commit sha af61e2fbb45c53df3ff2a91f81a2026a8f66d90d

[Feature] Support nn modules for bipartite graphs. (#1392) * init gat * fix * gin * 7 nn modules * rename & lint * upd * upd * fix lint * upd test * upd * lint * shape check * upd * lint * address comments * update tensorflow Co-authored-by: Quan Gan <coin2028@hotmail.com> Co-authored-by: Jinjing Zhou <VoVAllen@users.noreply.github.com> Co-authored-by: Minjie Wang <wmjlyjemaine@gmail.com>

view details

Quan (Andy) Gan

commit sha e3a9a6bba873fe2cac4d96ec4b72f58ca8223479

add an optional include_dst_in_src argument (#1401)

view details

push time in 13 days

started devendrachaplot/Neural-SLAM

started time in 16 days

push event feihugis/tensorflow_notes

Fei Hu

commit sha d4611faa726f558f576adc19bc9953fc779cd1de

Add more notes

view details

push time in 17 days

started gperftools/gperftools

started time in 20 days

push event feihugis/runtime

TFRT team

commit sha 3189f4e5ce9e06dcb8be30787331be8f75e6d50f

Integrate LLVM at https://github.com/llvm/llvm-project/commit/2e499eee5884 PiperOrigin-RevId: 312297705

view details

Rachel Lim

commit sha 7a1f80c94c7e898aec576bb264cf5241757bb9c0

[TFRT + tf.data] Update BatchDataset to override GetNextUntyped() instead of GetNext(). This also updates the implementation to decouple the metadata derivation, tensor allocation, and copying for each component of the output. PiperOrigin-RevId: 312338922

view details

Rachel Lim

commit sha 93dc8a63cca46688c36859f6baee14756a554059

[TFRT + tf.data] (1) Remove GetNext() and IterationResult, and rename GetNextUntyped() to GetNext() and IterationResultUntyped to IterationResult. (2) Update Iterator and Dataset base classes to be type-erased. (3) Remove unused helper functions. PiperOrigin-RevId: 312360851

view details

Dong Lin

commit sha be04273a4f8bdd83d4d2e85861dd23103605b479

[TFRT:Data] Add FilterDataset that can handle asynchronous EOF. FilterDataset takes a user-defined function. The user-defined function takes value from the underlying iterator and returns a boolean value. FilterDatasetIterator forwards the value from the underlying iterator to the GetNext(...) caller iff the user-defined function returns true when applied to the value. FilterDatasetIterator::GetNext(...) is non-blocking and it ensures in-order delivery. PiperOrigin-RevId: 312374130

view details
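The contract described in this commit message, forwarding only the values for which the user-defined function returns true while preserving order, can be sketched with a plain synchronous Python iterator (the real TFRT iterator is additionally non-blocking and asynchronous, which this sketch does not model):

```python
def filter_iterator(upstream, predicate):
    """Yield values from `upstream` for which `predicate` is true, in order.

    Mirrors FilterDataset's contract: values are forwarded to the caller iff
    the user-defined function returns true, and delivery stays in order.
    """
    for value in upstream:
        if predicate(value):
            yield value

evens = list(filter_iterator(iter([1, 2, 3, 4, 5, 6]), lambda v: v % 2 == 0))
```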

Kuangyuan Chen

commit sha 355932b5522ca48b7c0a4f85785db29ccb1979f2

Add a custom type corert.string for string in CoreRT dialect. PiperOrigin-RevId: 312381734

view details

Jing Dong

commit sha 543664055f3494577351f8102e121aa27e55d193

Change TFRT kernels to consistently take ExecutionContext instead HostContext. This gives TFRT kernels a consistent interface, as otherwise, some kernels would take HostContext and others would take ExecutionContext. PiperOrigin-RevId: 312392470

view details

Christian Sigg

commit sha 8361cba0113b2bd1b1e4d7612531ff8ed51e9b34

Add test for tracing api. PiperOrigin-RevId: 312470927

view details

Kuangyuan Chen

commit sha a3a8608931d03972edd30f544d1e61832846fac9

Allow empty array to be used as ArrayRef of any type. PiperOrigin-RevId: 312495048

view details

Kuangyuan Chen

commit sha 209255924fa9e677a8da94d27402f988e08cef0a

Add support to emit string type attribute in BEF. PiperOrigin-RevId: 312495083

view details

Qiao Zhang

commit sha b7b771d93957b19d3a8cbabc42e064a318ebd186

Change OpHandler::CopyDeviceTensorToHost to return HostTensor rather than DenseHostTensor since a host tensor can be a StringHostTensor. PiperOrigin-RevId: 312528997

view details

Kuangyuan Chen

commit sha e03829fa862602638a0c7caa237b2df12e7e4aaf

Add string dtype support in OpAttr. PiperOrigin-RevId: 312551195

view details

Rachel Lim

commit sha 76173a8eb9e8fa6a13a243f5306bf9db0593f951

[TFRT + tf.data] Remove type specialization on IteratorGetNext and MakeIteratorFromDataset kernels PiperOrigin-RevId: 312699615

view details

Rachel Lim

commit sha 5691c85a9b5d5099478b572ee4be24adedbace63

[TFRT + tf.data] Remove type specialization from MapDataset and PrefetchDataset. PiperOrigin-RevId: 312729014

view details

Rachel Lim

commit sha 584a52b0baa57b067a633f7a13ceb5e0cd533a3e

[TFRT + tf.data] Remove type specialization from RepeatDataset PiperOrigin-RevId: 312751882

view details

Dong Lin

commit sha b501511e139665c13574cc42ac643f0ed8a4121d

[TFRT:Data] Update RepeatDataset to handle asynchronous EOF. PiperOrigin-RevId: 312757523

view details

TFRT team

commit sha 4bf7d8f4d92548db623070362cc53039a1ed8d39

Integrate LLVM at https://github.com/llvm/llvm-project/commit/1108f5c737db PiperOrigin-RevId: 312775865

view details

TFRT team

commit sha 5edcaf25259452cb33309ebb2978b633683d89ee

LLVM commit version updated to: 1108f5c737dbdab0277874a7e5b237491839c43a PiperOrigin-RevId: 313271086

view details

TFRT team

commit sha 91e3b6e3dc193c2803b8a9754687550ae49a6abc

Add tensor metadata details to op_dispatch tracing output. PiperOrigin-RevId: 313274949

view details

Ce Zheng

commit sha f071065b42d9639a128e22ce036451aa9af04ab9

Add DEBUG_PRINT in BEFExecutor when an error happened. PiperOrigin-RevId: 313296729

view details

Chuanhao Zhuge

commit sha 78b94d1517aeab6ba36ae0e282d046eca55507c5

Support building TFRT with GCC. PiperOrigin-RevId: 313313013

view details

push time in 22 days

started baidu-research/NCRF

started time in 22 days

push event feihugis/tensorflow

TensorFlower Gardener

commit sha 58740bfbd00bf5a427da7dfe6f6f143afe64b3a3

Merge pull request #39952 from vnvo2409:gcs-registration PiperOrigin-RevId: 313835518 Change-Id: I004d4b0927194571eb1d696c89a681ca7df6f3af

view details

Revan Sopher

commit sha 9cd1555bd297d8140534a69f5d5a8aa59ca1b056

Disable saved_mode_test on Cloud TPU. PiperOrigin-RevId: 313835836 Change-Id: Idbc66938bad01cf41dd042c0e05551cd79db8717

view details

TensorFlower Gardener

commit sha 0e7721a8e9bdd1566b2b3d75a634c81d7045293b

Merge pull request #38111 from antmicro:tf-demo-update PiperOrigin-RevId: 313836766 Change-Id: Id99c8902114127e3525be4098acf1786d212a4d8

view details

Ken Franko

commit sha 261921358e6891eb87ca495f597f158f4aa15093

Handle output from OutsideCompiled parallel_execute regions. Adds ops to send/recv data from host -> device when there are outputs from the OutsideCompiled cluster. _TPUCompileMlir placeholder ops are also added to be replaced later because host side comm ops require the program_key as input. This handles the case when the result from the OutsideCompiled cluster was originally returned from the TPU cluster. PiperOrigin-RevId: 313840240 Change-Id: I2af37282309dd0998f0c15c0954a855b7bc0ac63

view details

A. Unique TensorFlower

commit sha 3b2674dce3c5b4be01176646b573d3281d78bed0

Fix typo in tf.where documentation. PiperOrigin-RevId: 313842387 Change-Id: I255dfad74a2ddc80373504569b07c39636d90cf1

view details

Nick Kreeger

commit sha 7d0ab6178803ea00b1693f8628e5914c46ebde44

Add special "recording" SimpleMemoryAllocator class to help with logging tail allocations. This new helper class will enable TFLM to log and record where the allocations in the shared arena are going. A future change will use this new class in a special "recording" MicroAllocator subclass. All these logging mechanisms will be opt-in by code. PiperOrigin-RevId: 313843072 Change-Id: I3fc9205e475e89b4a3795c3cc79c31d2166da2c8

view details

Akshay Modi

commit sha efa880137ed3e00d7b1178abc95ec58f2108749a

Move linspace tests to their own file. PiperOrigin-RevId: 313843608 Change-Id: Ifdae2ac60a721795151124a6a2ee643cb0e527ec

view details

A. Unique TensorFlower

commit sha 8c63be3940e1e83f2c7e1b6ebfe63d823dbc0f22

Internal change PiperOrigin-RevId: 313850352 Change-Id: I89584b0bcb4409eb74d21e31fb0eb68844186707

view details

Jiho Choi

commit sha 8b3f0347e74bc69206f5ee36ccccd61ce6fec26f

Register the semantic stats as internal. PiperOrigin-RevId: 313853594 Change-Id: I4e5ced627e8705cae77230671f70065e3ed25191

view details

Robert Suderman

commit sha a5bd187cce379a14d8cbf2a0387778b821dc714b

Add xla_hlo.dynamic_iota for non-static cases of iota Existing xla_hlo.iota does not cover all use cases. Added an xla_hlo.iota operation that supports a dynamially shaped output. This matches the behavior for dynamic_broadcast_in_dim. PiperOrigin-RevId: 313854741 Change-Id: Idf8361984d48e30eac9fb22ef3b54b178d925f0d

view details

Raman Sarokin

commit sha a6a3a48679f2e43047ae764478026171217bd35e

Added missing resource types to arguments. Image2DArray/Image3D/ImageBuffer. PiperOrigin-RevId: 313858546 Change-Id: I5a83491728c7f6709994464186725649ad81e3c7

view details

Eugene Zhulenev

commit sha 677c1960415d77eba6cd075a8d8fae994a0c730f

Fix inlined function logging PiperOrigin-RevId: 313858692 Change-Id: I8823363003eef3a9bf0f7f66322537f2dc3fc8de

view details

Andy Ly

commit sha 5a62ab6215361d665e9655966b074de1af71f3c7

Update TPUExtractHeadTailOutsideCompilation pass in preparation for tail extraction (NFC). This simplifies and updates some test cases, and extract some reused logic used by tail extraction. PiperOrigin-RevId: 313859255 Change-Id: I35bb385c0a76aae54cc7836db8a8f549cd9b86ff

view details

Andy Ly

commit sha f90c649e28a736ef7f1f9349f72b2d8d1afaa906

Uniformly import and export _Arg node/FunctionDef arg attributes. In a Function Graph (Graph generated from a Function/FunctionDef), it is possible to have other attributes on the generated _Arg nodes. These attributes are either modeled as fields in FunctionDef ('_resource_arg_unique_id' attributes are stored as FunctionDef::map<uint32, uint32> resource_arg_unique_id) or explicitly in FunctionDef::map<uint32, ArgAttrs> arg_attr. When converting a FunctionDef to a Graph (in import), these attributes are added to generated _Arg node attributes. Some of these attributes should be preserved for downstream users. Currently only '_resource_arg_unique_id' is being imported with special handling. This change unifies and imports any _Arg attribute that is not a shape inference based attribute or _Arg op def attribute. On export, attributes of the 'tf' dialect ('tf.' prefix) are added back. For the main function Graph, the attributes are simply added back to the generated _Arg node. For other functions, as a FunctionDef is created instead, '_resource_arg_unique_id' is handled differently, specifically adding it's content to FunctionDef::map<uint32, uint32> resource_arg_unique_id while all other attribute are added to FunctionDef::map<uint32, ArgAttrs> arg_attr. PiperOrigin-RevId: 313859301 Change-Id: I3bb37bb63cc4d401d628c08989900524d0db0572

view details

Guangda Lai

commit sha 85396efcd31fd77fe264f9088d7bd8c92abc7b60

Make tf.If work with ConcreteFunction. PiperOrigin-RevId: 313860966 Change-Id: I1fccdaf06802511a7020a4045751cdd6b6821687

view details

A. Unique TensorFlower

commit sha 66529c35a76cc775605e904ab21b1663b0e7db8e

Add timeout to collective ops to detect deadlocks. The timeout is set as an argument to a collective op. When non zero value, a completion timeout is set to detect staleness. If a timeout goes off, the execution is aborted through a DEADLINE_EXCEEDED error. PiperOrigin-RevId: 313861868 Change-Id: I7fee45736608ad7fbcc9dd980db2fd302c9cb4df

view details

Jiri Simsa

commit sha 80671523fcb8f9c377b2ab893bb7dcc7a4446386

[tf.data] Remove misleading documentation. PiperOrigin-RevId: 313862461 Change-Id: I19720b5a90c251f45ab5bc4d90028481b8964f20

view details

Mehdi Amini

commit sha 02dc6f8dce33e6320b0041379c27e0c4fbf80554

Bump the ruy repository reference. PiperOrigin-RevId: 313866050 Change-Id: I6a3c97d6f4e74c6078eb3bcc1607e51fc1f4d784

view details

Jiri Simsa

commit sha 8be4d61574f29568c8699708d88945b441bfd317

[tf.data] Explicitly colocate prefetch dataset op with its input as this collocation only happens automatically in graph mode. PiperOrigin-RevId: 313867950 Change-Id: I88962b96f208b6d9019e0a117715f74efc8fdc67

view details

A. Unique TensorFlower

commit sha d99affc72b541e0eaef6ab0c0d433c695163eec1

Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 313873341 Change-Id: I4ac3ffcf5fc5ed5b1444fc92b2d87988724c310e

view details

push time in 23 days

started openai/gpt-3

started time in 25 days

started Tencent/TNN

started time in 25 days

started uber/neuropod

started time in a month

issue opened feihugis/blog-comments

Git Notes | Fei's Blog

http://www.feihugis.com/2017/07/26/Git_notes/

Merge a remote pull to the current branch. Add the following script to ~/.gitconfig:

[alias]
pr = "!f() { git fetch ${2:-upstream} pull/$1/head:pr/$1 && git checkout pr/$1; }

created time in a month
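The snippet in the note's preview is cut off. For reference, the usual complete form of this shell-function alias pattern looks as follows; the closing `; f"` is an assumption based on the standard idiom, not recovered from the truncated post:

```ini
# ~/.gitconfig (sketch; the trailing "; f" is assumed, not from the truncated preview)
[alias]
    pr = "!f() { git fetch ${2:-upstream} pull/$1/head:pr/$1 && git checkout pr/$1; }; f"
```

Usage would then be, e.g., `git pr 123` to fetch and check out pull request #123 from the `upstream` remote, or `git pr 123 origin` to use a different remote.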

started microsoft/onnxruntime

started time in a month

started microsoft/AirSim

started time in a month

issue closed tensorflow/tensorflow

DynamicPaddedBatchDatasetOp for tf.data

<em>Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template</em>

System information

  • TensorFlow version (you are using): tf-nightly
  • Are you willing to contribute it (Yes/No): Yes. If this feature is agreed, I'm happy to implement it.

Describe the feature and the current behavior/state.

tf.data provides a padded_batch transformation, but it only supports a fixed batch size. In some use cases (e.g. model inference), dynamic_padded_batch will bring some extra performance benefits.

For example, for a named entity recognition model (e.g., BiLSTM-based or BERT-based), the input sentences/tokens need to be padded and then batched before being fed into the model for inference. With a fixed batch size, short sentences and long sentences may be combined in the same batch, which adds many unnecessary padding elements to the short sentences. For these cases, a dynamic_padded_batch transformation would be more efficient: it lets users pass a customized dynamic_batch_function to control how batches are split. Sentences with similar lengths can then be put together in one batch; if the next sentence is too long, it is placed into the next batch. With this, the number of padding elements in each batch can be kept as small as possible.

In summary, DynamicPaddedBatchDatasetOp will provide users with more flexibility to control how to split the batch.

Will this change the current api? How?

It will not change the current API; a new operation, DynamicPaddedBatchDatasetOp, will be added.

Who will benefit with this feature?

Users who use tf.data to build their data pipelines.

Any Other info.

cc: @jsimsa @aaudiber

closed time in a month

feihugis

issue comment tensorflow/tensorflow

DynamicPaddedBatchDatasetOp for tf.data

Closed this issue as it has been resolved.

feihugis

comment created time in a month

issue comment tensorflow/tensorflow

DynamicPaddedBatchDatasetOp for tf.data

Thanks very much @jsimsa! Yeah, group_by_window can be used here. The example in the official Transformer model is exactly what I want to do. Thanks again!

feihugis

comment created time in a month
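As resolved in the thread, `group_by_window` buckets sequences of similar length before padding, so each batch pads only up to its own longest member. The idea can be sketched in plain Python, independent of tf.data (illustrative only: the function and parameter names here are made up, and the real op works on a streaming dataset):

```python
def bucket_by_length(sentences, bucket_width=3, batch_size=2):
    """Group sequences of similar length so per-batch padding stays minimal."""
    buckets = {}
    batches = []
    for s in sentences:
        key = len(s) // bucket_width          # bucket id, like key_func in group_by_window
        buckets.setdefault(key, []).append(s)
        if len(buckets[key]) == batch_size:   # window full: pad and emit, like reduce_func
            batch = buckets.pop(key)
            max_len = max(len(x) for x in batch)
            batches.append([x + [0] * (max_len - len(x)) for x in batch])
    # Flush any remaining partial buckets.
    for batch in buckets.values():
        max_len = max(len(x) for x in batch)
        batches.append([x + [0] * (max_len - len(x)) for x in batch])
    return batches

batches = bucket_by_length([[1], [2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
```

Here the two short sentences are padded only to length 2, instead of to 4 as a naive fixed-size padded_batch over the same order would require.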

issue opened tensorflow/tensorflow

DynamicPaddedBatchDatasetOp for tf.data


created time in a month

started hmemcpy/milewski-ctfp-pdf

started time in a month

push event feihugis/feihugis.github.io

Fei Hu

commit sha 0e13b8ecdf376b38585f67710b685c080acfb45f

Site updated: 2020-05-25 23:56:30

view details

push time in a month

push event feihugis/feihugis.github.io

Fei Hu

commit sha 335ff7d49f49597bb6ea592dfccfdea7d675f907

First commit

view details

Fei Hu

commit sha 2ffc5f83fcbf43c5f18f37513f6dca33e6f228f7

Site updated: 2020-04-04 23:07:52

view details

Fei Hu

commit sha 63c1c1b64356bd057dc46644e5b94ce8121534d0

Site updated: 2020-04-05 11:37:39

view details

Fei Hu

commit sha ff73b5fafd1a37ec9978fe6ce4b40aa471036153

Site updated: 2020-05-25 23:17:36

view details

push time in a month

started WenDesi/lihang_book_algorithm

started time in a month

Pull request review comment IBM/MAX-Nucleus-Segmenter

add bandit

 def hook(images, augmenter, parents, default):
         mask = det.augment_image(mask.astype(np.uint8),
                                  hooks=imgaug.HooksImages(activator=hook))
         # Verify that shapes didn't change
-        assert image.shape == image_shape, "Augmentation shouldn't change image size"
-        assert mask.shape == mask_shape, "Augmentation shouldn't change mask size"

Or, we can raise an error, as in the other changes, if the check fails. Either works for me.

bdwyer2

comment created time in a month

started datawhalechina/pumpkin-book

started time in a month

create branch feihugis/runtime

branch : mac-experiment

created branch time in a month

push event feihugis/feihugis.github.io

Fei Hu

commit sha bf6ec563eb4260a1a1a30d7038519f541806211a

Site updated: 2020-05-20 16:08:05

view details

push time in a month

push event feihugis/feihugis.github.io

Fei Hu

commit sha a1428f7cb51e6f09105248f13581c52150f75c5d

Site updated: 2020-05-20 16:07:40

view details

push time in a month

issue comment tensorflow/runtime

Build on MacOS

The above issue is fixed by adding OP_ATTR_TYPE(SSIZE_T, ssize_t) to op_attr_type.def. Now `bazel build -c opt //tools:bef_executor` completes successfully.

feihugis

comment created time in a month
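For reference, the fix described above amounts to one added entry in the X-macro list in op_attr_type.def. A minimal sketch (the neighboring entries shown here are illustrative assumptions; only the SSIZE_T line comes from the comment itself):

```cpp
// op_attr_type.def (sketch; surrounding entries are assumed for context)
OP_ATTR_TYPE(BOOL, bool)
OP_ATTR_TYPE(I32, int32_t)
OP_ATTR_TYPE(SSIZE_T, ssize_t)  // the added entry that fixes the macOS build
```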

issue opened tensorflow/runtime

Build on MacOS

I tried to make the runtime build work on MacOS. Most steps work, except for the following linking error:

INFO: Analyzed target //tools:bef_executor (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /Users/feihu/Documents/GitHub/runtime/tools/BUILD:86:1: Linking of rule '//tools:bef_executor' failed (Exit 1): cc_wrapper.sh failed: error executing command 
  (cd /private/var/tmp/_bazel_feihu/049199822dda0c989ae40a6b0d7df7c6/sandbox/darwin-sandbox/2572/execroot/tf_runtime && \
  exec env - \
    APPLE_SDK_PLATFORM=MacOSX \
    APPLE_SDK_VERSION_OVERRIDE=10.15 \
    PATH=/Users/feihu/python-env/tensorflow/bin:/usr/local/Cellar/llvm/10.0.0_3/bin/:/usr/local/Cellar/gcc/9.3.0_1/bin/:/Users/feihu/miniconda3/bin:/Users/feihu/miniconda3/condabin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feihu/bin:./node_modules/.bin \
    XCODE_VERSION_OVERRIDE=11.4.1.11E503a \
  external/local_config_cc/cc_wrapper.sh -lc++ -fobjc-link-runtime -o bazel-out/darwin-opt/bin/tools/bef_executor -Wl,-force_load,bazel-out/darwin-opt/bin/libbasic_kernels_alwayslink.lo bazel-out/darwin-opt/bin/libbasic_kernels.a bazel-out/darwin-opt/bin/tools/libbef_executor_lib.a bazel-out/darwin-opt/bin/libbef_executor_driver.a bazel-out/darwin-opt/bin/libbefexecutor.a bazel-out/darwin-opt/bin/libmetrics_api.a bazel-out/darwin-opt/bin/external/llvm-project/mlir/libIR.a -Wl,-force_load,bazel-out/darwin-opt/bin/libhostcontext_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libcore_runtime_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libdata_alwayslink.lo bazel-out/darwin-opt/bin/libdata.a -Wl,-force_load,bazel-out/darwin-opt/bin/libsimple_tracing_sink_alwayslink.lo bazel-out/darwin-opt/bin/libsimple_tracing_sink.a -Wl,-force_load,bazel-out/darwin-opt/bin/libtensor_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libtest_kernels_alwayslink.lo bazel-out/darwin-opt/bin/libtest_kernels.a -Wl,-force_load,bazel-out/darwin-opt/bin/backends/common/libeigen_kernels_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libcore_runtime_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libtest_ops_alwayslink.lo bazel-out/darwin-opt/bin/backends/cpu/libtest_ops.a bazel-out/darwin-opt/bin/backends/common/libtest_metadata_functions.a -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libtf_ops_alwayslink.lo bazel-out/darwin-opt/bin/backends/cpu/libtf_ops.a bazel-out/darwin-opt/bin/backends/cpu/libcore_runtime.a bazel-out/darwin-opt/bin/backends/common/libtf_metadata_functions.a bazel-out/darwin-opt/bin/backends/common/libtf_dnn_ops_util.a bazel-out/darwin-opt/bin/backends/common/libeigen_kernels.a bazel-out/darwin-opt/bin/backends/common/libeigencompat.a bazel-out/darwin-opt/bin/external/mkl_dnn/libmkldnn_single_threaded.a bazel-out/darwin-opt/bin/libcore_runtime.a bazel-out/darwin-opt/bin/libtracing.a 
bazel-out/darwin-opt/bin/libtensor.a bazel-out/darwin-opt/bin/libhostcontext.a bazel-out/darwin-opt/bin/libsupport.a bazel-out/darwin-opt/bin/third_party/llvm_derived/libostream.a bazel-out/darwin-opt/bin/external/llvm-project/mlir/libSupport.a bazel-out/darwin-opt/bin/external/llvm-project/llvm/libsupport.a bazel-out/darwin-opt/bin/external/llvm-project/llvm/libdemangle.a bazel-out/darwin-opt/bin/external/zlib/libzlib.a -headerpad_max_install_names -pthread -ldl -lm -no-canonical-prefixes '-mmacosx-version-min=10.15')
Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox cc_wrapper.sh failed: error executing command 
  (cd /private/var/tmp/_bazel_feihu/049199822dda0c989ae40a6b0d7df7c6/sandbox/darwin-sandbox/2572/execroot/tf_runtime && \
  exec env - \
    APPLE_SDK_PLATFORM=MacOSX \
    APPLE_SDK_VERSION_OVERRIDE=10.15 \
    PATH=/Users/feihu/python-env/tensorflow/bin:/usr/local/Cellar/llvm/10.0.0_3/bin/:/usr/local/Cellar/gcc/9.3.0_1/bin/:/Users/feihu/miniconda3/bin:/Users/feihu/miniconda3/condabin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/opt/X11/bin:/Library/Apple/usr/bin:/Users/feihu/bin:./node_modules/.bin \
    XCODE_VERSION_OVERRIDE=11.4.1.11E503a \
  external/local_config_cc/cc_wrapper.sh -lc++ -fobjc-link-runtime -o bazel-out/darwin-opt/bin/tools/bef_executor -Wl,-force_load,bazel-out/darwin-opt/bin/libbasic_kernels_alwayslink.lo bazel-out/darwin-opt/bin/libbasic_kernels.a bazel-out/darwin-opt/bin/tools/libbef_executor_lib.a bazel-out/darwin-opt/bin/libbef_executor_driver.a bazel-out/darwin-opt/bin/libbefexecutor.a bazel-out/darwin-opt/bin/libmetrics_api.a bazel-out/darwin-opt/bin/external/llvm-project/mlir/libIR.a -Wl,-force_load,bazel-out/darwin-opt/bin/libhostcontext_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libcore_runtime_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libdata_alwayslink.lo bazel-out/darwin-opt/bin/libdata.a -Wl,-force_load,bazel-out/darwin-opt/bin/libsimple_tracing_sink_alwayslink.lo bazel-out/darwin-opt/bin/libsimple_tracing_sink.a -Wl,-force_load,bazel-out/darwin-opt/bin/libtensor_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/libtest_kernels_alwayslink.lo bazel-out/darwin-opt/bin/libtest_kernels.a -Wl,-force_load,bazel-out/darwin-opt/bin/backends/common/libeigen_kernels_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libcore_runtime_alwayslink.lo -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libtest_ops_alwayslink.lo bazel-out/darwin-opt/bin/backends/cpu/libtest_ops.a bazel-out/darwin-opt/bin/backends/common/libtest_metadata_functions.a -Wl,-force_load,bazel-out/darwin-opt/bin/backends/cpu/libtf_ops_alwayslink.lo bazel-out/darwin-opt/bin/backends/cpu/libtf_ops.a bazel-out/darwin-opt/bin/backends/cpu/libcore_runtime.a bazel-out/darwin-opt/bin/backends/common/libtf_metadata_functions.a bazel-out/darwin-opt/bin/backends/common/libtf_dnn_ops_util.a bazel-out/darwin-opt/bin/backends/common/libeigen_kernels.a bazel-out/darwin-opt/bin/backends/common/libeigencompat.a bazel-out/darwin-opt/bin/external/mkl_dnn/libmkldnn_single_threaded.a bazel-out/darwin-opt/bin/libcore_runtime.a bazel-out/darwin-opt/bin/libtracing.a 
bazel-out/darwin-opt/bin/libtensor.a bazel-out/darwin-opt/bin/libhostcontext.a bazel-out/darwin-opt/bin/libsupport.a bazel-out/darwin-opt/bin/third_party/llvm_derived/libostream.a bazel-out/darwin-opt/bin/external/llvm-project/mlir/libSupport.a bazel-out/darwin-opt/bin/external/llvm-project/llvm/libsupport.a bazel-out/darwin-opt/bin/external/llvm-project/llvm/libdemangle.a bazel-out/darwin-opt/bin/external/zlib/libzlib.a -headerpad_max_install_names -pthread -ldl -lm -no-canonical-prefixes '-mmacosx-version-min=10.15')
Execution platform: @local_config_platform//:host

Use --sandbox_debug to see verbose messages from the sandbox
Undefined symbols for architecture x86_64:
  "tfrt::OpAttrType tfrt::GetOpAttrType<long>()", referenced from:
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::TensorMetadata const&, tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&), &(tfrt::TfConvOpMd(tfrt::TensorMetadata const&, tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtf_metadata_functions.a(metadata_functions_07e51e054eb6b4d322b1a31b487edf2d.o)
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&), &(tfrt::TfMaxPoolOpMd(tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtf_metadata_functions.a(metadata_functions_07e51e054eb6b4d322b1a31b487edf2d.o)
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::OpAttrsRef const&), &(tfrt::CreateFromScalarMD(tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtest_metadata_functions.a(test_ops_6eeba64ad03c479f6c4e94bde85f3ffc.o)
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&), &(tfrt::BroadcastMD(tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtest_metadata_functions.a(test_ops_6eeba64ad03c479f6c4e94bde85f3ffc.o)
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::OpAttrsRef const&), &(tfrt::CreateDenseTensorMD(tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtest_metadata_functions.a(test_ops_6eeba64ad03c479f6c4e94bde85f3ffc.o)
      tfrt::MetadataFnImpl<llvm::Expected<tfrt::TensorMetadata> (*)(tfrt::TensorMetadata const&, tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&), &(tfrt::CreateCooTensorMD(tfrt::TensorMetadata const&, tfrt::TensorMetadata const&, tfrt::OpAttrsRef const&))>::Invoke(tfrt::ExecutionContext const&, llvm::ArrayRef<tfrt::TensorMetadata>, tfrt::OpAttrsRef const&, llvm::MutableArrayRef<tfrt::TensorMetadata>) in libtest_metadata_functions.a(test_ops_6eeba64ad03c479f6c4e94bde85f3ffc.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Target //tools:bef_executor failed to build
INFO: Elapsed time: 1.278s, Critical Path: 0.65s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

Two changes were made to the code to fix the incompatibility between int64_t and ssize_t on MacOS:

  1. attribute_utils.h#L324: reinterpret_cast<const int64_t*>(bytes + header.shape_offset) -> reinterpret_cast<const ssize_t*>(bytes + header.shape_offset)

  2. kernels.cc#L146: TensorMetadata metadata(DType(DType::String), shape.GetValue<int64_t>()); -> TensorMetadata metadata(DType(DType::String), shape.GetValue<ssize_t>());

I'm not sure whether the error is related to the above code changes. Any suggestions/comments are appreciated!

created time in a month

starteddionhaefner/pyhpc-benchmarks

started time in 2 months

pull request commenttensorflow/tensorflow

Make keras model load compatible with old version of models

@k-w-w Could you please take a look at this PR when you get a chance?

feihugis

comment created time in 2 months

startedalibaba/graph-learn

started time in 2 months

fork feihugis/TurboTransformers

a fast and user-friendly tool for transformer inference on CPU and GPU

fork in 2 months

startedTencent/TurboTransformers

started time in 2 months

startedbytedance/effective_transformer

started time in 2 months

startedUbpa/USTC_CG

started time in 2 months

startedmrry/mrry.github.io

started time in 2 months

startedllvm/llvm-project

started time in 2 months

fork feihugis/SystemProgramming

Angrave's Crowd-Sourced System Programming Book used at UIUC

https://github.com/angrave/SystemProgramming/wiki

fork in 2 months

startedangrave/SystemProgramming

started time in 2 months

fork feihugis/runtime

A performant and modular runtime for TensorFlow

fork in 2 months

startedtensorflow/runtime

started time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha b29105e240483bcb16c2512f3655fa0c1296f8b2

Site updated: 2020-04-29 08:22:03

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha e61201b63d743601aedc11ad8875b28022d98639

Site updated: 2020-04-28 23:51:45

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha 0326c231890496be33093813aa2598630cbfe70e

Site updated: 2020-04-28 23:14:21

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha 8d181515a3cdda56861cbd4aaab8fbab4a3352dc

Site updated: 2020-04-28 23:03:16

view details

push time in 2 months

startedgoogle/iree

started time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha 5b82ece651d4df3bf877129dd7a181c51738e7e8

Site updated: 2020-04-27 09:49:02

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha 1c278bb9e74630897cd18c2ddc34f76b3ea5cb8a

Site updated: 2020-04-26 23:12:58

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha 48273631e0a29a3fe505f09725d19074490be1e1

Site updated: 2020-04-26 23:07:59

view details

push time in 2 months

issue openedfeihugis/blog-comments

How TensorFlow Python APIs are generated | Fei's Blog

http://www.feihugis.com/2020/04/26/How-TensorFlow-Python-APIs-are-generated/

TensorFlow Python APIs are automatically generated by Pybind11 and some utility scripts. This blog introduces how these Python APIs are generated.

created time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha ed6196245c18e59aaa030478be4d277173c06480

Site updated: 2020-04-26 22:39:20

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha e3303a86809ea62e075d4dbb348def289e24af40

Site updated: 2020-04-26 22:31:16

view details

push time in 2 months

push eventfeihugis/feihugis.github.io

Fei Hu

commit sha c296bf088436e5d3e889489778b23c7273a1bccc

First commit

view details

Fei Hu

commit sha af68f464ab811626ef6a369a9c5df9676a3c2d1f

Site updated: 2020-04-04 16:43:37

view details

Fei Hu

commit sha 54e28938cb0c3947c91fb1bf7cc46a2ee63d5a1d

Site updated: 2020-04-04 19:38:50

view details

Fei Hu

commit sha 1cf412ac8b666ce08428bfb34e3799f85cbb121b

Site updated: 2020-04-04 19:58:13

view details

Fei Hu

commit sha d67a8f3b7de34fcf9983a3ae4bb3180b02b1ebd5

Site updated: 2020-04-04 21:21:36

view details

Fei Hu

commit sha 7b0735d4eb2c7ea26854a19b54ab3689b6ec3d6e

Site updated: 2020-04-26 22:29:08

view details

push time in 2 months

startedfluid-dev/hexo-theme-fluid

started time in 2 months

startedCODAIT/covid-notebooks

started time in 2 months

issue commentIBM/MAX-Image-Resolution-Enhancer

Terminate called after throwing an instance of 'std::bad_alloc'

@nghiaht Thanks for letting us know your solution. Glad the issue is resolved.

nghiaht

comment created time in 2 months

issue commentIBM/MAX-Image-Resolution-Enhancer

Terminate called after throwing an instance of 'std::bad_alloc'

@nghiaht Could you try increasing the Docker memory as described here?

nghiaht

comment created time in 2 months

starteddavisyoshida/tf2-gradient-checkpointing

started time in 2 months

issue openedtensorflow/tensorflow

TensorRT Converter could not work well with Bert model

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Centos 7
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): tf-nightly(2.2.0-dev20200420)
  • Python version: 3.6
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10.1
  • GPU model and memory: P100

Describe the current behavior

I tried to run Bert-NER inference using TensorRT. I converted the model using the code below:

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt.trt_convert import DEFAULT_TRT_CONVERSION_PARAMS

def convert(input_saved_model_dir, output_saved_model_dir):
    params = DEFAULT_TRT_CONVERSION_PARAMS._replace(
        max_batch_size=512,
        maximum_cached_engines=16)
    converter = tf.experimental.tensorrt.Converter(
        input_saved_model_dir=input_saved_model_dir, conversion_params=params)
    converter.convert()

    def input_fn():
        for _ in range(100):
            yield np.ones(shape=[32, 128], dtype=np.int32), \
                  np.ones(shape=[32, 128], dtype=np.int32), \
                  np.ones(shape=[32, 128], dtype=np.int32)
    converter.build(input_fn=input_fn)
    converter.save(output_saved_model_dir)
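Each tuple yielded by input_fn has to match the SavedModel's input signature; a dependency-free sketch of that shape contract (batch size 32 and sequence length 128 as in the snippet above; `make_input_fn` is a hypothetical helper, not TF API):

```python
def make_input_fn(batch_size=32, seq_len=128, n_batches=100):
    # Each yielded tuple must match the SavedModel signature:
    # (input_ids, input_mask, segment_ids), each of shape [batch, seq_len].
    def input_fn():
        for _ in range(n_batches):
            ones = [[1] * seq_len for _ in range(batch_size)]
            yield ones, ones, ones
    return input_fn

batches = list(make_input_fn(n_batches=2)())
```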

However, I met a few issues:

  1. The shape of the batch dimension is not correct in the converted model, although the original model works with the same input data. The error log is below:
2020-04-20 15:18:03.775850: W tensorflow/core/framework/op_kernel.cc:1751] OP_REQUIRES failed at trt_engine_op.cc:572 : Invalid argument: Input shapes are inconsistent on the batch dimension, for StatefulPartitionedCall/model_1/bert_model/embedding_postprocessor/TRTEngineOp_0_0: [[32,128,768], [32,128,768], [1,128,768]]
Traceback (most recent call last):
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/__main__.py", line 45, in <module>
    cli.main()
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 429, in main
    run()
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 266, in run_file
    runpy.run_path(options.target, run_name=compat.force_str("__main__"))
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/feihu/github_projects/Named-Entity-Recognition/src/trt/trt_savedmodel_convert.py", line 28, in <module>
    convert(input_saved_model_dir, output_saved_model_dir)
  File "/home/feihu/github_projects/Named-Entity-Recognition/src/trt/trt_savedmodel_convert.py", line 21, in convert
    converter.build(input_fn=input_fn)
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1174, in build
    func(*map(ops.convert_to_tensor, inp))
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1612, in __call__
    return self._call_impl(args, kwargs)
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/wrap_function.py", line 247, in _call_impl
    args, kwargs, cancellation_manager)
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1649, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 609, in call
    ctx=ctx)
  File "/home/feihu/py-virtualenv/bert-compression-ner/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  Input shapes are inconsistent on the batch dimension, for StatefulPartitionedCall/model_1/bert_model/embedding_postprocessor/TRTEngineOp_0_0: [[32,128,768], [32,128,768], [1,128,768]]
         [[node StatefulPartitionedCall/model_1/bert_model/embedding_postprocessor/TRTEngineOp_0_0 (defined at /github_projects/Named-Entity-Recognition/src/trt/trt_savedmodel_convert.py:14) ]] [Op:__inference_pruned_26783]
  2. The saved model cannot be larger than 2GB; otherwise, protobuf raises an error that a GraphDef object cannot exceed 2GB.

  3. The converted model does not work under a distributed strategy.

created time in 2 months

startedCTCaer/jc_toolkit

started time in 3 months

pull request commenttensorflow/tensorflow

Make keras model load compatible with old version of models

@tanzhenyu A kind reminder that the comments have been addressed here (https://github.com/tensorflow/tensorflow/commit/49b07e664590f73942bcbaf1c378e59ccab9f04b). Could you take a look when you get a chance? I hope this PR can be merged before the official release of TF-2.2.

feihugis

comment created time in 3 months

issue openedfeihugis/blog-comments

Zeppelin Connects Hive | Fei's Blog

http://www.feihugis.com/2018/01/03/Zeppelin-Connects-Hive/

Make sure that Hive can be accessed remotely using HiveServer2* bash-4.2$ beeline * beeline> !connect jdbc:hive2://svr-A3-A-U2:10000 hive hive * if unable to access, check the following configu

created time in 3 months

issue openedfeihugis/blog-comments

Connect Spark with Cloudera Yarn Cluster | Fei's Blog

http://www.feihugis.com/2018/04/14/Connect-Spark-with-Cloudera-Yarn-Cluster/

Cloudera automatically installs a Spark cluster, but it is not easy to update the Spark version through Cloudera. There is another solution for users to run the latest version of Spark on a Cloudera Yarn cluster.

created time in 3 months

issue openedfeihugis/blog-comments

Install the standalone Spark Cluster | Fei's Blog

http://www.feihugis.com/2018/07/16/Install-the-standalone-Spark-Cluster/

Checklist: ensure all nodes can resolve each other by hostnames/IPs; enable SSH without a password; install JDK on each node; export JAVA_HOME and SPARK_HOME in ~/.bashrc on each node.

created time in 3 months

issue openedfeihugis/blog-comments

Notes for TensorFlow Dev | Fei's Blog

http://www.feihugis.com/2020/01/04/Notes-for-TensorFlow-Dev/

Turn on VLOG: // Otherwise, set the TF_CPP_MIN_VLOG_LEVEL environment variable to update the minimum log level of VLOG, or TF_CPP_VMODULE to set the minimum log level for individual translation units. #defin

created time in 3 months

issue openedfeihugis/blog-comments

Yayoi Kusama Infinity Mirrors Exhibition | Fei's Blog

http://www.feihugis.com/2017/04/15/Yayoi-Kusama-Infinity-Mirrors-Exhibition/#more

Yayoi Kusama: Infinity Mirrors is a celebration of the legendary Japanese artist's sixty-five-year career and promises to be one of 2017's essential art experiences. Visitors will have the unprecedente

created time in 3 months

starteditanium-cxx-abi/cxx-abi

started time in 3 months

startedzylo117/Yet-Another-EfficientDet-Pytorch

started time in 3 months

pull request commenttensorflow/tensorflow

Make keras model load compatible with old version of models

@tanzhenyu The unit test is added here (https://github.com/tensorflow/tensorflow/pull/38339/commits/49b07e664590f73942bcbaf1c378e59ccab9f04b). Could you please take a look when you have time?

feihugis

comment created time in 3 months

startedIBM/pytorch-large-model-support

started time in 3 months

startedfrreiss/text-extensions-for-pandas

started time in 3 months

startedonnx/onnx-mlir

started time in 3 months

push eventfeihugis/tensorflow

Elena Zhelezina

commit sha a858c19b0c10a89639a4897155952d8c3bbd26de

New implementation of TANH/Sigmoid 16-bit activation functions using a LUT. We think the reference functions for 16-bit activation are too complex for efficient implementation on resource-constrained platforms and propose to replace them with a lookup-table approach: first rescale the input data to a fixed range of -10.7 to +10.7, then use a 256-entry lookup table for Sigmoid followed by linear interpolation to efficiently derive the result. The Sigmoid LUT is reused for the TANH function, because tanh(x) = 2*sigmoid(2*x) - 1 and the symmetry is taken into account. The proposed reference kernel implementation also has higher accuracy than the existing one: on the current functions we measure a difference of up to 6.3 quantized units for sigmoid and 11.7 for tanh compared to the floating-point reference implementation over the 16-bit input range (representing -8.0 to +8.0). With this patch, the error is reduced to less than 1.5 quantized units compared to the floating-point reference for both tanh and sigmoid. Change-Id: I4d1406928db65740c1750c9cd7bfffab30771419

view details

Elena Zhelezina

commit sha 279f9264c0503b975ee91e6070f8aed2698b51b6

Small improvement to TANH/Sigmoid implementation. Change-Id: Ia9fa7e70e15a5174a045ee5f98cf4f78e6a43ef6

view details

Elena Zhelezina

commit sha eaac6ea535cd2be0b33b0a2cd6664daab096364b

Addressed review comments for TANH/Sigmoid function.

view details

Eugene Kuznetsov

commit sha 1087b24938690a5d747a6e51e6d00a10d039e898

Enabling the kernel Relu for float16

view details

Elena Zhelezina

commit sha 38eeb4f5d18c6772886b1f41093f4681bb522108

Addressed reviewer comments.

view details

Elena Zhelezina

commit sha e8ea83ab58aa63d9b0b86ff26e30293017022029

Moved implementation of Tanh/Sigmoid to integer_reference_ops per discussion.

view details

Elena Zhelezina

commit sha 00879a5cdf00ffbfa2c02d1ff75e09f1e5569d88

Tidy up.

view details

tomas

commit sha 5581555fa7cdd178a685335e948b8f4d084d18be

Added See also for 'slice' and 'strided_slice'

view details

tomas

commit sha 92844103fe7a02e120f0d49084f03d8ed7322537

Added see-also for 'tf.shape' and 'tf.rank'

view details

tomas

commit sha c56d857f52f01c5179aa37135e6158ee5efcfe92

Added see-also for 'tf.scan' and 'tf.map_fn'

view details

tomas

commit sha f5c2011ad9f046d7583d4ea55670c3ea98b15f14

Added see-also for 'tf.ones', 'tf.zeroes', 'tf.zeroes_like', 'tf.fill', 'tf.eye', 'tf.one_hot'

view details

tomas

commit sha 7ae67823305a3b8394caaa5ba915d57d25bed0b7

corrected spelling mistakes

view details

tomas

commit sha 88150396fd7e500dbb6f30f563ae2bb6e241032a

Added see-also for 'tf.unique' and 'tf.unique_with_counts'

view details

tomas

commit sha ace0c9c7223c9eba08c4c3d9e0584facd4551f61

added periods

view details

Elena Zhelezina

commit sha 9140684f7adaddcdb3a377bbe62e4556bbfd4b44

Fix for unused variable warning.

view details

ngc92

commit sha c4c48648df6838046040c3f1713f34d2eca68f91

added pathlib.Path support to keras: keras.Model.{save,load}_weights, keras.callbacks.{CSVLogger,ModelCheckpoint,TensorBoard}, and keras.utils.{get_file,plot_model}. Uses the new utility function keras.utils.io_utils.path_to_string, which is a no-op for older versions of Python and converts PathLike objects to strings in newer versions.

view details

ngc92

commit sha b83ea9c7d09870e5ef836c38e3263dbf7c933068

fixed missing import in test

view details

Koan-Sin Tan

commit sha 347fef645bda9466147042aca3de6ce7f8fecd0a

[tflite] make label_image build. label_image depends on the old gpu:gl_delegate, which is no longer built; update the dependency to use gpu:delegate.

view details

Elena Zhelezina

commit sha 913a78794dd01b5f7e7bdb36fd7f566712fc11b3

Fix for the error with buildifier.

view details

ngc92

commit sha 8504f4bea5dcd7d7b9fd194161f2ebe28acb1d6a

fixed test

view details

push time in 3 months

push eventfeihugis/tensorflow

feihugis

commit sha 49b07e664590f73942bcbaf1c378e59ccab9f04b

Add the test case

view details

push time in 3 months

issue closedtensorflow/tensorflow

Do not get performance improvement on tf.matmul when building with AVX2

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 7
  • TensorFlow version (use command below): v1.14
  • Python version: 3.6.5
  • Bazel version (if compiling from source): 0.26.1
  • GCC/Compiler version (if compiling from source): 8.3.1

Describe the current behavior

Run the benchmark tests for the tf.matmul operation under two different TensorFlow packages:

  1. the pip package v1.14 released by TensorFlow. This package is not compiled with AVX2, as indicated by the log message tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
  2. a TensorFlow package built from source with AVX2, using the command bazel build -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package.

However, there is no significant performance difference between the two packages:

  1. tensorflow-v1.14-cpu 8192 x 8192 matmul took: 1.54 sec, 713.30 G ops/sec

  2. build from source with AVX2 8192 x 8192 matmul took: 1.60 sec, 687.73 G ops/sec

Describe the expected behavior

The benchmark with the second package should be faster than the first one.

Code to reproduce the issue

The benchmark code is borrowed from here.

import os
import sys
import tensorflow as tf
import time

n = 8192
dtype = tf.float32

matrix1 = tf.Variable(tf.ones((n, n), dtype=dtype))
matrix2 = tf.Variable(tf.ones((n, n), dtype=dtype))
product = tf.matmul(matrix1, matrix2)


# avoid optimizing away redundant nodes
config = tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0)))
sess = tf.Session(config=config)


sess.run(tf.global_variables_initializer())
iters = 10

# pre-warming
sess.run(product.op)

start = time.time()
for i in range(iters):
  sess.run(product.op)
end = time.time()
ops = n**3 + (n-1)*n**2 # n^2*(n-1) additions, n^3 multiplications
elapsed = (end - start)
rate = iters*ops/elapsed/10**9
print('\n %d x %d matmul took: %.2f sec, %.2f G ops/sec' % (n, n,
                                                            elapsed/iters,
                                                            rate,))
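The G ops/sec figure follows from the standard matmul operation count used in the script; as a quick arithmetic check of that formula (pure Python, no TensorFlow needed; `matmul_gflops` is just a hypothetical helper wrapping the lines above):

```python
def matmul_gflops(n, elapsed_sec, iters):
    # n^3 multiplications plus (n-1)*n^2 additions per n x n matmul
    ops = n ** 3 + (n - 1) * n ** 2
    return iters * ops / elapsed_sec / 1e9

# 10 iterations at ~1.54 s each, as in the first measurement above
rate = matmul_gflops(8192, elapsed_sec=15.4, iters=10)
print(f"{rate:.2f} G ops/sec")  # close to the reported 713.30
```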

closed time in 3 months

feihugis

issue commenttensorflow/tensorflow

Do not get performance improvement on tf.matmul when building with AVX2

@yifeif @penpornk Thanks very much for your detailed answers! Good to know this information. Closing this issue as it has been resolved.

feihugis

comment created time in 3 months

startedoneapi-src/oneDNN

started time in 3 months

startedyedf/handy

started time in 3 months

pull request commenttensorflow/tensorflow

Make keras model load compatible with old version of models

@tanzhenyu The following code can be used to reproduce the issue:

  • Step 1: use tf-1.2.1 to create an old model:
def create_model_tf12(model_file):
  from tensorflow.contrib.keras.python.keras.models import Sequential
  from tensorflow.contrib.keras.python.keras.layers import Dense, Embedding
  model = Sequential()
  model.add(Embedding(1000, 64, input_length=10))
  model.save(model_file)
  • Step 2: use tf-nightly to load the model:
def load_mode_tf_nightly(model_file):
  model = tf.keras.models.load_model(model_file)

Can you be more specific on 1) what was the original config,

The original config from tf-1.2.1 is below:

{
  'class_name': 'Sequential', 
  'config': [
    {
      'class_name': 'Embedding', 
      'config': {
        'name': 'embedding_1', 
        'trainable': True, 
        'batch_input_shape': [None, 10], 
        'dtype': 'int32', 
        'input_dim': 1000, 
        'output_dim': 64, 
        'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': 0, 'maxval': None, 'seed': None, 'dtype': 'float32'}}, 
        'embeddings_regularizer': None, 
        'activity_regularizer': None, 
        'embeddings_constraint': None, 
        'mask_zero': False, 
        'input_length': 10
      }
    }
  ]
}

The corresponding config from tf-nightly, obtained with the Python code above, is below:

{
  'class_name': 'Sequential', 
  'config': {
    'name': 'sequential', 
    'layers': [
       {
         'class_name': 'InputLayer', 
         'config': {
           'batch_input_shape': [None, 10], 
           'dtype': 'float32', 
           'sparse': False, 
           'ragged': False, 
           'name': 'embedding_input'}
       },
       {
         'class_name': 'Embedding', 
         'config': {
           'name': 'embedding', 
           'trainable': True, 
           'batch_input_shape': [None, 10], 
           'dtype': 'float32', 
           'input_dim': 1000, 
           'output_dim': 64, 
           'embeddings_initializer': {'class_name': 'RandomUniform', 'config': {'minval': -0.05, 'maxval': 0.05, 'seed': None}}, 
           'embeddings_regularizer': None, 
           'activity_regularizer': None, 
           'embeddings_constraint': None, 
           'mask_zero': False, 
           'input_length': 10}
       }
    ]
  }
}
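The structural difference between the two configs is the root of the failure: in tf-1.2.1 the Sequential 'config' field is a bare list of layer configs, while in tf-2.x it is a dict with 'name' and 'layers' keys. A minimal sketch of a compatibility shim that normalizes the old shape into the new one (the helper name is hypothetical, not TensorFlow's actual code):

```python
def normalize_sequential_config(config):
    """Wrap an old tf-1.x list-style Sequential config into the dict
    form ({'name': ..., 'layers': [...]}) that tf-2.x deserialization
    expects. Hypothetical helper for illustration only.
    """
    if isinstance(config, list):
        return {'name': 'sequential', 'layers': config}
    return config


# Old-style config (tf-1.2.1): the 'config' value is a list of layers.
old_style = [{'class_name': 'Embedding', 'config': {'input_dim': 1000}}]
fixed = normalize_sequential_config(old_style)
```

A new-style dict config passes through unchanged, so the shim is safe to apply unconditionally before deserialization.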
  1. what error message does it provide,

The error message is

Traceback (most recent call last):
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/__main__.py", line 45, in <module>
    cli.main()
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 429, in main
    run()
  File "/home/feihu/.vscode-server/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/../debugpy/server/cli.py", line 266, in run_file
    runpy.run_path(options.target, run_name=compat.force_str("__main__"))
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/home/feihu/miniconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "tf_explore/keras_load_model.py", line 46, in <module>
    load_mode_tf21(SEQ_MODEL_TF12_FILE)
  File "tf_explore/keras_load_model.py", line 30, in load_mode_tf21
    model = tf.keras.models.load_model(model_file)
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 184, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 170, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 173, in deserialize
    printable_module_name='layer')
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 340, in deserialize_keras_object
    config, module_objects, custom_objects, printable_module_name)
  File "/home/feihu/py-virtualenv/tf-nightly/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 302, in class_and_config_for_serialized_keras_object
    for key, item in cls_config.items():
AttributeError: 'list' object has no attribute 'items'
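The failing frame is a loop over cls_config.items(); with an old-style model, cls_config is a list, so the call raises. A small sketch in plain Python (not the actual TensorFlow source) reproducing the failure and the kind of isinstance guard that would avoid it:

```python
config_old = [{'class_name': 'Embedding'}]           # tf-1.x style: a list
config_new = {'name': 'sequential', 'layers': []}    # tf-2.x style: a dict

# Calling .items() on the old-style config reproduces the traceback's error.
try:
    config_old.items()
except AttributeError as exc:
    error_message = str(exc)  # "'list' object has no attribute 'items'"


def iter_config(cls_config):
    """Yield (key, item) pairs from either config shape.

    Illustrative guard only: dicts iterate over their items, while
    old-style list configs use positional indices as keys.
    """
    if isinstance(cls_config, dict):
        return list(cls_config.items())
    return list(enumerate(cls_config))
```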
  1. what are alternative solutions towards this?

Do you mean whether there are any other solutions to fix this issue?

A unit test would be desired as well; I will add one soon.

feihugis

comment created time in 3 months

pull request comment tensorflow/tensorflow

Make keras model load compatible with old version of models

Not sure if this is desired. Can't you just update your config?

Thanks for review, @tanzhenyu!

We can update our model to work with tf-2.1. However, I think it will save users time if the Keras API in tf-2.x remains compatible with old models generated by tf-1.x. Otherwise, other users may run into issues similar to ours.

Note that tf-2.0 can load the above model file, but some recent changes in tf-2.1 break it. I'm not sure whether that is intentional or whether some edge cases were overlooked.

feihugis

comment created time in 3 months

issue comment tensorflow/tensorflow

Keras Model Errors on Loading - 'list' object has no attribute 'items' with TF 2.1

@jvishnuvardhan @tripathysa #38339 has been submitted to fix this issue.

tripathysa

comment created time in 3 months

PR opened tensorflow/tensorflow

Make keras model load compatible with old version of models

This PR makes tf.keras.models.load_model(...) compatible with older versions of Keras models (e.g. keras-version == 2.0.6).

Fix #38135

+9 -0

0 comments

1 changed file

pr created time in 3 months

push event feihugis/tensorflow

  • ANSHUMAN TRIPATHY (ed043aec4962dfdc3c58e2ad90dacb557dafcf4e): Lite: ResizeTensor Dim size check added to avoid reallocation if no change
  • ANSHUMAN TRIPATHY (8c350bc3d1650877f68444e2e93a6cd28a3fc7b3): [1] Review comments handled
  • frreiss (759668b698ebab2fce119c9dde5aed8e11282d6e): Refactor lmdb_dataset_op.cc into .cc and .h files
  • frreiss (0f35429fb614d26aa61a98e845ab193ee245063b): Add tests of LMDBDatasetOp
  • frreiss (ed3fda4e15cce3951bfa81d3e20974c784f35df4): Merge branch 'master' of https://github.com/tensorflow/tensorflow into issue-data-lmdb-test
  • frreiss (34306af78d7421005087d1514c0cfb0ccbb07d2d): Refactor tests to new format
  • frreiss (03140b35d960ddead99886c05fb4ef0a77f30b1f): Switch to using CreateTensors function; fix typo
  • frreiss (b04ebed43ddc6b59e59196dfd8e1eacc72347b32): Use factory method to create PosixFileSystem instance
  • frreiss (2b9ccaa4c2f0e3be568f20c22b570e42d0a02fa9): Merge branch 'master' of https://github.com/tensorflow/tensorflow into issue-data-lmdb-test
  • frreiss (ab26059f8e9c4fb3610e3ca6ef92e5d53cd6e93a): Update test case to work with latest version of test harness; change field name to follow code standards
  • frreiss (0f2c182bea074f2adbfc2abf4e1ea1c5d5eb0152): Add additional direct dependencies to test target
  • Hristo Vrigazov (3d4d6d635eaf92332278f0782ffa243bf9c8fb21): LinspaceND tests and implementation
  • Hristo Vrigazov (14db8b9587b82e1b23e9ab96fdf7363f69ad8bfc): Address linter warnings
  • Hristo Vrigazov (1d8691503cf5d6cdee084e44b6f9491b6ccf1369): Merge branch 'master' into linspace_nd
  • frreiss (d7b8a972565eb7d915fb29aaeb4c42b7b8154b51): Merge branch 'master' of https://github.com/tensorflow/tensorflow into issue-data-lmdb-test
  • frreiss (58c063956b9aa6f9e84780145f7bbad423b4654a): Update code to work with latest version of test harness
  • Hristo Vrigazov (87cf8a4e59a72867f149ad6f0737a59412909938): Avoid using np.linspace for assertion since an older version could be used
  • frreiss (d6e143302f94fa68dc6799e5abc57196fa624aa8): Rename argument to match underlying API
  • Hristo Vrigazov (674b92e6d1cd88a51512e27378932482041d4ed4): Doctest for linspace is correctly formatted
  • Hristo Vrigazov (7a68af78ec7303abc0b3943650eed158e37c3b65): Fix line too long linter error

push time in 3 months

more