Feng Liu (liufengdb), Google Inc., Mountain View. Senior Software Engineer at Google Brain.

liufengdb/mlir 1

"Multi-Level Intermediate Representation" Compiler Infrastructure

liufengdb/hadoop 0

Mirror of Apache Hadoop

liufengdb/kubernetes 0

Container Cluster Manager from Google

liufengdb/lihang-code 0

Code implementations for 《统计学习方法》 (Statistical Learning Methods)

liufengdb/lihang_book_algorithm 0

An effort to implement every algorithm in Dr. Li Hang's book 《统计学习方法》 (Statistical Learning Methods)

liufengdb/llvm-project 0

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Note: the repository does not accept github pull requests at this moment. Please submit your patches at http://reviews.llvm.org.

liufengdb/prometheus 0

The Prometheus monitoring system and time series database.

liufengdb/protobuf 0

Protocol Buffers - Google's data interchange format

liufengdb/pytorch 0

Tensors and Dynamic neural networks in Python with strong GPU acceleration

issue comment tensorflow/tensorflow

Error. Converter does not support Quantization NN with 'tanh' activation

I will create a fix internally and will push it to open source.

mr-goldhands

comment created time in a month
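A minimal sketch (not the reporter's actual model) of the kind of setup this issue describes, assuming the tensorflow_model_optimization quantization-aware-training API and a toy Keras model with made-up layer sizes:

    # Sketch only: toy quantization-aware model with a 'tanh' activation,
    # then converted with the TFLite converter (the step where the reported
    # error about quantized 'tanh' would surface).
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="tanh", input_shape=(8,)),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])

    # Wrap the whole model for quantization-aware training.
    qat_model = tfmot.quantization.keras.quantize_model(model)
    qat_model.compile(optimizer="adam", loss="categorical_crossentropy")

    converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()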

push event llvm/llvm-project

Feng Liu

commit sha 5c9c4ade9d1269e83fdf8e5d8f62b376a76da2b0

Add the inline interface to the shape dialect

This patch also fixes a minor issue: shape.rank should allow returning !shape.size. The dialect doc has such an example for shape.rank.

Differential Revision: https://reviews.llvm.org/D85556

view details

push time in 2 months

issue comment tensorflow/model-optimization

[Feature request or potential bug] Override of default default_8bit_quantize_layout_transform

Let's make the doc clear and close the issue.

alessandroaimar

comment created time in 2 months

issue comment tensorflow/model-optimization

How to use quantization to improve inference performance on tensorflow-serving?

@ZhiyiLan I think we should verify that the deployed models are actually quantized, so that tensorflow-serving is running integer binaries. Could you provide more information about the deployed model? Steps to reproduce the results would be great!

ZhiyiLan

comment created time in 2 months
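One way to check whether a converted model is actually integer-quantized is to inspect its tensors. The sketch below assumes a TFLite flatbuffer and a placeholder path "model.tflite"; for a SavedModel served by tensorflow-serving you would inspect its graph instead:

    # Sketch: confirm the converted model really carries integer tensors.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
    interpreter.allocate_tensors()

    # A fully integer-quantized model reports int8/uint8 dtypes here.
    for detail in interpreter.get_input_details() + interpreter.get_output_details():
        print(detail["name"], detail["dtype"], detail["quantization"])

    # Count float tensors across the whole graph.
    dtypes = [t["dtype"] for t in interpreter.get_tensor_details()]
    num_float = sum(dt == np.float32 for dt in dtypes)
    print(f"{num_float} float32 tensors out of {len(dtypes)}")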

issue comment tensorflow/model-optimization

Optimizing models that use TensorFlow Addons activations, layers, etc

@willbattel If I understand correctly, tf-addons provides a way to add custom ops, etc. via some Python API?

willbattel

comment created time in 2 months

fork liufengdb/tensorflow-mnist-convnets

Neural nets for MNIST classification, simple single layer NN, 5 layer FC NN and convolutional neural networks with different architectures

https://ksopyla.com/category/tensorflow/

fork in 2 months

fork liufengdb/TensorFlow-Examples

TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)

fork in 2 months

issue comment tensorflow/tensorflow

Concat op not quantized

I changed the op spec of concat, so the uint8 scheme doesn't require the same input/output scales anymore. Please check it again.

ppatrikg

comment created time in 2 months
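A sketch (toy model, made-up shapes, and post-training integer quantization rather than the exact uint8 scheme discussed above) of how to look at the quantization parameters around a Concatenate op after conversion:

    # Sketch: quantize a toy model containing a Concatenate op, then print the
    # (scale, zero_point) of each tensor to see how concat's inputs and output
    # are quantized. Shapes and the representative dataset are illustrative only.
    import numpy as np
    import tensorflow as tf

    inp = tf.keras.Input(shape=(8,))
    a = tf.keras.layers.Dense(8, activation="relu")(inp)
    b = tf.keras.layers.Dense(8, activation="relu")(inp)
    out = tf.keras.layers.Concatenate()([a, b])
    model = tf.keras.Model(inp, out)

    def representative_dataset():
        for _ in range(100):
            yield [np.random.rand(1, 8).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    tflite_model = converter.convert()

    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    for t in interpreter.get_tensor_details():
        print(t["name"], t["dtype"], t["quantization"])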

Pull request review comment tensorflow/tensorflow

Fix invalid fusion of Matmul and Mul

 inline bool TFPaddingIsSameOrValid(Operation *op, StringAttr *padding) {
 /// Returns whether the given `a` and `b` have broadcast-compatible
 /// types.
 bool IsBroadcastableElementsAttrs(mlir::Attribute a, mlir::Attribute b);
+bool IsDimensionsDegenerateExceptLastOne(mlir::Attribute val);
+bool IsDimensionsDegenerateExceptLastOne(const ArrayRef<int64_t> elements_shape);

nit, don't need the const

ghost

comment created time in 3 months
