Gaurav Jain (jaingaurav)
Software Engineer at Google Brain
Google · Mountain View, CA · gauravjain.org

jaingaurav/Diamond 0

Diamond is a python daemon that collects system metrics and publishes them to Graphite (and others). It is capable of collecting cpu, memory, network, i/o, load and disk metrics. Additionally, it features an API for implementing custom collectors for gathering metrics from almost any source.

jaingaurav/django-grappelli 0

A jazzy skin for the Django Admin-Interface (official repository).

jaingaurav/docs 0

TensorFlow documentation

jaingaurav/dotfiles 0

YADR - The best vim, git, zsh plugins and the cleanest vimrc you've ever seen

jaingaurav/example 0

Go example projects

jaingaurav/models 0

Models and examples built with TensorFlow

jaingaurav/pyutmp 0

Python binding to Un*x UTMP functionality

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

@N-Dekker: I think I'd hold off just a bit until @rmlarsen completes the work. He said he might be able to fix this issue when doing that work.

If the performance improvement is isolated, maybe you can start proposing that in Eigen right away?

N-Dekker

comment created time in 2 days

pull request comment tensorflow/tensorflow

Small update to tf.vectorized_map() and code syntax in docs in control_flow_ops.py

It's complaining about line 342 being too long. I think you just need to move the last word to the next line.

8bitmp3

comment created time in 2 days

PR closed tensorflow/tensorflow

Reviewers
Avoid undefined behavior by union type punning in round_to_bfloat16 cla: yes comp:core ready to pull size:S

Use std::memcpy instead of union based type punning, to avoid undefined behavior. See also C++ Core Guidelines: "Don't use a union for type punning" https://github.com/isocpp/CppCoreGuidelines/blob/v0.8/CppCoreGuidelines.md#c183-dont-use-a-union-for-type-punning
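The bit-level rounding at issue can be sketched in Python, with struct.pack/unpack playing the role of std::memcpy (reinterpreting bits without union type punning). The helper below is a hypothetical illustration of round-to-nearest-even bfloat16 truncation, not TensorFlow's actual implementation:

```python
import struct

def round_to_bfloat16_bits(f):
    # Reinterpret the float32's bits safely: pack the float and unpack it
    # as an unsigned 32-bit integer (the Python analogue of std::memcpy).
    (bits,) = struct.unpack("<I", struct.pack("<f", f))
    # Round to nearest-even while truncating the low 16 bits.
    lsb = (bits >> 16) & 1
    rounding_bias = 0x7FFF + lsb
    return ((bits + rounding_bias) >> 16) & 0xFFFF

print(hex(round_to_bfloat16_bits(1.0)))  # 0x3f80
```

The struct round-trip is well-defined behavior in both languages, whereas reading a different union member than the one last written is undefined behavior in C++.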

+6 -10

13 comments

1 changed file

N-Dekker

pr closed time in 3 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

Apologies @N-Dekker but we're actively reworking this code to use the Eigen implementation. As a result, we'll address this casting behavior when performing that work.

N-Dekker

comment created time in 3 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

@N-Dekker: Some of the failures seem unrelated. However, given that we're unlikely to use absl in Eigen, an alternate fix without absl would be preferred.

N-Dekker

comment created time in 4 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

@N-Dekker: I believe there is an Ubuntu sanity check failure due to the ordering of the build deps. Could you please address that?

N-Dekker

comment created time in 5 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

Yep, I believe that should work.

N-Dekker

comment created time in 5 days

Pull request review comment tensorflow/tensorflow

Correctly handle resource variables in control_dependencies

 def __init__(self, name, collections=None, capture_by_value=None):
     self.inputs = []
     self.outputs = []
     self.control_outputs = []
-    self.control_captures = set()
+    self.control_captures = object_identity.ObjectIdentitySet()

Why is this change needed? We try to avoid using ObjectIdentitySet internally due to the performance overhead, so if possible it would be best to avoid using it.
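For context, an object-identity set keys elements by id() rather than by __eq__/__hash__. A minimal sketch (not the TensorFlow implementation) shows both the semantics and, presumably, where the overhead comes from: every operation goes through a Python-level id() call and dict lookup instead of a plain built-in set:

```python
class IdentitySet:
    """Minimal sketch of an object-identity set: membership is by id(),
    not by __eq__/__hash__, so unhashable objects like lists work too."""

    def __init__(self):
        # Map id(obj) -> obj; holding obj keeps its id() valid.
        self._storage = {}

    def add(self, obj):
        self._storage[id(obj)] = obj

    def __contains__(self, obj):
        return id(obj) in self._storage

    def __len__(self):
        return len(self._storage)

s = IdentitySet()
a, b = [1], [1]            # equal values, distinct objects
s.add(a)
assert a in s and b not in s
```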

lgeiger

comment created time in 5 days

Pull request review comment tensorflow/tensorflow

Correctly handle resource variables in control_dependencies

 def my_func(pred, tensor):
     """
     if control_inputs is None:
       return self._ControlDependenciesController(self, None)
+
+    # Imported here to avoid circular dependency.
+    from tensorflow.python.ops import resource_variable_ops  # pylint: disable=g-import-not-at-top

Could we have further explanation of why the hasattr check was not sufficient?

lgeiger

comment created time in 5 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

@N-Dekker: Yeah I'd recommend updating both Half.h & BFloat16.h.

We're trying to merge your changes but there seem to be some BUILD file issues. I think you need to include the absl dependency, or else we get errors like: 'absl/base/casts.h': No such file or directory

N-Dekker

comment created time in 6 days

pull request comment tensorflow/tensorflow

Avoid undefined behavior by union type punning in round_to_bfloat16

@N-Dekker: Could you also make this change in Eigen? The relevant file is https://bitbucket.org/eigen/eigen/src/default/Eigen/src/Core/arch/Default/Half.h.

N-Dekker

comment created time in 6 days

Pull request review comment tensorflow/tensorflow

Simplify calls to .executing_eagerly()

 def convert_to_tensor(value,
   # TODO(b/142518781): Fix all call-sites and remove redundant arg
   preferred_dtype = preferred_dtype or dtype_hint
   if isinstance(value, EagerTensor):
-    if ctx is None:
-      ctx = context.context()
-    if not ctx.executing_eagerly():
+    if not (ctx or context).executing_eagerly():

Could you revert this part of the PR? I'm not fond of calling executing_eagerly on the object in one case and on the module in the other case.

lgeiger

comment created time in 17 days

PR closed tensorflow/tensorflow

Reviewers
hello cla: no

helo

+1518942 -1055395

3 comments

16674 changed files

1NarayanThapa

pr closed time in 21 days

pull request comment tensorflow/tensorflow

hello

This does not look like a valid PR

1NarayanThapa

comment created time in 21 days

push event jaingaurav/dotfiles

Gaurav Jain

commit sha 58198cdb20dfdb3c82862822b97d36cb78a2019e

Default syntastic to python 3

view details

push time in 22 days

push event jaingaurav/dotfiles

Gaurav Jain

commit sha 2f9e49497ac2b46b3de6df4c62b449e32853a1be

Add gdb config

view details

Gaurav Jain

commit sha c37669fffc0248d5dff3af0675533981a04d6ff2

Set vim sessions directory

view details

push time in 22 days

push event jaingaurav/dotfiles

Philip Pries Henningsen

commit sha 8820dc8a3e9de3e7c23281e4e87fce75b590b35a

translate -fg, -bg and -attr options into -style options

view details

Jason Cooke

commit sha d4803c24ee5c992746695057d29c8cf901d03854

docs: fix typo

view details

Luiz Gonzaga dos Santos Filho

commit sha a95d41e4704c13f2da1f98dd152873dfa26fcdd5

Merge pull request #824 from Jason-Cooke/patch-1 docs: fix typo

view details

Luiz Gonzaga dos Santos Filho

commit sha 7253fedaaa3ceae8ed212e0d56ec9a573c57cdd1

Merge pull request #820 from ppries/2.9-compatibility Translate tmux -fg, -bg and -attr options into -style options

view details

Alan Yee

commit sha a388f91aad87113a58a842bf4dc2d515860cd8c6

Update vimrc Disable modelines as a security precaution

view details

Volodymyr Shcherbinin (vovin)

commit sha 6731927a84f6725aecaf4bfb43be6f59a917a265

Fix MacVim installation

view details

Luiz Gonzaga dos Santos Filho

commit sha f4c343e00e00a094260036823647b960901158b4

Merge pull request #827 from vovinacci/fix-macvim-install Fix MacVim installation

view details

Luiz Gonzaga dos Santos Filho

commit sha 96ddeb8d131a863596da5bd549455267dcba3e0a

Merge pull request #825 from alanyee/master Update vimrc

view details

Luiz Gonzaga dos Santos Filho

commit sha b7b3682b404d05786c9261472e97eb089be95a6a

Add github action for stale issues

view details

Chris

commit sha 71b0ad5de55f42a108fde7a242d26ad92dc8b1d0

Update vundle repos. Update scrooloose repos as they seem to have changed maintainers.

view details

Luiz Gonzaga dos Santos Filho

commit sha b0a060c1d039709352a4c184a843b46c078ea7d6

Merge pull request #834 from chrischen/update-vundle Update vundle repos.

view details

Gaurav Jain

commit sha b9227d4d038304da5c4cc6dd64b54906fec25e7c

Update installer to use personal repo

view details

Gaurav Jain

commit sha 0dc7b9b3ffc397b1e881c95473cb5d339b7eaa45

Remove ruby plugins for vim

view details

Gaurav Jain

commit sha 135f502bc2c1a408aeeaaf2d881310555bdeefb4

Add iTerm2 shell integration

view details

Gaurav Jain

commit sha 3885e5f935954826ded45a180a805d0a1413ba5c

Add custom vim options

view details

Gaurav Jain

commit sha f2ebe417f670e0c164fae64a8f8f00f5e798ac82

Apply Solarized Dark theme to all profiles

view details

Gaurav Jain

commit sha 22574f0bb9ee57be1437a08055f07a706776ea39

Set zsh theme to agnoster and add timestamp to prompt

view details

Gaurav Jain

commit sha 46da3d1ec27d9ed2698423f2c1cca721fd177105

Install hg and add prompt

view details

Gaurav Jain

commit sha 83c29876cb7f074248d08908002ba7747d1e5339

Include digits in hg branch name

view details

Gaurav Jain

commit sha 7cc3af58c25d3fe0b5208aaa9e78ffc446d3b190

Set ctrlp_root_markers to .ctrlp

view details

push time in 22 days

Pull request review comment tensorflow/tensorflow

Type annotations for tf.saturate_cast & tf.constant ops

 def __init__(self, op, value_index, dtype):
       raise TypeError("op needs to be an Operation: %s" % op)
     self._op = op
     self._value_index = value_index
-    self._dtype = dtypes.as_dtype(dtype)
+    self._dtype = dtypes.as_dtype(dtype) # type: DataType

Why is this inline type annotation needed? as_dtype is used in many places, so annotating its return type directly would give us the annotation everywhere for free.

rahul-kamat

comment created time in a month

Pull request review comment tensorflow/tensorflow

Type annotations for tf.saturate_cast & tf.constant ops

 def cast(x, dtype, name=None):
     return x

+SaturateCastDType = TypeVar("SaturateCastDType",
+                            dtypes.UInt8, dtypes.UInt16, dtypes.UInt32,

Rather than having to list out all the types, can we have the API support merging groups of types such as complex/real etc?
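One way such grouping could look, sketched with typing.Union and placeholder dtype classes (all names below are hypothetical stand-ins, not the tf dtypes API):

```python
from typing import TypeVar, Union, get_args

# Placeholder dtype marker classes standing in for tf dtypes.
class UInt8: pass
class UInt16: pass
class Float32: pass
class Complex64: pass

# Merged groups: nested Unions flatten, so Real expands to
# (UInt8, UInt16, Float32) without re-listing members.
UnsignedInt = Union[UInt8, UInt16]
Real = Union[UnsignedInt, Float32]
Numeric = Union[Real, Complex64]

# The TypeVar can then be bound to a group instead of listing
# every dtype individually.
SaturateCastDType = TypeVar("SaturateCastDType", bound=Numeric)

assert get_args(Numeric) == (UInt8, UInt16, Float32, Complex64)
```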

rahul-kamat

comment created time in a month

pull request comment tensorflow/tensorflow

Export tf.math.reduce_all first

@guillaumekln: No need. Seems like the golden update was not needed.

guillaumekln

comment created time in a month

pull request comment tensorflow/tensorflow

Export tf.math.reduce_all first

@guillaumekln: I believe you might need to update the goldens for this API change:

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/api/tests

guillaumekln

comment created time in a month

PR closed tensorflow/tensorflow

Reviewers
Uniform indentation with other functions. cla: yes comp:ops size:XS

Uniform indentation with other functions.

+5 -1

1 comment

1 changed file

jayhpark530

pr closed time in a month

pull request comment tensorflow/tensorflow

Uniform indentation with other functions.

Since the line fits within 80 columns, this formatting is not needed, which is why the formatter did not change it. I believe we only split lines for arguments if all the arguments can't fit within 80 columns.

jayhpark530

comment created time in a month

pull request comment tensorflow/tensorflow

with_values for SparseTensor

@edloper @penpornk: Could you please take a look here?

ngc92

comment created time in a month

pull request comment tensorflow/tensorflow

Update doc of `global_norm`

@findmyway: What's wrong with the documentation? It is correctly documenting the squared operation vs. multiplication.

findmyway

comment created time in a month

Pull request review comment tensorflow/tensorflow

Fix deprecation message from GatherV2Grad

 def _GatherV2Grad(op, grad):
   # For axis 0 gathers, build an appropriately shaped IndexedSlices.
   if axis_static == 0:
     if context.executing_eagerly():
-      params_tail_shape = params_shape.cpu()[1:]
+      params_tail_shape = array_ops.identity(params_shape)[1:]

@alextp: Is this CPU placement necessary? I looked at the history of this code and couldn't quite figure out why the CPU placement is needed.

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix deprecation message from GatherV2Grad

 def _GatherV2Grad(op, grad):
   # For axis 0 gathers, build an appropriately shaped IndexedSlices.
   if axis_static == 0:
     if context.executing_eagerly():
-      params_tail_shape = params_shape.cpu()[1:]
+      params_tail_shape = array_ops.identity(params_shape)[1:]

Doesn't this need a with ops.device("cpu"): as well? Otherwise this is not forcing it onto the CPU.
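The point can be illustrated without TensorFlow: an identity op merely inherits the active device scope, so forcing CPU placement needs an explicit scope. A toy device-stack sketch (all names hypothetical; real TF placement is far more involved):

```python
import contextlib

_device_stack = []

@contextlib.contextmanager
def device(name):
    # Toy device scope: ops created inside the block are placed on `name`.
    _device_stack.append(name)
    try:
        yield
    finally:
        _device_stack.pop()

def identity(x):
    # identity() does not force a device; it inherits the active scope
    # (assume a GPU default when no scope is open).
    placement = _device_stack[-1] if _device_stack else "/gpu:0"
    return x, placement

_, d = identity(42)
assert d == "/gpu:0"          # no scope: stays on the default device

with device("/cpu:0"):
    _, d = identity(42)
assert d == "/cpu:0"          # an explicit scope forces CPU placement
```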

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):
+      if sys.version_info.major < 3:
+        return _truediv_python2(x, y)
+      else:
+        return _truediv_python3(x, y)
     return x / y

Why do you need to check tensor_util.is_tensor(y)?

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):
+      if sys.version_info.major < 3:
+        return _truediv_python2(x, y)
+      else:
+        return _truediv_python3(x, y)
     return x / y

Please remove the comment, or simply say that we do the conversion in order to always return a Tensor. Also, I think the code should be:

if not tensor_util.is_tensor(x):
  x = ops.convert_to_tensor(x)
yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):
+      if sys.version_info.major < 3:
+        return _truediv_python2(x, y)
+      else:
+        return _truediv_python3(x, y)
     return x / y

Okay, I think I understand this PR better; I was misreading the if condition.

Wouldn't it be simpler to call convert_to_tensor on x and then let the operator overloads kick in? That's basically what each of those functions does in the first place.
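The reviewer's suggestion, converting x up front and letting the operator overload do the work, can be sketched with a toy Tensor class (this is an illustration, not tf.Tensor or TensorFlow's math_ops):

```python
class Tensor:
    """Toy tensor wrapping a Python value (a sketch, not tf.Tensor)."""
    def __init__(self, value):
        self.value = value
    def __truediv__(self, other):
        other = convert_to_tensor(other)
        return Tensor(self.value / other.value)

def convert_to_tensor(x):
    return x if isinstance(x, Tensor) else Tensor(x)

def divide(x, y):
    # Convert the left operand once; the overload then guarantees the
    # result is always a Tensor, even for two plain Python numbers.
    return convert_to_tensor(x) / y

result = divide(5, 2)
assert isinstance(result, Tensor)
assert result.value == 2.5
```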

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):
+      if sys.version_info.major < 3:
+        return _truediv_python2(x, y)
+      else:
+        return _truediv_python3(x, y)
     return x / y

I'm sorry, I may be missing something here, but is the logic above saying that we'll just use _truediv_python for non-tensor inputs? If so, then the test case shouldn't pass, since we should get a Python value back.

I'm probably misreading the code. If so, please document or restructure to help clarify.

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def testConsistent(self):
     # Consistent with desire to get numerator
     self.assertAllEqual(tf_result, expanded_nums)

+  def testWithPythonValue(self):
+    # Test case for GitHub issue 39475:
+    # https://github.com/tensorflow/tensorflow/issues/39475
+    x = math_ops.divide(5,  2)
+    self.assertTrue(isinstance(x, ops.Tensor))
+

Nit: remove extra blank line

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def testConsistent(self):
     # Consistent with desire to get numerator
     self.assertAllEqual(tf_result, expanded_nums)

+  def testWithPythonValue(self):
+    # Test case for GitHub issue 39475:
+    # https://github.com/tensorflow/tensorflow/issues/39475
+    x = math_ops.divide(5,  2)

Nit: please fix double whitespace

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):

Nit: please fix double whitespace

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Fix the issue of tf.divide's return value is not a tensor

 def divide(x, y, name=None):
     # override names. Use a dummy class to track the runtime division behavior
     return DivideDelegateWithName(x, name) / y
   else:
+    if not (isinstance(x, ops.Tensor)  or isinstance(y, ops.Tensor)):
+      if sys.version_info.major < 3:
+        return _truediv_python2(x, y)
+      else:
+        return _truediv_python3(x, y)
     return x / y
-

Please keep blank line

yongtang

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[INTEL MKL] Fix for build error.

 cc_library(
     name = "mkl_tfconversion_pass",
     srcs = ["mkl_tfconversion_pass.cc"],
-    hdrs = ["mkl_tfconversion_pass.h"],
+    hdrs = [
+        "mkl_tfconversion_pass.h",
+        "//tensorflow/core/graph:mkl_graph_util_header",
+    ],
     copts = tf_copts(),
     deps = [
+        ":function",

This seems to be causing a circular dependency. Try building //tensorflow/c:c_api_test and you should see something like:

.-> //tensorflow/core/common_runtime:core_cpu
|   //tensorflow/core/common_runtime:core_cpu_internal
|   //tensorflow/core/common_runtime:core_cpu_impl
|   //tensorflow/core/common_runtime:mkl_tfconversion_pass
`-- //tensorflow/core/common_runtime:core_cpu
agramesh1

comment created time in 2 months

PR closed tensorflow/tensorflow

Reviewers
Fixed auxiliary verbs in two error messages. cla: yes comp:ops size:XS
+2 -2

4 comments

1 changed file

ihsuy

pr closed time in 2 months

pull request comment tensorflow/tensorflow

Fixed auxiliary verbs in two error messages.

Unfortunately we do not accept these types of isolated PRs. Please see the following contribution guideline:

As every PR requires several CPU/GPU hours of CI testing, we discourage submitting PRs to fix one typo, one warning, etc. We recommend fixing the same issue at the file level at least (e.g.: fix all typos in a file, fix all compiler warnings in a file, etc.)

https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md#general-guidelines-and-philosophy-for-contribution

ihsuy

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[ROCm] Disabling a subtest within //tensorflow/python/eager:function_test_gpu…

 def func(x, kangaroo=None, octopus=7):
                   r'      <1>: int32 Tensor, shape=\(\)\n'
                   r'      <2>: RaggedTensorSpec\(.*\)\n'
                   r'      <3>: RaggedTensorSpec\(.*\)')
-    self.assertRegexpMatches(c3.pretty_printed_signature(),
-                             c3_summary + '\n' + c3_details)
+
+    # The output of pretty_printed_signature is not deterministic for dict

Thanks. Could you please update the check to sys.version_info >= (3, 5), with attention to the whitespace.

deven-amd

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[ROCm] Disabling a subtest within //tensorflow/python/eager:function_test_gpu…

 def func(x, kangaroo=None, octopus=7):
                   r'      <1>: int32 Tensor, shape=\(\)\n'
                   r'      <2>: RaggedTensorSpec\(.*\)\n'
                   r'      <3>: RaggedTensorSpec\(.*\)')
-    self.assertRegexpMatches(c3.pretty_printed_signature(),
-                             c3_summary + '\n' + c3_details)
+
+    # The output of pretty_printed_signature is not deterministic for dict

I don't follow: shouldn't this follow insertion order, at least on Python 3? What is the ROCm-specific issue here?
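For reference, CPython has preserved dict insertion order since 3.6, and the language guarantees it from 3.7, which is why the ordering question hinges on the interpreter version:

```python
import sys

d = {}
d["kangaroo"] = None
d["octopus"] = 7

# On Python 3.7+ iteration order is the insertion order by specification;
# on older interpreters it was arbitrary, which makes any printed
# signature that walks a dict nondeterministic across builds.
if sys.version_info >= (3, 7):
    assert list(d) == ["kangaroo", "octopus"]
```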

deven-amd

comment created time in 3 months

push event jaingaurav/tensorflow

TensorFlower Gardener

commit sha 82acb876dc5c973078524018ac83352f4de4b791

Merge pull request #33068 from hristo-vrigazov:linspace_nd PiperOrigin-RevId: 304038907 Change-Id: I566974563b01fa300b8ba3b43b51bcc9f1f6b2a8

view details

Andy Ly

commit sha 9c2cd9ea930aba13f2e170bb8ceaccf6838fee29

Remove restriction of lowering tf_device.replicate to islands if replicate body contains a parallel_execute. TPU rewrite has been updated to generate parallel_execute that may be replicated, for model parallelism. PiperOrigin-RevId: 304040132 Change-Id: Iadce6ce3e553f646e8bd74cfbe70e9adef419e17

view details

Eugene Zhulenev

commit sha 16b606b0579894041722b9480254c873e2764018

[TF:MLIR] Implement optimal layout assignment for FusedBatchNormV3 PiperOrigin-RevId: 304046758 Change-Id: I2f60d09a8308a9453df3b4a031972bcac4300ebf

view details

Eugene Zhulenev

commit sha 8f9bbf617add1ef1bbb22da70bcdaa89cb31e37e

[TF:MLIR] Implement optimal layout assignment for FusedBatchNormGradV3 PiperOrigin-RevId: 304047939 Change-Id: I6aa692f656dd29ae2bbf9998f9a884d65713c61c

view details

Yash Katariya

commit sha 0a704c08b681fd3bb965a6a626d96f9ecca30477

Closing backticks were not there breaking the website's view. PiperOrigin-RevId: 304053545 Change-Id: I1fefee6c499c49b1a7f08be0a91e2245600c588e

view details

Anudhyan Boral

commit sha 21813af36014ef706f622b4757f4c28b928e018f

Modify the XLA Uniform sampler to use cast instead of bitcasts. We didn't strictly need a bitcast because we are ignoring the exponent bits anyway. Before and after logic is equivalent. However, performance could have an impact. PiperOrigin-RevId: 304057589 Change-Id: I2ad9e923b1c966f46eba91ae47e0e632b74cff72

view details

Raman Sarokin

commit sha ed0d053b9d67cdec34f4500e4b541d2bb945b4da

Added vendors to DeviceInfo. Metal convolution performance improved for Intel. PiperOrigin-RevId: 304060255 Change-Id: Idaf828ab2e995da7bbe0e189fb7cfc2d242519c7

view details

Francois Chollet

commit sha 076ac2e8e0ad09feb8272931c2b7d94823311382

Include InputLayer (when available) in Sequential config, so as not to lose a custom input name or dtype upon deserialization, if the user-defined model starts with an `Input` object. PiperOrigin-RevId: 304062384 Change-Id: I919068f0b00134ba55ee3f8c4a08ccda14cb3164

view details

Kaixi Hou

commit sha dbd15af8702aad5d219318116c305222a06186e7

Improve the softmax normalization CUDA kernel

view details

A. Unique TensorFlower

commit sha 7cec3e42684ad0ae3581e96d19c131d10aa9f8f1

Add AlignmentPointsToTransformMatrix layer in OpenGL delegate PiperOrigin-RevId: 304067509 Change-Id: I4b7b94180aff5ff05094c284d51a072488dd5b07

view details

A. Unique TensorFlower

commit sha bec9e52822c330ae926aa029cd26e924b42166a9

Add extra mutable accessors for Slice and Constant instructions. PiperOrigin-RevId: 304071089 Change-Id: I73c208a2f2fb7d9e48c88e1e9a8006cb2ab0da95

view details

A. Unique TensorFlower

commit sha 486a20777c304887fb45485ca98acec2d0136b9e

In Cloud Storage reader, if a buffered read catches an Out of Range error, cache the file size and return an error when the user tries to access an offset past the cached file size. PiperOrigin-RevId: 304081110 Change-Id: I97cc51500152270a2c3dabf0adead58b954ab379

view details

Peter Hawkins

commit sha 566c03a749af231664b3db534e17c6d51a2216dc

[XLA:Python] Remove the tuple_arguments argument from Execute(). tuple_arguments can be passed to Compile() instead. PiperOrigin-RevId: 304083684 Change-Id: I310986631c1489d0e21cff73a75c15ed41b4c747

view details

Koan-Sin Tan

commit sha 9d022cb885fa6fd66c9631063ad358ee59d89dd4

Merge branch 'master' into add_xnnpack_to_label_image_again

view details

Koan-Sin Tan

commit sha 80935f002e49f2d7210c745c4e56eed8b23cf9bd

address review concerns rebase and revise per reviewer's concerns

view details

A. Unique TensorFlower

commit sha c335ace72676b423d47a370a28b50890d33b470b

Go: Update generated wrapper functions for TensorFlow ops. PiperOrigin-RevId: 304089582 Change-Id: I00f95e98691ddbd70f541f724beb864ffc103c08

view details

A. Unique TensorFlower

commit sha ad274fc1b2a61a77c512a20b8b4119efe750f5de

don't "print xla expression" in trunk annotation. PiperOrigin-RevId: 304090402 Change-Id: Iff60d4b92ce38d8e1d6204b5013f5b8d9fcf459b

view details

Yujing Zhang

commit sha 8fcd1056f319dd90652bee511b0c1d2826ee42c5

Fix testRemoteCall when running on GPU. PiperOrigin-RevId: 304097605 Change-Id: I8b6c9382c9d93df52f520140dbbcac8a56d18bcf

view details

Jaesung Chung

commit sha aad269461261803eb7575264b8e2ce755867d69a

Enable Keras/RNN case via MLIR SavedModel import in TFLiteConverterV2 PiperOrigin-RevId: 304098351 Change-Id: Ide8275e35b1c59240f953bb614825f02eeca9a4b

view details

A. Unique TensorFlower

commit sha 453399f82c212a676b9510b3dc214c7f86e06d7c

create_sine_model.ipynb: update to Python 3 & tensorflow 2 PiperOrigin-RevId: 304102552 Change-Id: If7cd059bdcae96eca8fc69a64f0efe7b30212454

view details

push time in 3 months

create branch jaingaurav/tensorflow

branch : remote

created branch time in 3 months

PR opened tensorflow/tensorflow

Reviewers
Backport 'Do not call Unprotect on remote inputs'
+1 -1

0 comments

1 changed file

pr created time in 3 months

Pull request review comment tensorflow/tensorflow

[INTEL MKL] Fixing build issue in two unit tests

 cc_library(
     deps = [
         ":context",
         ":copy_to_device_node",
+        ":eager_executor",
         ":eager_op_rewrite_registry",
         ":eager_operation",
+        ":kernel_and_device",
         ":tensor_handle",
-        "//tensorflow/c:c_api_internal",

I think this needs to be added back.

mahmoud-abuzaina

comment created time in 3 months

pull request comment tensorflow/tensorflow

[ROCm] Adding no_rocm tag to CSB tests currently failing on the ROCm platform

@deven-amd: Apologies, the PR was missing one approval internally. It should be landing shortly now.

deven-amd

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

[INTEL MKL] Fixing build issue in two unit tests

 tf_cc_test(
     name = "eager_op_rewrite_registry_test",
     srcs = ["eager_op_rewrite_registry_test.cc"],
     deps = [
-        ":core",
         ":eager_op_rewrite_registry",
-        ":execute",
+        ":mkl_core",
         "//tensorflow/core:lib",
         "//tensorflow/core:no_op_op_lib",
         "//tensorflow/core:test",
         "//tensorflow/core:test_main",
     ],
 )

+# Temporary rule until the circular dependencies issue is resolved.
+# TODO(mabuzain): remove this once original "core" package is fixed.
+cc_library(
+    name = "mkl_core",
+    srcs = [
+        "core.cc",
+        "execute.cc",
+        "execute_node.cc",
+    ],
+    hdrs = [
+        "execute.h",
+        "execute_node.h",
+    ],
+    deps = [
+        ":context",

Thanks @mahmoud-abuzaina. Seems we are still seeing an internal failure where //tensorflow/core/common_runtime/eager:mkl_core does not depend on a module exporting 'tensorflow/c/c_api_internal.h'.

mahmoud-abuzaina

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

[INTEL MKL] Fixing build issue in two unit tests

 tf_cc_test(
     name = "eager_op_rewrite_registry_test",
     srcs = ["eager_op_rewrite_registry_test.cc"],
     deps = [
-        ":core",
         ":eager_op_rewrite_registry",
-        ":execute",
+        ":mkl_core",
         "//tensorflow/core:lib",
         "//tensorflow/core:no_op_op_lib",
         "//tensorflow/core:test",
         "//tensorflow/core:test_main",
     ],
 )

+# Temporary rule until the circular dependencies issue is resolved.
+# TODO(mabuzain): remove this once original "core" package is fixed.
+cc_library(
+    name = "mkl_core",
+    srcs = [
+        "core.cc",
+        "execute.cc",
+        "execute_node.cc",
+    ],
+    hdrs = [
+        "execute.h",
+        "execute_node.h",
+    ],
+    deps = [
+        ":context",

//tensorflow/core/common_runtime/eager:eager_op_rewrite_registry_test is failing to build internally because a number of dependencies from execute.cc are missing. In particular, a few of them:

//absl/container:inlined_vector
//absl/types:span
//tensorflow/core:framework
//tensorflow/core:framework_internal
//tensorflow/core:lib
//tensorflow/core:lib_internal

There might be more. But maybe start by copying what's in the ":execute" target.

mahmoud-abuzaina

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

[ROCm] Fix for a test regression on the ROCm platform - 200207 - 2

 from tensorflow.python.framework import constant_op
 from tensorflow.python.framework import test_util
 from tensorflow.python.platform import gfile
+from tensorflow.python.platform import test

So I finally found out the cause of the test failure. This import conflicts with from tensorflow.python.eager import test, which is why we are getting the test failure. Could you please change the condition on line 51 to use test_util.IsBuiltWithROCm, or else alias this import?
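The failure mode here, a later import silently shadowing an earlier one of the same name, and the aliasing fix can both be shown with two stdlib modules that export the same function name:

```python
# Both modules export `join`; the second bare import silently replaces
# the first, which is the same collision as the two `test` imports above.
from posixpath import join
from ntpath import join

assert join("a", "b") == "a\\b"   # ntpath won; posixpath.join is gone

# Aliasing each import (the fix suggested in the review) keeps both usable:
from posixpath import join as posix_join
from ntpath import join as nt_join

assert posix_join("a", "b") == "a/b"
assert nt_join("a", "b") == "a\\b"
```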

deven-amd

comment created time in 3 months
