Smit Hinsu (smit-hinsu): Working on @tensorflow at @google.

tensorflow/lingvo 1903


smit-hinsu/distribtued_file_system 3

This repository contains all the lab assignments for the MIT course Distributed Systems (6.824). Link:

smit-hinsu/tensor2tensor 1

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.

smit-hinsu/ 0

Public facing notes page

smit-hinsu/gce-scripts 0

Scripts to deploy VMs and TensorFlow on Google Cloud.

smit-hinsu/models 0

Models built with TensorFlow

smit-hinsu/stanford-tensorflow-tutorials 0

This repository contains code examples for Stanford's course: TensorFlow for Deep Learning Research.

smit-hinsu/subpar 0

Subpar is a utility for creating self-contained Python executables. It is designed to work well with Bazel.

smit-hinsu/tensorboard 0

TensorFlow's Visualization Toolkit

smit-hinsu/tensorflow 0

An Open Source Machine Learning Framework for Everyone

pull request comment tensorflow/tensorflow

[Intel MKL] Fix dequantize accuracy issue and re-enable this OP

tensorflow/core/kernels:mkl_dequantize_op_test seems to be failing, and this PR appears to be a related change.

Please take a look.


comment created time in 3 days

issue comment tensorflow/tensorflow

New Feature: Pascal, Cuda 8, Unified memory

@donglinjy I don't think the UVM configuration is exposed in TF 2.0. Please file a separate feature request for that.

cc @aaroey @jaingaurav


comment created time in 3 months

pull request comment tensorflow/tensorflow

[ROCm] fix CSB build

Sorry about the rollback without any details.

This was causing some integration tests to fail internally. I don't understand the tests, so I have asked @chsigg to follow up with you to help roll the PR forward.


comment created time in 5 months

Pull request review comment tensorflow/tensorflow

Clean-up unused functors

 struct TransformFilter {
   }
 };
-template <typename Device, typename T, typename IndexType>
-struct TransformDepth {
-  void operator()(const Device& d,
-                  typename TTypes<T, 4, IndexType>::ConstTensor in,
-                  const Eigen::DSizes<IndexType, 4>& shuffle,
-                  typename TTypes<T, 4, IndexType>::Tensor out) {
-    Eigen::DSizes<IndexType, 3> merged_dims;
-    Eigen::DSizes<IndexType, 4> expanded_dims;
-    Eigen::DSizes<IndexType, 3> new_shuffle;
-
-    // Merge dimensions that won't be shuffled together to speed things up.
-    if (shuffle[1] == 2 && shuffle[2] == 3) {
-      merged_dims[0] = in.dimension(0);
-      merged_dims[1] = in.dimension(1);
-      merged_dims[2] = in.dimension(2) * in.dimension(3);
-      new_shuffle[0] = shuffle[0];
-      new_shuffle[1] = 2;
-      new_shuffle[2] = shuffle[3];
-      expanded_dims[0] = in.dimension(shuffle[0]);
-      expanded_dims[1] = in.dimension(2);
-      expanded_dims[2] = in.dimension(3);
-      expanded_dims[3] = in.dimension(shuffle[3]);
-    } else if (shuffle[0] == 2 && shuffle[1] == 3) {
-      merged_dims[0] = in.dimension(0);
-      merged_dims[1] = in.dimension(1);
-      merged_dims[2] = in.dimension(2) * in.dimension(3);
-      new_shuffle[0] = 2;
-      new_shuffle[1] = shuffle[2];
-      new_shuffle[2] = shuffle[3];
-      expanded_dims[0] = in.dimension(2);
-      expanded_dims[1] = in.dimension(3);
-      expanded_dims[2] = in.dimension(shuffle[2]);
-      expanded_dims[3] = in.dimension(shuffle[3]);
-    } else if (shuffle[0] == 0 && shuffle[1] == 3 && shuffle[2] == 1 &&
-               shuffle[3] == 2) {
-      merged_dims[0] = in.dimension(0);
-      merged_dims[1] = in.dimension(1) * in.dimension(2);
-      merged_dims[2] = in.dimension(3);
-      new_shuffle[0] = 0;
-      new_shuffle[1] = 2;
-      new_shuffle[2] = 1;
-      expanded_dims[0] = in.dimension(0);
-      expanded_dims[1] = in.dimension(3);
-      expanded_dims[2] = in.dimension(1);
-      expanded_dims[3] = in.dimension(2);
-    } else {
-      assert(false && "unexpected shuffle");
-    }
-
-    out.device(d) =
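The removed functor's core trick is worth noting: when a shuffle keeps two dimensions adjacent and in order, they can be merged into one, so a 4-D transpose becomes a cheaper 3-D one. A minimal sketch of that equivalence (not TensorFlow code; the naive `transpose` helper and the example shapes are hypothetical, chosen only to illustrate the index arithmetic):

```python
# Sketch: merging dimensions that stay adjacent under a shuffle
# turns a 4-D transpose into an equivalent 3-D transpose.
from itertools import product

def transpose(data, dims, perm):
    """Naive dense transpose: `data` is a flat row-major list of shape `dims`."""
    out_dims = [dims[p] for p in perm]
    out = [None] * len(data)

    def flat(index, shape):
        # Row-major flattening of a multi-dimensional index.
        pos = 0
        for i, d in zip(index, shape):
            pos = pos * d + i
        return pos

    for idx in product(*(range(d) for d in dims)):
        out_idx = tuple(idx[p] for p in perm)
        out[flat(out_idx, out_dims)] = data[flat(idx, dims)]
    return out

# Shuffle (0, 3, 1, 2): dims 1 and 2 remain adjacent, so merge them.
dims4 = (2, 3, 4, 5)
data = list(range(2 * 3 * 4 * 5))
full = transpose(data, dims4, (0, 3, 1, 2))

# Same bytes via a 3-D transpose over merged shape (2, 3*4, 5), perm (0, 2, 1).
merged = transpose(data, (2, 12, 5), (0, 2, 1))
assert full == merged
```

Fewer dimensions means fewer index computations per element, which is presumably why the functor merged dimensions before delegating to Eigen's shuffle.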

It is still failing.

What do you think about my earlier proposal of leaving this as-is for now? We can revisit it when I get some time to investigate further.


comment created time in 6 months