
vissood/graphql-java-client 0

Java client for GraphQL service

vissood/resources 0

resources

vissood/Useful-commands 0

This repository contains useful developer commands

issue comment tensorflow/java

Tensors created using TFloat16.tensorOf do not have correct output

Thanks @karllessard! Verified the fix #62 and it is working fine. Thanks a lot.

vissood

comment created time in a month

issue opened tensorflow/java

Tensors created using TFloat16.tensorOf do not have correct output


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 2.2.0
  • Python version: 3.6
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the current behavior
A Tensor created using TFloat16.tensorOf does not have the correct values. In the example below the input array has values [0, 1], but TFloat16 yields [0, 0], whereas TFloat32 gives the correct values [0, 1].

Describe the expected behavior
TFloat16.tensorOf should preserve the input values and produce [0, 1].

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

```java
public class Test {

    public static void main(String[] args) {
        float[][] f1 = {{0}, {1}};

        System.out.println(StdArrays.ndCopyOf(f1).getFloat(0, 0));
        System.out.println(StdArrays.ndCopyOf(f1).getFloat(1, 0));

        System.out.println("FLOAT16");
        Tensor<TFloat16> tf_float1 = TFloat16.tensorOf(StdArrays.ndCopyOf(f1));
        System.out.println(tf_float1.data().getFloat(0, 0));
        System.out.println(tf_float1.data().getFloat(1, 0));

        System.out.println("FLOAT32");
        Tensor<TFloat32> tf_float2 = TFloat32.tensorOf(StdArrays.ndCopyOf(f1));
        System.out.println(tf_float2.data().getFloat(0, 0));
        System.out.println(tf_float2.data().getFloat(1, 0));
    }
}
```

OUTPUT

```
0.0
1.0
FLOAT16
Warning: Could not load Loader: java.lang.UnsatisfiedLinkError: no jnijavacpp in java.library.path
Warning: Could not load Pointer: java.lang.UnsatisfiedLinkError: no jnijavacpp in java.library.path
Warning: Could not load BytePointer: java.lang.UnsatisfiedLinkError: no jnijavacpp in java.library.path
0.0
0.0
FLOAT32
0.0
1.0
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
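A side note on the reported values: 0.0 and 1.0 are both exactly representable in IEEE 754 half precision, so the [0, 0] output cannot be explained by float16 rounding; it points at the copy path instead. A minimal sketch (pure Python, using the struct module's binary16 `e` format, available since Python 3.6) illustrating that the round-trip is lossless for these values:

```python
import struct

def through_float16(x):
    # Round-trip a value through IEEE 754 half precision (binary16).
    return struct.unpack('<e', struct.pack('<e', x))[0]

for v in [0.0, 1.0]:
    print(v, '->', through_float16(v))
# prints:
# 0.0 -> 0.0
# 1.0 -> 1.0
```

Since the half-precision round-trip preserves both inputs, correct behavior for TFloat16.tensorOf on this data would be an exact [0, 1].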

created time in 2 months

issue opened tensorflow/java

TFlite support for tensorflow java

Does the TensorFlow Java API support the .tflite file format? I am not able to find any documentation, so any leads in this direction would be helpful.

created time in 2 months

issue opened tensorflow/tensorflow

Getting an error converting a saved model to TFLite - float64 error

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (or github SHA if from source): 2.2.0

Command used to run the converter or code if you’re using the Python API
If possible, please share a link to Colab/Jupyter/any notebook.

```python
converter = tf.lite.TFLiteConverter.from_saved_model(model_file_pb)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

The output from the converter invocation

```
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-05-16 20:04:12.645740: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f98f6906430 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-16 20:04:12.645752: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-05-16 20:04:12.885160: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2020-05-16 20:04:13.539394: I tensorflow/cc/saved_model/loader.cc:183] Running initialization op on SavedModel bundle at path: MODEL/PB/64/tf_i11_v210/
2020-05-16 20:04:13.765676: I tensorflow/cc/saved_model/loader.cc:364] SavedModel load for tags { serve }; Status: success: OK. Took 1183597 microseconds.
error: type of return operand 2 ('tensor<?x400xf32>') doesn't match function result type ('tensor<?x400xf64>')
Traceback (most recent call last):
  File "/Users/visood/opt/anaconda3/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/Users/visood/opt/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/Users/visood/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/Users/visood/opt/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/Users/visood/opt/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/Users/visood/opt/anaconda3/lib/python3.7/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: <unknown>:0: error: type of return operand 2 ('tensor<?x400xf32>') doesn't match function result type ('tensor<?x400xf64>')
<unknown>:0: note: see current operation:
"std.return"(%12, %11, %13, %14, %15) : (tensor<?x400xf64>, tensor<?x1x400xf64>, tensor<?x400xf32>, tensor<?x400xf32>, tensor<f32>) -> ()
```
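The failure in the log is a dtype mismatch: the converter produced a tensor<?x400xf32> where the SavedModel's signature declares tensor<?x400xf64>, suggesting the converter does not handle the model's float64 tensors. A plausible workaround (an assumption here, not confirmed in this thread) is to build or re-export the model entirely in float32, casting any float64 tensors down before conversion. The downcast itself is lossless for values exactly representable in binary32, as this pure-Python sketch (emulating a float64 -> float32 cast with the struct module) shows:

```python
import struct

def downcast_to_float32(x):
    # Emulate a float64 -> float32 cast by packing to binary32 and back.
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Values exactly representable in binary32 survive the downcast unchanged.
print(downcast_to_float32(0.5))   # prints 0.5
# Other values are rounded to the nearest binary32 value (error ~1e-8 here).
print(downcast_to_float32(0.1))
```

For typical activations this rounding is negligible, which is why forcing float32 end to end is a common route when a converter lacks float64 support.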

Also, please include a link to the saved model or GraphDef: https://github.com/vissood/resources/tree/master/TFLITE_ISSUE

Failure details
If the conversion is successful, but the generated model is wrong, state what is wrong:

  • Producing wrong results and/or decrease in accuracy
  • Producing correct results, but the model is slower than expected (model generated from old converter)

RNN conversion support
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

created time in 2 months

push event vissood/resources

Vishal Sood

commit sha ac9b92196691348b1306ab02d80ec8b4d210ed70

added saved model and json config for tflite conversion issue

view details

push time in 2 months

create branch vissood/resources

branch: master

created branch time in 2 months

created repository vissood/resources

resources

created time in 2 months

issue comment tensorflow/java

Tensors.create factory method is not supported

Thanks for the reply! Can you please share what the alternative way is?

vissood

comment created time in 2 months

issue opened tensorflow/java

Tensors.create factory method is not supported

The Java API does not support the Tensors.create() factory method for creating a Tensor: https://www.tensorflow.org/api_docs/java/reference/org/tensorflow/Tensors

created time in 2 months
