
g0v/moedict-webkit 362

Moedict website

miaout17/hirb-unicode 58

Unicode support for hirb

miaout17/lolize 36

Colorize your ruby output with rainbow

albb0920/big5-ansiart 26

This is a gem for converting Big5-UAO encoded ANSI art to HTML or PNG; it may also work with pure ASCII encoded files.

godfat/dm-is-reflective 23

DataMapper plugin that helps you manipulate an existing database.

miaout17/blogtrans 5

Blog format converter

miaout17/justfuck 4

A x86 Just-In-Time Compiler for Brainfuck

miaout17/dotfiles 3

my dotfiles

miaout17/filedots 2

A simple tool to install/uninstall dotfiles

issue comment tensorflow/tensorflow

Error message is unclear when we fail to load a tflite model.

Sorry for the delayed response, @w4-hyunseok

First, your TensorFlow version seems very old (1.15.0). Could you upgrade to the newest version and try again?

I tried to reproduce the issue with the model you uploaded, but I'm running into an unrelated error:

KeyError: u'Generator/decoder1_14/attention_decoder/rnn/while/Identity_8'

Could you double check if you can reproduce the issue with the same model?

We are working on replacing the old TensorFlow Lite converter backend with a new one, and I think it has a good chance of solving this issue (because we completely reworked the relevant code). To use it, just add the --experimental_new_converter argument to the command line, like:

tflite_convert --experimental_new_converter \
    --saved_model_dir ./tts/exported/more-kss-spk10-tflite \
    --output_file ./tts/tflite/more-kss-spk10.tflite

However, it only works with TensorFlow 2.1.0+ or the tf-nightly pip package.
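
If you're using the Python API instead of the CLI, the equivalent would be roughly the following (a minimal sketch; it assumes the same SavedModel directory as the command above and TF 2.1.0+ / tf-nightly):

import tensorflow as tf

# Paths are taken from the tflite_convert command above.
converter = tf.lite.TFLiteConverter.from_saved_model(
    './tts/exported/more-kss-spk10-tflite')
# Enable the new (MLIR-based) converter backend.
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('./tts/tflite/more-kss-spk10.tflite', 'wb') as f:
  f.write(tflite_model)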

cc @renjie-liu @ashwinmurthy -- It seems a buffer is incorrectly attached to a variable tensor of an op generated by OpHints, but that's in the old converter. I'm hoping the new converter will just solve this issue.

w4-hyunseok

comment created time in 2 days

issue comment tensorflow/tensorflow

tf.lite.converter.convert() Error

I switched to the latest version of TF 2.1 and added the above lines. It created the tflite file perfectly. The experimental_new_converter attribute only works on TF 2.1 and above. Setting it in 2.0 doesn't change anything.

But when I use the GRU model on the board, I get an error message on the serial console of the board.

Only 1 subgraph is currently supported.
AllocateTensors() failed

It seems you're running the microcontroller version of TFLite. At this moment it only supports a subset of TFLite features, and control flow isn't supported yet.

At this moment it's working as intended. @petewarden do you have any comment about GRU support?

pramodjayram

comment created time in 4 days

issue closed tensorflow/tensorflow

Output from TFLite model converted using experimental converter does not match old converter

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
  • TensorFlow installed from (source or binary): pip install tf-nightly
  • TensorFlow version (or github SHA if from source): 2.2.0-dev20200123

EDIT: I am experimenting with converting the SSD MobileNet V2 models. Using the old converter, the model seems to work without any issues. However, the output of the model converted with the new experimental converter does not match. I also tried a Float16 quantized version of the model, and it doesn't match the output of either of the two float32 models.

Command used to run the converter or code if you’re using the Python API

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(frozen_model, input_arrays, output_arrays, input_shapes)
converter.allow_custom_ops = True
converter.experimental_new_converter = True
tflite_model = converter.convert()
open('ssd_mobilenet_v2_float32_experimental.tflite', "wb").write(tflite_model)

The output from the converter invocation: comparison of model output with the old and new experimental converter.

AssertionError: 
Arrays are not almost equal to 5 decimals

Mismatched elements: 40 / 40 (100%)
Max absolute difference: 11.113381
Max relative difference: 3501.543
 x: array([[ -8.2944 , -10.18906,  -9.16781,  -8.85916],
       [ -8.63264,  -9.66667,  -8.13753, -11.32026],
       [ -7.67533,  -9.24478,  -9.12119, -11.07866],...
 y: array([[ 2.46333e-01, -4.78567e-01, -7.21600e-01,  4.30845e-02],
       [ 3.55053e-01,  1.12653e+00, -3.39829e+00, -5.85306e+00],
       [ 7.73718e-01,  4.47561e-01,  1.09934e+00, -1.17987e-02],...

Also, please include a link to the saved model or GraphDef: Colab notebook with model

Failure details: Models converted with the various options all produce different results, as demonstrated in the notebook. I also created a Float16 quantized version of the SSD MobileNet V2 model, but it doesn't produce output with precision similar to the two F32 models.
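
For reference, here's a minimal sketch of the kind of per-output comparison that produces the assertion above (the model paths, input shape, and input data are illustrative placeholders, not taken from the original notebook; the interpreter_1 / interpreter_2 naming follows the fix shown in the reply below):

import numpy as np
import tensorflow as tf

# Hypothetical file names for the two converted models.
interpreter_1 = tf.lite.Interpreter(model_path='ssd_mobilenet_v2_float32_old.tflite')
interpreter_2 = tf.lite.Interpreter(model_path='ssd_mobilenet_v2_float32_experimental.tflite')
interpreter_1.allocate_tensors()
interpreter_2.allocate_tensors()

input_details_1 = interpreter_1.get_input_details()
input_details_2 = interpreter_2.get_input_details()
output_details_1 = interpreter_1.get_output_details()
output_details_2 = interpreter_2.get_output_details()

# Feed the same input to both models (a 300x300 float input is assumed here).
test_input = np.random.rand(1, 300, 300, 3).astype(np.float32)
interpreter_1.set_tensor(input_details_1[0]['index'], test_input)
interpreter_2.set_tensor(input_details_2[0]['index'], test_input)
interpreter_1.invoke()
interpreter_2.invoke()

# Compare the first output tensor of the two models element-wise.
tflite_results_1 = interpreter_1.get_tensor(output_details_1[0]['index'])
tflite_results_2 = interpreter_2.get_tensor(output_details_2[0]['index'])
np.testing.assert_almost_equal(tflite_results_1, tflite_results_2, decimal=5)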

closed time in 11 days

bilalsoomro

issue comment tensorflow/tensorflow

Output from TFLite model converted using experimental converter does not match old converter

We investigated the model and found it's actually working as intended.

Here are the details: The model outputs up to 10 results, but the actual number of results is stored in an output tensor (output 3). The other outputs always allocate enough memory to hold 10 results, but only the first num_results rows are meaningful. The rest of the output buffer is uninitialized.

Below is the code snippet to fix the testing code:

# Get the number of results from the 2 models, and check if it matches. 
tflite_num_results_1 = interpreter_1.get_tensor(output_details_1[3]['index'])[0]
tflite_num_results_2 = interpreter_2.get_tensor(output_details_2[3]['index'])[0]
np.testing.assert_equal(tflite_num_results_1, tflite_num_results_2)
num_results = int(tflite_num_results_1)

# Get the actual results (bounding boxes), slicing out only the first num_results rows.
tflite_results_1 = interpreter_1.get_tensor(output_details_1[0]['index'])[0, :num_results, :]
tflite_results_2 = interpreter_2.get_tensor(output_details_2[0]['index'])[0, :num_results, :]

# Compare the result.
for result_1, result_2 in zip(tflite_results_1, tflite_results_2):
  np.testing.assert_almost_equal(result_1, result_2, decimal=5)

Thanks for trying out the experimental converter. Let us know if you have further questions!

bilalsoomro

comment created time in 11 days

issue comment tensorflow/tensorflow

TFLite metal delegate can't share MTLDevice

(I'm sorry for the delay -- I missed these messages due to my notification settings.)

Did you run into any real problems with this issue on iOS devices? If I understand correctly, MTLCreateSystemDefaultDevice returns a reference to the preferred default Metal device object, so it's not really creating multiple devices when it's called more than once.

If you want to pass an MTLDevice for some reason, I think it's possible to make the change.

Replying to @impjdi's question:

IIRC there are some complexities with ObjC and Swift. How do you feel we should proceed? I think support for this functionality is now up to you guys, from the API's point of view.

I think this is possible. The only constraint is that TFLGpuDelegateOptions needs to be in pure C. We can either define a void * or a forward-declared structure in TFLGpuDelegateOptions to represent the MTLDevice.

stakemura

comment created time in a month

PR closed tensorflow/tensorflow

Reviewers
Fix BUILD_WITH_MMAP error in TFLite Makefile
Labels: awaiting review, cla: yes, comp:lite, size:XS
+2 -2

3 comments

1 changed file

zhuyie

pr closed time in 4 months

pull request comment tensorflow/tensorflow

Fix BUILD_WITH_MMAP error in TFLite Makefile

I'm closing this PR since it's incorrect to simply flip the logic. If you're hitting an issue, please create a bug report with reproduction steps. Thanks a lot!

zhuyie

comment created time in 4 months

pull request comment tensorflow/tensorflow

Fix BUILD_WITH_MMAP error in TFLite Makefile

Thanks for sending the Pull Request.

However, it doesn't look right to disable mmap_allocation when BUILD_WITH_MMAP is true. Could you describe what issue you are running into?

zhuyie

comment created time in 4 months

pull request comment tensorflow/community

RFC: On-Device Training with TensorFlow Lite

@ewilderj Could you help to merge this? Thanks a lot!

miaout17

comment created time in 5 months

push event miaout17/community

Yu-Cheng Ling

commit sha e7fbb740a43849abd9027cc4e2689958565f9fa5

Change the status to accepted

view details

push time in 5 months

pull request comment tensorflow/community

RFC: On-Device Training with TensorFlow Lite

Pasting the summary of the design review notes

  • User-friendly route for Keras
    • Takeaway: Don't target Keras directly at first, because that wouldn't be flexible enough to support non-Keras models.
    • Target the layer below Keras (e.g. SavedModel). Theoretically a Keras model can be converted to a SavedModel, so we should support both well.
  • Getting gradient and optimizer operations
    • Takeaway: Use TF fused gradient ops initially. Do not introduce fused ops into TFLite.
  • Defining (resource) variables in TensorFlow Lite
    • Takeaway: Define the TFLite resource variable in a way that's very similar to TF. Use an integer as the variable ID instead of a key made of the top-level container name, variable name, and type. This makes the conversion very simple and allows the TF ops to map 1:1 to the TFLite ops.
  • (Not) Supporting TensorFlow 1.x
    • Takeaway: We will not support TensorFlow 1.x for on-device training in TFLite.
    • Some features (i.e. control flow and training) are supported only with the new MLIR converter. A few things are required to make this work, including a way to handle variables. Currently we are focused only on TensorFlow v2 variables; only control flow v2 and v2 tensor list ops are supported.
  • Weight saving format
    • Takeaway: The TFLite interpreter should be agnostic to the saving format.
    • It should be possible to support multiple saving formats.
  • Having separate training & inference models, or one model that contains both training & inference functions?
    • Takeaway: Ideally we need the flexibility to support both options, joined and separate.
    • Many existing use cases in TF 1.x use the former; the latter is recommended in TF 2.x.

miaout17

comment created time in 5 months

pull request comment tensorflow/community

RFC: On-Device Training with TensorFlow Lite

@bhack

Thanks a lot for the feedback. Your point is well received. I think it requires a bigger design (essentially the same point applies to inference too), and we can consider having a higher-level API on top of the TFLite interpreter.

miaout17

comment created time in 5 months

Pull request review comment tensorflow/tensorflow

Added Support for different data types.

 TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
   const OpData* op_data = reinterpret_cast<OpData*>(node->user_data);

   const TfLiteTensor* cond = GetInput(context, node, 0);
-  bool cond_value = cond->data.b[0];
+
+  int active_branch_subgraph_index = -1;
+
+  switch (cond->type) {

It should be possible to significantly shorten the code, like...

bool condition;
switch (cond->type) {
  case kTfLiteInt32:
    condition = cond->data.i32[0];
    break;
  case kTfLiteFloat32:
    condition = cond->data.f[0];
    break;
  // omitted...
}
active_branch_subgraph_index = condition
                                   ? op_data->then_subgraph_index
                                   : op_data->else_subgraph_index;

amitsrivastava78

comment created time in 5 months

pull request comment tensorflow/community

RFC: Control Flow in TensorFlow Lite

@cena001plus Sorry that I didn't see your post earlier. This is an RFC for discussing a new design.

For your question, I'd suggest posting it on StackOverflow or GitHub Issues on the TensorFlow repo. Thanks!

miaout17

comment created time in 5 months

pull request comment tensorflow/community

RFC: Control Flow in TensorFlow Lite

@ewilderj Sorry for the long delay; I missed your previous comment.

I updated the status, and here are the review notes:

  • Discussion around control flow V1 support
    • We considered supporting control flow V1 for legacy models.
    • In addition to control flow V1, TensorList is also a problem. Legacy models use TensorArray, but the new conversion flow will only support TensorList.
    • Raising control flow from V1 to V2 isn't always trivial and isn't guaranteed to work. This will be best-effort behavior: the converter will attempt it and error out if unable to.
    • Decision: Focus on TF 2.0 supported ops first (Control Flow V2, TensorList).
  • What control flow ops are we expected to see in v2?
    • If and While. For can be done if needed, but is not generable by the user.
  • Function recursion
    • Not a high priority because TensorFlow doesn't support this either.
    • Not supported for now, but should not be excluded from the design. No real use case for now.

miaout17

comment created time in 5 months

push event miaout17/community

Yu-Cheng Ling

commit sha b7d52ee4d7b81b91097c2d4cb2f20ee02cc6d26d

Change the status to Accepted

view details

push time in 5 months
