Andrew Selle aselle Google Mountain View, CA, USA http://physbam.stanford.edu/~aselle Currently working on @tensorflow at @google. Previously physical simulation, movie making, etc. at @wdas, @lucasfilm, and Stanford.

tensorflow/tensorflow 141337

An Open Source Machine Learning Framework for Everyone

google/stm32_bare_lib 102

System functions and example code for programming the "Blue Pill" STM32-compatible micro-controller boards.

aselle/tensorflow 4

Computation using data flow graphs for scalable machine learning

aselle/brdf 2

BRDF Viewer

aselle/SeExpr 2

SeExpr is a simple expression language that we use to provide artistic control and customization to our core software. We use it for procedural geometry synthesis, image synthesis, simulation control, and much more.

gunan/tensorflow 2

Computation using data flow graphs for scalable machine learning

aselle/bullet3 0

Bullet 2.x official repository with optional experimental Bullet 3 GPU rigid body pipeline

aselle/DenseNet 0

Code for Densely Connected Convolutional Networks (DenseNets)

issue comment tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Ah, it makes sense from your commands what happened: when you run sudo pip install, you install into /usr/local/lib

conda create --name myenv
conda activate myenv
pip install tensorflow (or whatever)

The point is that you need to activate the conda environment. Then you don't need "sudo pip"

batulrangwala

comment created time in a month

issue closed tensorflow/tensorflow

TFlite compilation failing for tf 2.1.0

I am trying to compile the tflite library for an x86 machine. I have tried it using the following script: https://github.com/sourcecode369/tensorflow-1/blob/master/tensorflow/lite/tools/make/build_lib.sh

Before this, I also installed the required dependencies using https://github.com/sourcecode369/tensorflow-1/blob/master/tensorflow/lite/tools/make/download_dependencies.sh

System information

  • OS Platform and Distribution : Linux Ubuntu 18.04
  • TensorFlow installed from : source
  • TensorFlow version : 2.1.0
  • GCC/Compiler version : 7.4.0
  • Bazel version : 2.0.0

Describe the current behavior /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/benchmark/benchmark_performance_options.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/benchmark/benchmark_utils.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/benchmark/benchmark_params.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/profiling/profile_summarizer.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/core/util/stats_calculator.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/command_line_flags.o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/evaluation/utils.o ar: creating /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/lib/benchmark-lib.a g++ -O3 -DNDEBUG -fPIC --std=c++11 -fPIC -DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK -pthread -I. -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/../../../../../ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/../../../../../../ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/eigen -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/absl -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/gemmlowp -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/neon_2_sse -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/farmhash/src -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/flatbuffers/include -I -I/usr/local/include
-o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/benchmark/benchmark_main.o
/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/lib/benchmark-lib.a -lstdc++ -lpthread -lm -lz -ldl g++ -O3 -DNDEBUG -fPIC --std=c++11 -fPIC -DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK -pthread -I. -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/../../../../../ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/../../../../../../ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/ -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/eigen -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/absl -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/gemmlowp -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/neon_2_sse -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/farmhash/src -I/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/downloads/flatbuffers/include -I -I/usr/local/include
-o /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model_performance_options /home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/obj/tensorflow/lite/tools/benchmark/benchmark_tflite_performance_options_main.o
/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/lib/benchmark-lib.a -lstdc++ -lpthread -lm -lz -ldl
/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/lib/benchmark-lib.a(command_line_flags.o): In function `tflite::Flags::Parse(int*, char const**, std::vector<tflite::Flag, std::allocator<tflite::Flag> > const&)':
command_line_flags.cc:(.text+0x57c2): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x57ed): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5997): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x59c8): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5b1d): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x5b48): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5db0): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x5e24): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5ea5): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
collect2: error: ld returned 1 exit status
tensorflow/lite/tools/make/Makefile:295: recipe for target '/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model' failed
make: *** [/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model] Error 1
make: *** Waiting for unfinished jobs....
/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/lib/benchmark-lib.a(command_line_flags.o): In function `tflite::Flags::Parse(int*, char const**, std::vector<tflite::Flag, std::allocator<tflite::Flag> > const&)':
command_line_flags.cc:(.text+0x57c2): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x57ed): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5997): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x59c8): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5b1d): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x5b48): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5db0): undefined reference to `tensorflow::internal::LogMessage::LogMessage(char const*, int, int)'
command_line_flags.cc:(.text+0x5e24): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
command_line_flags.cc:(.text+0x5ea5): undefined reference to `tensorflow::internal::LogMessage::~LogMessage()'
collect2: error: ld returned 1 exit status
tensorflow/lite/tools/make/Makefile:301: recipe for target '/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model_performance_options' failed
make: *** [/home/swati/git_workspace/tensorflow/tensorflow/lite/tools/make/gen/linux_x86_64/bin/benchmark_model_performance_options] Error 1
make: Leaving directory '/home/swati/git_workspace/tensorflow'

Describe the expected behavior

It should successfully compile and build the library.

Code to reproduce the issue

git clone https://github.com/tensorflow/tensorflow
cd tensorflow
./tensorflow/lite/tools/download_dependencies.sh
./tensorflow/lite/tools/make/build_lib.sh

I am new to source compilation and currently unable to understand why this is not working.

closed time in 2 months

SwatiModi

issue comment tensorflow/tensorflow

TFlite compilation failing for tf 2.1.0

This was a regression. Fixed in master with this commit. https://github.com/tensorflow/tensorflow/commit/35095ee07fd63b4722d2b87b4de928c89c5a4845

SwatiModi

comment created time in 2 months

issue comment tensorflow/tensorflow

Converting saved_model to TFLite model using TF 2.0

The easiest way to override the signature is to load the saved model back into tensorflow and then edit the concrete function signature to specify the shape.

Guide to concrete functions here https://www.tensorflow.org/guide/concrete_function

reloaded = tf.saved_model.load(export_dir)
cf = reloaded.signatures['/content/']

Then you can change cf.inputs to provide the shape.

Finally, use the TF Lite converter function from_concrete_functions.
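As a sketch of that flow (the Doubler module, export directory, and shapes below are fabricated purely so the example is self-contained; substitute your own model and the shape you actually need):

```python
import tempfile

import tensorflow as tf  # assumes TF 2.x

# A stand-in model, saved only to make the example runnable end to end.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x + x

export_dir = tempfile.mkdtemp()
module = Doubler()
tf.saved_model.save(module, export_dir,
                    signatures=module.__call__.get_concrete_function())

# Load the saved model back and grab the concrete function for its signature.
reloaded = tf.saved_model.load(export_dir)
cf = reloaded.signatures["serving_default"]

# cf.inputs holds the placeholder tensors; refine the unknown dimension here.
cf.inputs[0].set_shape([1])

# Convert from the now fully-shaped concrete function.
converter = tf.lite.TFLiteConverter.from_concrete_functions([cf])
tflite_model = converter.convert()
```

With a real model you would skip the save step and start directly from tf.saved_model.load(export_dir).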

chauhansaurabhb

comment created time in 2 months

issue comment tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

Try making a new conda environment and installing tflite_runtime again. Show the whole log of the commands you ran and what happened. You can look inside the conda environment's site-packages directory to see if the tflite_runtime files are there. You can also run python3.6 interactively and look at sys.path to see which paths are searched for imports.
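That inspection can be scripted with the standard library alone, for example:

```python
import importlib.util
import sys

def module_origin(name):
    """Return the file a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print("interpreter:", sys.executable)   # which python binary is actually running
print("search path:", sys.path)          # where this interpreter looks for imports
print("tflite_runtime:", module_origin("tflite_runtime"))  # None => not visible here
```

If module_origin("tflite_runtime") prints None, the package was installed into a different interpreter's site-packages than the one you are running.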

batulrangwala

comment created time in 2 months

issue comment tensorflow/tensorflow

tflite crash with segmentfault when I use set_tensor to set input tensor.

I also submitted a fix (above), 5521416, so this now warns you that allocate_tensors() needs to be called first.

woolpeeker

comment created time in 2 months

issue comment tensorflow/tensorflow

tflite crash with segmentfault when I use set_tensor to set input tensor.

Call interpreter.allocate_tensors() before running set_tensor()
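A minimal end-to-end sketch of that call order (the tiny model is fabricated just to make the example self-contained and runnable):

```python
import numpy as np
import tensorflow as tf  # assumes TF 2.x

# A trivial model, built only for illustration.
@tf.function(input_signature=[tf.TensorSpec([1, 2], tf.float32)])
def double(x):
    return x * 2.0

model = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=model)
interpreter.allocate_tensors()  # must happen before set_tensor()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.array([[1.0, 2.0]], dtype=np.float32))
interpreter.invoke()

out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(out)  # [[2. 4.]]
```

Calling set_tensor() before allocate_tensors() leaves the input buffers unallocated, which is what produced the segfault reported here.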

woolpeeker

comment created time in 2 months

issue comment tensorflow/tensorflow

ModuleNotFoundError: No module named 'tflite_runtime'

What exactly are you using to run the inference command? Are you sure it is the same version of Python? I.e., if you installed with pip3 but then run python, you will probably be using Python 2, which doesn't have it installed.

batulrangwala

comment created time in 2 months

issue comment tensorflow/tensorflow

TFLite GPU execution failed

Can you give more details about the device you are trying this on? Are you using a standard model?

joyalbin

comment created time in 2 months

issue comment tensorflow/tensorflow

tensorflow for arm64 issue

How did you build it? You can't import a .a (a static library); you must import a Python extension library, which is usually a .so, .dylib, or .dll.

openedev

comment created time in 2 months

pull request comment tensorflow/tensorflow

Check for memory overflow during tensor allocation

Thanks for the submission. I've reworked this internally based on some refactors, and also optimized the code from overflow.h to skip some of the checks, since we already know the value is unsigned. This should land soon.

joyalbin

comment created time in 2 months

issue comment xesscorp/skidl

Improving performance

Thanks for all your work on Skidl! This is a great improvement.

aselle

comment created time in 2 months

issue comment tensorflow/tensorflow

toco_from_protos: not found - breaking

I could not reproduce this. On linux you shouldn't have to manipulate your path and virtualenv should work.

virtualenv -p python3 ~/py3-for-repro
source ~/py3-for-repro/bin/activate
pip install --upgrade tensorflow==2.0.0
cat > repro_test.py <<EOF
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[1], dtype=tf.float32)])
def simple(x):
  return tf.add(x,x)
converter = tf.lite.TFLiteConverter.from_concrete_functions([simple.get_concrete_function()])
converter.convert()
EOF
python repro_test.py

The only guess I have without more info is that you are not sourcing the virtualenv activate script.

igorhoogerwoord

comment created time in 2 months

issue comment tensorflow/tensorflow

pip install tensorflow-lite PLEASE!

There are many precompiled binaries for various platforms. See the full guide https://www.tensorflow.org/lite/guide/

However, here are more links:

If you just want to run it on a Raspberry Pi or Linux, then you can already do a pip install (https://www.tensorflow.org/lite/guide/python), and the package can be built from the source tree with these instructions: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/pip_package/README.md

If you want to use it on Android you can use the precompiled AAR: https://www.tensorflow.org/lite/guide/android

If you want to use it on iOS you can use a CocoaPod: https://www.tensorflow.org/lite/guide/ios

Tylersuard

comment created time in 3 months

issue closed tensorflow/tensorflow

pip install tensorflow-lite PLEASE!

I'm finding very, very difficult-to-understand information online for how to install TF-lite. Most of it involves cross-compilation and 10+ hours of waiting. Tensorflow installation is easy. Could you please make it so we can install TF-lite by just typing "pip install tensorflow-lite?"

Thanks!

closed time in 3 months

Tylersuard

issue comment tensorflow/tensorflow

Missing `person_detect.tflite` file

Glad your issue is resolved. Thanks @frreiss for the quick answer.

mariusz-r

comment created time in 3 months

issue closed tensorflow/tensorflow

Missing `person_detect.tflite` file

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.2 Tina
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (or github SHA if from source): 80c04b80ad66bf95aa3f41d72a6bba5e84a99622

Files person_detect_model_data.h/.cc in downloads directory (downloaded from here) mention person_detect.tflite file:

// Automatically created from a TensorFlow Lite flatbuffer using a command like:
// xxd -i person_detect.tflite > person_detect_model_data.cc

Is this file available somewhere? Is the full TF model available?

closed time in 3 months

mariusz-r

issue closed tensorflow/tensorflow

Is tensorflow lite malloc free?

I read that TFL Micro is malloc free, but is TensorFlow Lite also malloc free? I'm wondering if it's safe to use inside an audio thread.

closed time in 3 months

mlostekk

issue comment tensorflow/tensorflow

Is tensorflow lite malloc free?

TensorFlow Lite with fixed-shape inference minimizes the mallocs needed when running repeated inferences. In the fixed-shape case, after AllocateTensors() there should be no more mallocs. However, malloc is thread safe, so you can just try it and make sure it works well for your use case.
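Sketched in the Python API (the tiny model is fabricated; the point is that allocate_tensors() happens once, outside the per-inference loop):

```python
import numpy as np
import tensorflow as tf  # assumes TF 2.x

# A trivial fixed-shape model, built only for illustration.
@tf.function(input_signature=[tf.TensorSpec([1], tf.float32)])
def square(x):
    return x * x

model = tf.lite.TFLiteConverter.from_concrete_functions(
    [square.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=model)
interpreter.allocate_tensors()  # one-time allocation, before the hot loop
inp = interpreter.get_input_details()[0]["index"]
out = interpreter.get_output_details()[0]["index"]

# Repeated fixed-shape inferences reuse the buffers allocated above.
for v in (1.0, 2.0, 3.0):
    interpreter.set_tensor(inp, np.array([v], dtype=np.float32))
    interpreter.invoke()
    print(interpreter.get_tensor(out))
```

Whether the invoke path is truly allocation-free for a given model still needs to be verified for your use case, as the comment above says.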

mlostekk

comment created time in 3 months

issue closed tensorflow/tensorflow

AttributeError: 'Module' object has no attribute 'app'

Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS Mojave 10.14.5
  • TensorFlow version (use command below): Tensorflow for poets 2
  • Python version: 2.7.17

Describe the current behavior I tried to run the training for "tensorflow for poets 2" and it shows me this:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Users/fabian/tensorflow-for-poets-2/scripts/retrain.py", line 1326, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
AttributeError: 'module' object has no attribute 'app'

I'm not so familiar with coding, but is there a way to resolve this problem? Thanks

closed time in 3 months

yourntjamdotexe

issue comment tensorflow/tensorflow

AttributeError: 'Module' object has no attribute 'app'

tf.app is a v1 feature. Use tf.compat.v1.app instead.
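For example, a sketch of the one-line change (main here is a stand-in for whatever entry point the script already defines):

```python
import sys

import tensorflow as tf  # assumes TF 2.x

def main(argv):
    # Placeholder for the script's real work.
    print("args:", argv)

if __name__ == "__main__":
    # TF 1.x:  tf.app.run(main=main, argv=[sys.argv[0]])
    # TF 2.x:
    tf.compat.v1.app.run(main=main, argv=[sys.argv[0]])
```

Note that tf.compat.v1.app.run calls sys.exit when main returns, just as tf.app.run did in TF 1.x.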

yourntjamdotexe

comment created time in 3 months

issue comment tensorflow/tensorflow

tf.reduce_mean crashes TensorFlow Lite

Could you please try this on the nightly, which you can install with pip install --upgrade tf-nightly (after uninstalling the regular version)? It works for me on the nightly but not on the release version.

vmarkovtsev

comment created time in 3 months

issue comment tensorflow/tensorflow

toco_from_protos: not found - breaking

How are you adding it to your path?

igorhoogerwoord

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

include comment for kInferencesPerCycle for a Teensy4.0

 limitations under the License.
 #include "tensorflow/lite/experimental/micro/examples/hello_world/constants.h"

 // This is tuned so that a full cycle takes ~4 seconds on an Arduino MKRZERO.
+// For a cycle in ~4 seconds on a Teensy4.0 use kInferencesPerCycle = 400000

How about making a variable for each platform? i.e.

constexpr int kInferencesPerCycle_ArduinoMkrZero = 1000;
constexpr int kInferencesPerCycle_Teensy40 = 400000;
constexpr int kInferencesPerCycle = kInferencesPerCycle_ArduinoMkrZero;
matpalm

comment created time in 4 months
