
google/gemmlowp 1173

Low-precision matrix multiplication

petewarden/dstk 1051

A collection of the best open data sets and open-source tools for data science

petewarden/c_hashmap 395

A simple string hashmap in C

petewarden/dstkdata 136

The (large) data files needed for the Data Science Toolkit project

petewarden/findbyemail 104

A PHP module that incorporates all known APIs that map an email address to user information

google/stm32_bare_lib 102

System functions and example code for programming the "Blue Pill" STM32-compatible micro-controller boards.

petewarden/buzzprofilecrawl 91

A simple script to crawl Google Profile pages and extract their information as structured data

petewarden/extract_loudest_section 63

Trims .wav audio files to the loudest section of a given length

petewarden/crunchcrawl 40

A project to gather, analyze and visualize the data in Crunchbase

petewarden/catdoc 37

Command-line utility for converting Microsoft Word documents to text

Pull request review comment tensorflow/tensorflow

Cadence HiFi4 Neural Network (NN) Library

+
+/******************************************************************************
+* Copyright (C) 2019 Cadence Design Systems, Inc.

These copyright headers will need to be converted to the standard "TensorFlow Authors" license used elsewhere.

niruyadla

comment created time in 5 days

push event petewarden/Arduino_LSM9DS1

Pete Warden

commit sha 9ff53a86f07b4e56bdde622aa43a8b071e2f239b

Added API to control continuous FIFO mode

view details

push time in 11 days

PR opened arduino-libraries/Arduino_LSM9DS1

Enable continuous FIFO mode

We need this change to run the magic wand TensorFlow example. At the moment we ask users to manually patch the library themselves after downloading, which isn't ideal.

+8 -2

0 comment

2 changed files

pr created time in 14 days

push event petewarden/Arduino_LSM9DS1

Pete Warden

commit sha fe00b8a9c86515f9784760fd0f5327db6e51dc4d

Updated library version

view details

push time in 14 days

push event petewarden/Arduino_LSM9DS1

Pete Warden

commit sha c2621fdf4c5bca859cd7a18230dbdb28df1fe52d

Enable continuous FIFO mode

view details

push time in 14 days

PR closed tensorflow/tensorflow

Implement reference kernel and test for concatenation into TFLu - Uin…

Labels: cla: yes, comp:lite, comp:micro, kokoro:force-run, ready to pull, size:L, stat:awaiting tensorflower

This patch adds support for concatenation (Uint8/Int8), along with a few tests, to TensorFlow Lite Micro. The reference kernel integrated to validate this functionality is the same one used in TensorFlow Lite.

The following 22 tests have been added to validate the functionality:

ConcatTestTwoInputsFourDimensionalAxes0UInt8
ConcatTestTwoInputsFourDimensionalAxes1UInt8
ConcatTestTwoInputsFourDimensionalAxes2UInt8
ConcatTestTwoInputsFourDimensionalAxes3UInt8
ConcatTestTwoInputsFourDimensionalAxesNegativeUInt8
ConcatTestOneInputThreeDimensionalUInt8
ConcatTestTwoInputsThreeDimensionalAxes0UInt8
ConcatTestTwoInputsThreeDimensionalAxes1UInt8
ConcatTestTwoInputsThreeDimensionalAxes2UInt8
ConcatTestThreeInputsThreeDimensionalAxes2UInt8
ConcatTestOneInputFourDimensionalInt8
ConcatTestTwoInputsFourDimensionalAxes0Int8
ConcatTestTwoInputsFourDimensionalAxes1Int8
ConcatTestTwoInputsFourDimensionalAxes2Int8
ConcatTestTwoInputsFourDimensionalAxes3Int8
ConcatTestTwoInputsFourDimensionalAxesNegativeInt8
ConcatTestOneInputThreeDimensionalInt8
ConcatTestTwoInputsThreeDimensionalAxes0Int8
ConcatTestTwoInputsThreeDimensionalAxes1Int8
ConcatTestTwoInputsThreeDimensionalAxes2Int8
ConcatTestThreeInputsThreeDimensionalAxes2Int8

+552 -0

11 comments

5 changed files

gmiodice

pr closed time in 16 days

pull request comment tensorflow/tensorflow

Implement reference kernel and test for concatenation into TFLu - Uin…

Closing this since the main op has been implemented, and we'll try to adopt some of the tests in later changes.

gmiodice

comment created time in 16 days


pull request comment tensorflow/tensorflow

Implement reference kernel for Softmax using CMSIS-NN

Sorry about that, and thanks for the correction! I've reopened this now.

giorgio-arenarm

comment created time in 20 days

pull request comment tensorflow/tensorflow

Implement reference kernel for Softmax using CMSIS-NN

Sorry for the lack of comment, I missed that this was still open! We actually had a parallel thread going adding int8 support to softmax internally, so I think this PR is now redundant? Apologies for the duplication of work, please re-open if I'm incorrect: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/softmax.cc

giorgio-arenarm

comment created time in a month

PR closed tensorflow/tensorflow

Implement reference kernel for Softmax using CMSIS-NN

Labels: awaiting review, cla: yes, comp:lite, comp:micro, size:L
+587 -2

13 comments

4 changed files

giorgio-arenarm

pr closed time in a month

Pull request review comment tensorflow/tensorflow

TFLu: Add stm32f4 and build target

+# Settings for stm32f4 based platforms
+ifeq ($(TARGET), stm32f4)
+  export PATH := $(MAKEFILE_DIR)/downloads/gcc_embedded/bin/:$(PATH)
+  TARGET_ARCH := cortex-m4
+  TARGET_TOOLCHAIN_PREFIX := arm-none-eabi-
+
+  $(eval $(call add_third_party_download,$(GCC_EMBEDDED_URL),$(GCC_EMBEDDED_MD5),gcc_embedded,))
+  $(eval $(call add_third_party_download,$(CMSIS_URL),$(CMSIS_MD5),cmsis,))
+  $(eval $(call add_third_party_download,$(STM32_BARE_LIB_URL),$(STM32_BARE_LIB_MD5),stm32_bare_lib,))
+
+  PLATFORM_FLAGS = \
+    -DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK \
+    -DTF_LITE_STATIC_MEMORY \
+    -DTF_LITE_MCU_DEBUG_LOG \
+    -fno-rtti \
+    -fmessage-length=0 \
+    -fno-exceptions \
+    -fno-unwind-tables \
+    -fno-builtin \
+    -ffunction-sections \
+    -fdata-sections \
+    -funsigned-char \
+    -MMD \
+    -mcpu=cortex-m4 \
+    -mthumb \
+    -std=gnu++11 \
+    -Wvla \
+    -Wall \
+    -Wextra \
+    -Wno-unused-parameter \
+    -Wno-missing-field-initializers \
+    -Wno-write-strings \
+    -Wno-sign-compare \
+    -fno-delete-null-pointer-checks \
+    -fomit-frame-pointer \
+    -fpermissive \
+    -g \
+    -Os
+  CXXFLAGS += $(PLATFORM_FLAGS)
+  CCFLAGS += $(PLATFORM_FLAGS)
+  LDFLAGS += \
+    --specs=nosys.specs \
+    -T $(MAKEFILE_DIR)/targets/stm32f4/stm32f4.lds \
+    -Wl,-Map=$(MAKEFILE_DIR)/gen/$(TARGET).map,--cref \
+    -Wl,--gc-sections
+  BUILD_TYPE := micro
+  MICROLITE_LIBS := \
+    -lm
+  INCLUDES += \
+    -isystem$(MAKEFILE_DIR)/downloads/cmsis/CMSIS/Core/Include/ \
+    -I$(MAKEFILE_DIR)/downloads/stm32_bare_lib/include
+  MICROLITE_CC_SRCS += \
+    $(wildcard $(MAKEFILE_DIR)/downloads/stm32_bare_lib/source/*.c) \
+    $(wildcard $(MAKEFILE_DIR)/downloads/stm32_bare_lib/source/*.cc)
+  EXCLUDED_SRCS := \
+    $(MAKEFILE_DIR)/downloads/stm32_bare_lib/source/debug_log.c
+  MICROLITE_CC_SRCS := $(filter-out $(EXCLUDED_SRCS), $(MICROLITE_CC_SRCS))
+  # Stm32f4 is reusing the bluepill renode scripts for now
+  TEST_SCRIPT := tensorflow/lite/micro/testing/test_bluepill_binary.sh

This PR looks great overall, but I'm confused about how the runtime testing with Renode works. I thought the test_bluepill_binary.sh script invoked Renode with a Cortex M3 target, but it looks like you're building for a Cortex M4? Can you help me understand what's going on here? Thanks!

mansnils

comment created time in a month

pull request comment tensorflow/tensorflow

[TF_MICRO] Adding cwrapper and compiler option example for building library

Thanks @cdknorow! This is tackling an important issue for us, I've emailed you offline to discuss more about these changes.

cdknorow

comment created time in a month

issue opened tensorflow/tensorflow

TensorFlow Lite Micro fully connected int8 test passes illegal filter offset

An external developer pointed out that the test for the quantized fully connected operation passes in a non-zero weight offset to the kernel for int8 tests: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/kernels/fully_connected_test.cc#L118-L119

The quantization specification promises that int8 kernels will always receive zero weight offsets: https://www.tensorflow.org/lite/performance/quantization_spec

This failing test is preventing an optimized kernel for a hardware platform from being accepted.
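
To make the spec's guarantee concrete, here is a sketch of the kind of check an optimized int8 kernel is entitled to perform (hypothetical placement; GetInput and TF_LITE_ENSURE_EQ come from the standard kernel headers, and kWeightsTensor is assumed to be the filter's input index, as in the fully connected kernel):

TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
  const TfLiteTensor* filter = GetInput(context, node, kWeightsTensor);
  if (filter->type == kTfLiteInt8) {
    // int8 weights are symmetrically quantized per the spec, so any
    // non-zero offset (like the one the test passes) is illegal here.
    TF_LITE_ENSURE_EQ(context, filter->params.zero_point, 0);
  }
  return kTfLiteOk;
}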

created time in 2 months

pull request comment tensorflow/tensorflow

[lite/micro] Override operator delete in memory planner

Thanks for your patience, sorry this has taken so long, we're trying to improve the process so it doesn't take so long in the future. Feel free to email me directly at petewarden@google.com if you find yourself stuck with a long-running PR like this again.

csukuangfj

comment created time in 2 months

PR closed tensorflow/tensorflow

Adjust audio_provider for the sparkfun_edge for production boards.

Labels: awaiting review, cla: yes, comp:lite, comp:micro, size:S

Temporary gain emulation, added to deal with too-quiet audio on prototype boards, combines with a fluctuating 'zero' level offset to cause oversaturation on production boards.

cc: @petewarden

+14 -3

5 comments

1 changed file

suphoff

pr closed time in 2 months

pull request comment tensorflow/tensorflow

Adjust audio_provider for the sparkfun_edge for production boards.

Thanks for your patience! We're not going to take this PR, since we're hoping the number of proto boards is pretty small now and newer generations are moving over to digital mics.

suphoff

comment created time in 2 months

issue comment tensorflow/tensorflow

Bug in person_detection tf-lite example

Sorry for the problems! I looked into the PR, and we did merge the changes through a separate mechanism, though the flag is now called use_grayscale instead of input_grayscale: https://github.com/tensorflow/models/blob/master/research/slim/eval_image_classifier.py#L133

Does specifying that help?

xieydd

comment created time in 2 months

PR closed tensorflow/models

Added grayscale support to Slim preprocessing

Labels: cla: yes

This is needed for a person detection model running on an MCU, which only has a grayscale camera.

+281 -76

1 comment

19 changed files

petewarden

pr closed time in 2 months

pull request comment tensorflow/models

Added grayscale support to Slim preprocessing

Closing, since this has been merged internally and pushed separately.

petewarden

comment created time in 2 months

issue comment tensorflow/tensorflow

default installed version of tensorflow lite arduino library is pre-compiled, causing confusing error reports

The Arduino team now have a pending PR which they believe should fix this problem: https://github.com/arduino/arduino-cli/pull/512

Could you take a look and provide feedback? Thanks for your patience!

ladyada

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Add tests in TFLite micro for Float/Uint8/Int8 Tanh activation

 namespace ops {
 namespace micro {
 namespace activations {
+namespace {
+
+enum TanhKernelType {
+  kReference,
+  kGenericOptimized,
+  kFixedPointOptimized,
+};
+
+struct TanhOpData {
+  int32_t input_multiplier = 0;
+  int input_left_shift = 0;
+  int32_t input_range_radius = 0;
+  int diff_min = 0;
+  uint8_t table[256] = {0};

As with the other PR, could you look at doing this without a lookup table, even though it will be slower?

giorgio-arenarm

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Add tests in TFLite micro for Float/Uint8/Int8 Tanh activation

 extern tflite::ErrorReporter* reporter;
     }                                                                         \
   } while (false)
 
+#define TF_LITE_MICRO_EXPECT_NEAR_COUNT(x, y, count, epsilon, max_errs)      \

Great to see this, thanks, it will be useful! Could you add errs_count to the micro_test namespace, like the other variables?
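
For reference, the suggestion amounts to something like this sketch against micro_test.h's existing globals (errs_count is the hypothetical new variable):

namespace micro_test {
extern int tests_passed;
extern int tests_failed;
extern bool is_test_complete;
extern bool did_test_fail;
extern tflite::ErrorReporter* reporter;
// Hypothetical addition, so TF_LITE_MICRO_EXPECT_NEAR_COUNT can track
// how many elements exceeded epsilon:
extern int errs_count;
}  // namespace micro_test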

giorgio-arenarm

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Add tests in TFLite micro for Logistic Int8

 namespace tflite {
 namespace ops {
 namespace micro {
 namespace activations {
-
+namespace {
 constexpr int kInputTensor = 0;
 constexpr int kOutputTensor = 0;
 
-TfLiteStatus Prepare(TfLiteContext* context, TfLiteNode* node) {
+struct OpData {
+  bool use_table;
+  int32_t input_zero_point;
+  int32_t input_range_radius;
+  int32_t input_multiplier;
+  int32_t input_left_shift;
+  int8_t table[256];

Could we actually just call the math directly in the kernel, without using a lookup table? I know this will be potentially very slow, but the goal of the reference implementation is to provide a simple algorithm and not worry about caching.
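
As a rough illustration of the table-free approach being requested, an int8 logistic reference could dequantize each element, apply the sigmoid directly, and requantize (a minimal sketch with stand-in parameter names, not the actual kernel code):

#include <algorithm>
#include <cmath>
#include <cstdint>

// Slow but simple: dequantize -> sigmoid -> requantize, one element at
// a time, with no lookup table.
void LogisticInt8Reference(const int8_t* input_data, int8_t* output_data,
                           int flat_size, float input_scale,
                           int32_t input_zero_point, float output_scale,
                           int32_t output_zero_point) {
  for (int i = 0; i < flat_size; ++i) {
    const float x = input_scale * (input_data[i] - input_zero_point);
    const float y = 1.0f / (1.0f + std::exp(-x));
    const int32_t q = static_cast<int32_t>(std::round(y / output_scale)) +
                      output_zero_point;
    output_data[i] =
        static_cast<int8_t>(std::min<int32_t>(127, std::max<int32_t>(-128, q)));
  }
}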

giorgio-arenarm

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

TFLu: Pointwise subtraction Int8

+/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#include "tensorflow/lite/c/builtin_op_data.h"
+#include "tensorflow/lite/c/common.h"
+#include "tensorflow/lite/kernels/internal/common.h"
+#include "tensorflow/lite/kernels/internal/quantization_util.h"
+#include "tensorflow/lite/kernels/internal/reference/process_broadcast_shapes.h"
+#include "tensorflow/lite/kernels/internal/tensor_ctypes.h"
+#include "tensorflow/lite/kernels/kernel_util.h"
+#include "tensorflow/lite/kernels/op_macros.h"
+
+namespace tflite {
+namespace ops {
+namespace micro {
+namespace sub {
+
+namespace {
+

Can you put these math functions in a new file in tensorflow/lite/kernels/internal/reference/sub.h, remove the duplicate code from tensorflow/lite/kernels/internal/reference/reference_ops.h, and #include the file at the top of reference_ops.h?

That will help us use the same code from TFL mobile and micro, so we aren't increasing the duplication.
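
Concretely, the layout being asked for would look something like this skeleton (illustrative only; the function bodies move over unchanged):

// tensorflow/lite/kernels/internal/reference/sub.h
#ifndef TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_SUB_H_
#define TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_SUB_H_

#include "tensorflow/lite/kernels/internal/common.h"

namespace tflite {
namespace reference_ops {

// The Sub() and broadcast-sub implementations from reference_ops.h
// move here, byte for byte.

}  // namespace reference_ops
}  // namespace tflite

#endif  // TENSORFLOW_LITE_KERNELS_INTERNAL_REFERENCE_SUB_H_

reference_ops.h then keeps its existing callers working by adding #include "tensorflow/lite/kernels/internal/reference/sub.h" at the top.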

giuseros

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Initial port of TF micro to synopsys ARC processors

 endef
 
 # generate the standalone project.
 define generate_microlite_projects
 $(call generate_project,make,$(MAKE_PROJECT_FILES),$(1),$(MICROLITE_CC_SRCS) $(THIRD_PARTY_CC_SRCS) $(2),$(MICROLITE_CC_HDRS) $(THIRD_PARTY_CC_HDRS) $(MICROLITE_TEST_HDRS) $(3),$(LDFLAGS) $(MICROLITE_LIBS),$(CXXFLAGS) $(GENERATED_PROJECT_INCLUDES), $(CCFLAGS) $(GENERATED_PROJECT_INCLUDES))
+ifneq ($(TARGET_ARCH), arc)

Can you remove this ifneq? We'd prefer not to have target-conditional logic at this level.

JaccovG

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

Initial port of TF micro to synopsys ARC processors

 $(PRJDIR)$(3)/$(1)/third_party/%: tensorflow/lite/micro/tools/make/downloads/% t
 	@mkdir -p $$(dir $$@)
 	@cp $$< $$@
 
+ifeq ($(TARGET_ARCH), arc)

Can you break this out into a separate generate_synopsys_project function, like we do for generate_esp_project, rather than modifying generate_project conditionally?

JaccovG

comment created time in 2 months

issue opened tensorflow/tensorflow

Explain how int8 input and output quantization conversion works in TensorFlow Lite

We've had feedback from multiple developers that it's hard to figure out how to calculate the right int8 values for quantized inputs, and understand what int8 values mean as outputs.

For example, when feeding an image to a uint8 quantized input, the values can be left in their source 0 to 255 range. For int8 inputs, the developer will typically need to subtract 128 from each value, but this knowledge (and how the offset value is calculated) is not documented. In the same way, users need to map the -128 to 127 output values back to the actual real-number range of their outputs, but this process is unclear.
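
The missing documentation boils down to an affine mapping using the scale and zero_point stored with each tensor in the .tflite file. A minimal sketch, with hypothetical helper names:

#include <algorithm>
#include <cmath>
#include <cstdint>

// real = scale * (quantized - zero_point), and its inverse.
int8_t QuantizeToInt8(float real_value, float scale, int32_t zero_point) {
  const int32_t q =
      static_cast<int32_t>(std::round(real_value / scale)) + zero_point;
  return static_cast<int8_t>(std::min<int32_t>(127, std::max<int32_t>(-128, q)));
}

float DequantizeFromInt8(int8_t quantized_value, float scale,
                         int32_t zero_point) {
  return scale * (quantized_value - zero_point);
}

For an image input quantized with scale 1/255 and zero_point -128, a 0 to 255 pixel value p quantizes to p - 128, which is where the "subtract 128" rule of thumb comes from.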

Tagging the @tensorflow/micro team.

created time in 2 months

issue comment tensorflow/tensorflow

TensorFlow Lite Micro MaxPool kernel needs int8 support

Tagging @tensorflow/micro

petewarden

comment created time in 2 months

issue comment tensorflow/tensorflow

Calculate arena size for TensorFlow Lite Micro models

Tagging @tensorflow/micro

petewarden

comment created time in 2 months

issue opened tensorflow/tensorflow

TensorFlow Lite Micro MaxPool kernel needs int8 support

TensorFlow Lite for Microcontrollers has a MAXPOOL operation, but it only supports float and uint8 execution, not int8.

created time in 2 months

issue opened tensorflow/tensorflow

Calculate arena size for TensorFlow Lite Micro models

TensorFlow Lite for Microcontrollers doesn't depend on dynamic memory allocation, so it requires users to supply a memory arena when an interpreter is created, as described in this documentation: https://www.tensorflow.org/lite/microcontrollers/get_started

We need a better way to decide 'tensor_arena_size'. Currently, the above page says 'The size required will depend on the model you are using, and may need to be determined by experimentation.'

The simplest solution might be an offline script that returns the size needed to hold a model's activation buffers, but this won't include miscellaneous or platform-dependent allocations in its total. Another approach might be to return the size needed as an integer along with the error report string from the interpreter creation.
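
For context, the workflow from the linked page currently looks roughly like this (a sketch; model, resolver, and error_reporter are assumed to be set up as in the guide):

// The size is found by trial and error today; the proposals above would
// make it a documented, discoverable number instead.
constexpr int kTensorArenaSize = 10 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     kTensorArenaSize, error_reporter);
if (interpreter.AllocateTensors() != kTfLiteOk) {
  // Today the only feedback is failure; reporting the size actually
  // needed here is one of the options suggested above.
  error_reporter->Report("tensor_arena is too small for this model");
}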

created time in 2 months

issue comment tensorflow/tensorflow

default installed version of tensorflow lite arduino library is pre-compiled, causing confusing error reports

I've contacted the Arduino IDE team for help on this, I hope to have more progress to report soon.

ladyada

comment created time in 2 months

issue comment tensorflow/tensorflow

default installed version of tensorflow lite arduino library is pre-compiled, causing confusing error reports

Sorry, I missed this one originally! I will dig into this, since we are keen to have widespread support.

ladyada

comment created time in 2 months

pull request comment tensorflow/tensorflow

Port mul from Tensorflow Lite to Tensorflow Lite Micro

This is an issue with one of our Bazel build rules. I have a one-line fix (adding mul.cc to :portable_optimized_micro_ops in the kernel BUILD file), but longer-term we'll fix this so that we don't have to list this file in two places. We will make this change on our side, no action is needed from you, Jens!

jenselofsson

comment created time in 3 months

pull request comment tensorflow/tensorflow

Port mul from Tensorflow Lite to Tensorflow Lite Micro

This is surprising, since the .cc should automatically be included in the library sources here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/tools/make/Makefile#L93

I'll investigate on our end what's going wrong.

jenselofsson

comment created time in 3 months

pull request comment tensorflow/tensorflow

Port mul from Tensorflow Lite to Tensorflow Lite Micro

Sorry for the delay on this one, I am chasing it up internally!

jenselofsson

comment created time in 4 months

Pull request review comment tensorflow/tensorflow

Implement reference kernel and test for concatenation into TFLu - Uin…

-load("//tensorflow/lite:build_def.bzl", "tflite_copts")

It looks like the formatting got messed up on this file?

gmiodice

comment created time in 4 months

issue opened tensorflow/tensorflow

Pull Requests: Map needed for request workflow

At the moment external contributors don't have any way of knowing what the lifecycle of a pull request is. We have documentation on contributing code, but it's silent on the details of what happens to a request once it's been submitted. Chandni presented a fantastic flowchart at the contributors' summit today that would be a great foundation for documentation explaining the stages that a PR goes through.

This would help external contributors understand what they need to do to successfully submit code to the project, and combined with https://github.com/tensorflow/tensorflow/issues/33801 will give them the visibility they need to be effective TensorFlow developers.

/cc @freddan80 @jenselofsson

created time in 4 months

issue opened tensorflow/tensorflow

Pull Requests: Status information should be available

There is currently no way to tell who needs to take the next action on a pull request, or what state a PR is in. This delays external contributions and frustrates developers. Here is the sort of information that is needed to enable efficient contributions:

  • Who needs to take action? Is it the contributor, a member of the gtech team, or a Google TensorFlow engineer? This should be clearly and publicly visible on the request, so that stakeholders can communicate with the responsible individual.

  • Where is the request in the approval workflow? We'll need a map of the stages involved, and a way to map the current state to each node in the graph.

There are other pieces of information that would be nice to have, but these are essential to shepherding contributions through our process.

/cc @jenselofsson @freddan80

created time in 4 months

issue comment tensorflow/tensorflow

Pull Requests: Trusted committers should be able to approve

/cc @jenselofsson @freddan80

petewarden

comment created time in 4 months

issue comment tensorflow/tensorflow

Pull Requests: Ubuntu CC test is flaky

/cc @freddan80

petewarden

comment created time in 4 months

issue opened tensorflow/tensorflow

Pull Requests: Trusted committers should be able to approve

@jenselofsson is a trusted external committer from Arm, but at least some of the PRs that he has approved have required an additional Google review on GitHub. I would expect that he would be able to review and approve pull requests. For an example, see PR https://github.com/tensorflow/tensorflow/pull/33420

created time in 4 months

issue comment tensorflow/tensorflow

Pull Requests: Ubuntu CC test is flaky

/cc @jenselofsson

petewarden

comment created time in 4 months

issue opened tensorflow/tensorflow

Pull Requests: Ubuntu CC test is flaky

When testing pull requests, the "Ubuntu CC" test seems to never(?) complete. See two examples here:

https://github.com/tensorflow/tensorflow/pull/33492
https://github.com/tensorflow/tensorflow/pull/32168

This makes it hard for contributors to tell if their changes have broken the project, or if it's an unrelated flake (as it seems to be in these cases). This is the most obvious example of the problem, but we see many unrelated failures on the CI tests for PRs.

created time in 4 months

pull request comment tensorflow/tensorflow

Add initial minimal ArmNN delegate plugin.

@jdduke could you take a look at this, since it's in the mobile TensorFlow Lite code?

GeorgeARM

comment created time in 4 months

pull request comment tensorflow/tensorflow

Fix missing activation methods

It would be nice to have unit tests for these activation functions too, but since it gets rid of a blocking bug for some platforms we can take this change as-is. Thanks for the contribution!

kwagyeman

comment created time in 4 months

issue comment tensorflow/tensorflow

TFLite-micro: AllocateTensors produces HardFault even for small models

I did notice in your code snippet that the OpResolver isn't declared static, like the interpreter is. I'm not sure what the rest of your code looks like, but if the OpResolver object has a shorter lifetime than the interpreter then you could end up with mysterious crashes like this. If you look in the examples, you can see we declare resolvers as static in the setup() function:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/hello_world/main_functions.cc#L63
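
Concretely, the pattern from that example looks like this (a sketch assuming file-scope model, tensor_arena, kTensorArenaSize, error_reporter, and interpreter variables, as in the linked main_functions.cc):

void setup() {
  // Both objects are static so they outlive setup(); the interpreter
  // holds a pointer to the resolver, so a stack-allocated resolver
  // would dangle as soon as setup() returns.
  static tflite::ops::micro::AllOpsResolver resolver;
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();
}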

antofara

comment created time in 4 months

issue comment tensorflow/tensorflow

TFLite-micro: AllocateTensors produces HardFault even for small models

Sorry you're hitting this issue! As a debugging step, can you try running your same code on Linux/x86? It would be helpful to know if it works there, and if it does I can suggest some further debugging steps.

antofara

comment created time in 4 months

pull request comment tensorflow/tensorflow

Image recognition example for Tensorflow Lite Micro

Sorry for the slow response. Overall this looks great, thanks! Unfortunately we will have to move the model file outside of the repo, since it's 280KB, and large files like that cause problems because they grow the download for all users.

You can see how we do something similar for the person detection model here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/micro_vision/Makefile.inc#L1

It would also be great if you could update the readme to mention the specific kinds of recognition that this model does ("Plane", "Car", "Bird", "Cat", "Deer", "Dog", "Frog", "Horse", "Ship", "Truck"), and note that it will always try to put whatever it's shown into one of these categories. Could you also rename it to cifar_recognition? I realize I started a confusing trend with "micro_speech"; I should have named it something clearer.

jenselofsson

comment created time in 5 months

push event petewarden/models

Hongkun Yu

commit sha f52b8c93ee0e7dfa6b804706f53355215ced952f

Adds tensorflow-hub

PiperOrigin-RevId: 271873759

view details

Hongkun Yu

commit sha 777107312382e292dadba091de1d2a64998c23b5

Moves activations to official/modeling. Adds a swish activation without customized gradients.

PiperOrigin-RevId: 272029817

view details

Hongkun Yu

commit sha ddee474eb4cb05ed84ed8e6bd026753514d81c39

Internal change

PiperOrigin-RevId: 272043067

view details

George Karpenkov

commit sha 6d6ab9ca95eeaa54d595a165815c9811d26f1c66

Remove dead code from shakespeare_main model

PiperOrigin-RevId: 272077584

view details

David Chen

commit sha 6bbc45dd173e0c17dbc57157bbd63fe27093f492

Internal change

PiperOrigin-RevId: 272121528

view details

Pete Warden

commit sha c0e9829e8e4fc3ccf7177c6db19ee3c21bfd24c2

Added grayscale option to export script

view details

Pete Warden

commit sha c89240021cc533413873a98db3d07e7d8728d9df

Merge branch 'master' of https://github.com/petewarden/models

view details

push time in 5 months

push event petewarden/models

Pete Warden

commit sha e090c8ccf992f46816b80293128358d4ef5a981e

Fixed typo in VGG preprocessing

view details

push time in 5 months

PR opened tensorflow/models

Added grayscale support to Slim preprocessing

This is needed for a person detection model running on an MCU, which only has a grayscale camera.

+76 -22

0 comment

9 changed files

pr created time in 5 months

push event petewarden/models

Pete Warden

commit sha 113ab2b842414af7f45ded1c363e02ab5108212a

Added grayscale support to Slim preprocessing

view details

push time in 5 months

push event petewarden/models

Pete Warden

commit sha 6b1fd0c51ad204c94d24c21442459beca995058f

Updated Visual Wake Words validation name

view details

Pete Warden

commit sha 16e98dca30d779c771ad016a4ba5544c0395b96d

Merge branch 'master' of https://github.com/petewarden/models

view details

push time in 5 months

fork petewarden/models

Models and examples built with TensorFlow

fork in 5 months

started PeteBlackerThe3rd/tflite_analyser

started time in 5 months
