Martin Wicke (martinwicke) · San Francisco

tensorflow/skflow 3217

Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning

tensorflow/datasets 2270

TFDS is a collection of datasets ready to use with TensorFlow, Jax, ...

studioml/studio 358

Studio: Simplify and expedite model building process

martinwicke/tf-dev-summit-tensorboard-tutorial 342

Code that accompanies my talk at TF Dev Summit 2016

martinwicke/tensorflow-tutorial 222

A tutorial on TensorFlow

tensorflow/java 102

Java bindings for TensorFlow

eddysystems/eddy 51

eddy - autocorrect for java

tensorflow/tfx-bsl 20

Common code for TFX

tensorflow/java-models 12

Models in Java

push event tensorflow/community

guptapriya

commit sha b275e4c3ecbcda3c09e3aa0f3b9a161675262267

Policy to require public types for API

We want to require all new public APIs to have publicly documented arguments and return values.

view details

guptapriya

commit sha d71d1ccb3afeaf79bb294eb038dacda908e6d704

Updated wording

view details

push time in a day

PR merged tensorflow/community

Policy to require public types for API · cla: yes

We want to require all new public APIs to have publicly documented arguments and return values.

+9 -0

1 comment

1 changed file

guptapriya

pr closed time in a day

pull request comment tensorflow/tensorflow

Fix exception causes in session.py

triggered it now

cool-RR

comment created time in a day

pull request comment tensorflow/tensorflow

Fix exception causes in session.py

@cool-RR We are soooo close to being able to actually use Python 3-only syntax. I'm re-running the tests, and I will merge this if I can, using six as-is.

@ematejska FYI this would make a good test case whether we can really break Py2.
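The PR under discussion is about exception causes, i.e. explicit exception chaining. A minimal sketch of the pattern (illustrative only, not the actual session.py change):

```python
def convert_error(fn):
    """Illustrative only: re-raise a low-level error as a domain error while
    preserving the original as __cause__. This is Python 3's `raise ... from`;
    Python 2/3-compatible code spells the same thing with six.raise_from."""
    try:
        return fn()
    except KeyError as e:
        # Without `from e`, the original traceback context is easy to lose.
        raise ValueError("lookup failed") from e

try:
    convert_error(lambda: {}["missing"])
except ValueError as err:
    # The original KeyError survives as the cause.
    assert isinstance(err.__cause__, KeyError)
```

`raise ... from` is exactly the Python 3-only syntax the comment above alludes to; six.raise_from exists to emulate it while Python 2 still has to be supported.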

cool-RR

comment created time in 2 days

issue comment tensorflow/tensorflow

PEP 484 Type Annotations (feature request)

@mdanatg I think type hints should be acceptable now? Or do we need to wait for the type classes?
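For context, a minimal sketch of the PEP 484 annotations the feature request asks for (illustrative code, not from the TensorFlow codebase):

```python
from typing import List

def scale(values: List[float], factor: float = 2.0) -> List[float]:
    """Illustrative PEP 484 annotations: parameter and return types."""
    return [v * factor for v in values]

assert scale([1.0, 2.0]) == [2.0, 4.0]
# Annotations are plain runtime metadata, inspectable by tools:
assert scale.__annotations__["return"] == List[float]
```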

ed-alertedh

comment created time in 2 days

pull request comment tensorflow/community

Add a note about Python versions

Minimum isn't sufficient. For instance, 3.6.9 will happily accept `def test(async=False): ...`, but 3.8 will break.

The point is that your code has to work in the range. If it doesn't, we might reject it. We should find most problems in testing, so I don't think this policy would create much confusion.
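The `async` example can be checked mechanically. A small sketch, assuming a Python 3.7+ interpreter (where `async` became a reserved keyword):

```python
# Code that was valid on Python 3.6 fails to even compile on 3.7+,
# because `async` is now a reserved keyword.
import sys

snippet = "def test(async=False): pass"
try:
    compile(snippet, "<string>", "exec")
    ok = True
except SyntaxError:
    ok = False

assert sys.version_info >= (3, 7), "sketch assumes a modern interpreter"
assert ok is False  # rejected at compile time, not at call time
```

This is why testing only the minimum version misses the problem: the breakage appears at the newer end of the supported range.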

martinwicke

comment created time in 19 days

push event tensorflow/community

Martin Wicke

commit sha bc2f2a15b66fd2f0adeefc307e3bdf1c8caa6550

Update api-reviews.md

view details

push time in 19 days

pull request comment tensorflow/community

Add a note about Python versions

@karmel: The issue is that we don't actually test all versions. I don't want to separately encode the supported versions; I guess I should point at https://www.tensorflow.org/install.

martinwicke

comment created time in 19 days

Pull request review comment tensorflow/community

Add a note about Python versions

the comments to get an answer.

## High level points

### Python Versions

TensorFlow supports a range of Python versions, and changes need to be compatible with all of them. This means that language features not available in TensorFlow's minimum supported version cannot be used.

We regularly reconsider the range of supported versions based on the number of

As seen with Python 2, these are mostly aspirational. Realistically, we will make these decisions based on users, not timelines.

martinwicke

comment created time in 19 days

push event tensorflow/community

Martin Wicke

commit sha 69a2229371f234a034c82e41076865f469c3919e

Update governance/api-reviews.md

Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>

view details

push time in 19 days

push event tensorflow/community

Martin Wicke

commit sha 6da78154f8cb38da04c4f021c852fc7231d27595

Update api-reviews.md

view details

push time in 19 days

PR opened tensorflow/community

Add a note about Python versions

Clarify how Python version compatibility is handled. This is mostly covered by testing, since we do run tests with the minimum version, but not entirely (as seen with 3.8 and `async`).

+13 -0

0 comments

1 changed file

pr created time in 20 days

create branch tensorflow/community

branch: martinwicke-patch-2

created branch time in 20 days

pull request comment tensorflow/community

RFC: TensorFlow on DirectML

With the pluggable device, your device will be usable from the official builds, and that's the key. As soon as we have that (even at head or in a nightly) we should advertise it loudly. Your code will still live elsewhere, but it won't be a fork, i.e. it won't be mutually exclusive with an official installation of TensorFlow.

wchao1115

comment created time in 25 days

pull request comment tensorflow/community

RFC: TensorFlow on DirectML

We would like to avoid advertising forks, especially 1.15 forks. I think once we make some progress on integration we should absolutely consider a joint blog post.

wchao1115

comment created time in 25 days

commit comment event

Pull request review comment tensorflow/community

RFC: Adding Pluggable Device For TensorFlow

# **Pluggable device for TensorFlow**

| Status        | Proposed                                             |
:-------------- |:---------------------------------------------------- |
| **RFC #**     | [262](https://github.com/tensorflow/community/pull/262)|
| **Author(s)** | Zhoulong Jiang (zhoulong.jiang@intel.com), Yiqiang Li (yiqiang.li@intel.com), Eric Lin (eric.lin@intel.com), Jianhui Li (jian.hui.li@intel.com) |
| **Sponsor**   | Anna Revinskaya (annarev@google.com)                 |
| **Updated**   | 2020-06-24                                           |

## **Objective**

Implement a pluggable device mechanism which allows running existing TensorFlow programs on a new device without the user changing the code. Users only need to install a plugin in a specified directory, and the mechanism is able to discover and plug in the capabilities offered by the plugin.

This RFC is based on the Modular TensorFlow [RFC](https://github.com/tensorflow/community/pull/77), which aims to extend the TensorFlow design to plugin capabilities like adding new device support. The modular device interface is based on the StreamExecutor C API [RFC](https://github.com/tensorflow/community/pull/257).

## **Motivation**

When extending TensorFlow to support a new device, one needs to modify TensorFlow code and maintain a special TensorFlow build for the new device. The Modular TensorFlow RFC designs a plugin architecture for several TensorFlow components (`Networking`, `Filesystems`, `Kernel`, `Graph` and `Accelerator backends`). This RFC describes the Accelerator backends module on the TensorFlow proper side, introducing a pluggable device into the TensorFlow device classes.

The pluggable device discovery and initialization is transparent to end users. As long as the device plugin libraries follow the design described in this RFC, they can be plugged into TensorFlow proper, enabling TensorFlow to run existing TensorFlow programs on a new device.

## **User Benefit**

This RFC allows TensorFlow to transparently run TensorFlow programs on new devices, as long as users set up the system properly by installing the device plugin.

## **Design Proposal**

### Design Overview

This RFC extends the TensorFlow device class hierarchy to add a standardized pluggable device named `PluggableDevice`, which is built on top of [StreamExecutor](https://github.com/tensorflow/tensorflow/blob/e5023a1738cce7efcdf9d87863b85c80ab2f8c9e/tensorflow/stream_executor/stream_executor_pimpl.h#L73). All new third-party devices that want to integrate with the current TensorFlow stack only need to implement the StreamExecutor C API (shown in Diagram 1).

<div align=center>
<img src=20200624-pluggable-device-for-tensorflow/design_overview.png>
</div>

* `PluggableDevice` is defined in TensorFlow proper and inherits from [LocalDevice](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/common_runtime/local_device.h). It is built on top of the StreamExecutor C++ interface to manage `PluggableDevice`'s key abstractions like StreamExecutor, stream, memory and event.

* `PluggableDeviceExecutor` implements [StreamExecutor](https://github.com/tensorflow/tensorflow/blob/e5023a1738cce7efcdf9d87863b85c80ab2f8c9e/tensorflow/stream_executor/stream_executor_pimpl.h#L73) and is built on top of the StreamExecutor C API (addressed in [RFC](https://github.com/tensorflow/community/pull/257)).

* `PluggableDevice Implementation` is inside the TensorFlow plugin, which provides the implementations of the C functions defined in the StreamExecutor C API.

The pluggable device mechanism contains a device discovery and creation process, which creates a `PluggableDevice` object and a `PluggableDeviceExecutor` object for each pluggable device.

With this RFC, existing TensorFlow GPU programs can run on a plugged device without the user changing the code. Diagram 2 describes the workflow of TensorFlow with a device plugin; it shows how a simple GPU program runs on the pluggable device.

<div align="center">
<img src=20200624-pluggable-device-for-tensorflow/gpu_example.png>
</div>

### Device Discovery

Upon initialization, TensorFlow uses the platform-independent `LoadLibrary()` to load the dynamic library. The plugin library should be installed in the default plugin directory "…python_dir.../site-packages/tensorflow-plugins". The Modular TensorFlow [RFC](https://github.com/tensorflow/community/pull/77) describes the process of loading plugins.

During plugin library initialization, the plugin calls the `SE_RegisterPlatform()` API to register the stream executor platform (an `SE_Platform` struct) with TensorFlow proper. `SE_RegisterPlatform()` is a callback API, part of the StreamExecutor C API, which passes the necessary information to TensorFlow proper to instantiate a stream executor platform ([se::platform](https://github.com/tensorflow/tensorflow/blob/cb32cf0f0160d1f582787119d0480de3ba8b9b53/tensorflow/stream_executor/platform.h#L93) class) and register it with the global object [se::MultiPlatformManager](https://github.com/tensorflow/tensorflow/blob/cb32cf0f0160d1f582787119d0480de3ba8b9b53/tensorflow/stream_executor/multi_platform_manager.h#L82). The stream executor platform must be registered under the name "PluggableDevice".

The code below is an example of registering a PluggableDevice platform with the StreamExecutor C API:
```cpp
void RegisterPluggableDevicePlatform() {
  static int plugin_id_value = 123;  // was missing its type in the draft
  SE_PlatformId id;
  id.id = &plugin_id_value;
  // was missing the call parentheses in the draft
  int visible_device_count = get_plugin_device_count();
  SE_Platform* custom_platform = SE_NewPlatform(
     id, visible_device_count,
     create_device, create_stream_executor,
     delete_device, delete_stream_executor);
  TF_Status* status = TF_NewStatus();
  std::string name = "PluggableDevice";
  SE_RegisterPlatform(
     name.c_str(), name.size(),
     custom_platform,
     status);
}
```
Use static initialization to register the new platform:

Static initializers have a lot of unclear semantics, so if we can avoid them and do a dlsym based thing here, that would be safer.
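To illustrate the suggestion: a dlsym-based approach resolves a well-known entry point by name and calls it explicitly, instead of relying on code that runs implicitly at library load time. A hedged sketch using Python's ctypes against the current process (the real mechanism would be C, via dlopen/dlsym, and a TF plugin would export something like an `SE_InitPlugin` symbol, a name assumed here for illustration; `strlen` stands in as a resolvable symbol). POSIX-only:

```python
import ctypes

# dlopen(NULL): a handle to the symbols already loaded into this process.
handle = ctypes.CDLL(None)

# "dlsym" the entry point by name. The host chooses when to call it,
# rather than a static initializer running behind its back at load time.
entry = getattr(handle, "strlen")
entry.restype = ctypes.c_size_t
entry.argtypes = [ctypes.c_char_p]

assert entry(b"plugin") == 6
```

The host stays in control of initialization order, which is exactly what load-time static initializers make unclear.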

jzhoulon

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

# Release 2.3.0

## Major Features and Improvements

* `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources:
  * [snapshot](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot)
  * [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service).

  In addition, check out the detailed [guide](https://www.tensorflow.org/guide/data_performance_analysis) for analyzing input pipeline performance with TF Profiler.

* [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is now a stable API and no longer considered experimental (earlier `tf.distribute.experimental.TPUStrategy`).

* TF Profiler introduces two new tools: a memory profiler to visualize your model's memory usage over time and a Python tracer which allows you to trace Python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.

* Introduces experimental support for the Keras Preprocessing Layers API ([`tf.keras.layers.experimental.preprocessing.*`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing?version=nightly)) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.

* TFLite now properly supports dynamic shapes during conversion and inference. We've also added opt-in support on Android and iOS for [XNNPACK](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack), a highly optimized set of CPU kernels, as well as opt-in support for [executing quantized models on the GPU](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/gpu_advanced.md#running-quantized-models-experimental).

* Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.

## Breaking Changes

* Increases the **minimum bazel version** required to build TF to **3.1.0**.
* `tf.data`
  * Makes the following (breaking) changes to `tf.data`:
    * C++ API: `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual and subclasses are now expected to provide an implementation.
    * The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.
    * Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.
  * The signature of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` has been extended with a `SerializationContext` argument to enable overriding the default policy for the handling of external state during iterator checkpointing. This is not a backwards compatible change and all subclasses of `IteratorBase` *need to be updated* accordingly.
* `tf.keras`
  * Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this [tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for details on how to use the callback.
* `tf.image.extract_glimpse` has been updated to correctly process the case where `centered=False` and `normalized=False`. This is a breaking change as the output is different from (incorrect) previous versions. Note this breaking change only impacts the `tf.image.extract_glimpse` and `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of `tf.compat.v1.image.extract_glimpse` does not change. The behavior of the existing C++ kernel `ExtractGlimpse` does not change either, so saved models will not be impacted.

## Bug Fixes and Other Changes

### TF Core:
  * Set `tf2_behavior` to 1 to enable V2 for early loading cases.
  * Add a function to dynamically choose the implementation based on underlying device placement.
  * Eager:
    * Add `reduce_logsumexp` benchmark with experimental compile.
    * Give `EagerTensor`s a meaningful `__array__` implementation.
    * Add another version of defun matmul for performance analysis.
  * `tf.function`/AutoGraph:
    * `AutoGraph` now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.
    * Functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`. This calling convention is now the preferred way to use concrete functions with nested values and composite tensors. Please check the [guide](https://www.tensorflow.org/guide/concrete_function) for more details on `concrete_function`.
    * Update `tf.function`'s `experimental_relax_shapes` to handle composite tensors appropriately.
    * Optimize `tf.function` invocation by removing a redundant list converter.
    * `tf.function` will retrace when called with a different variable instead of simply using the `dtype` & `shape`.
    * [Improve support](https://github.com/tensorflow/tensorflow/issues/33862) for dynamically-sized TensorArray inside `tf.function`.
  * `tf.math`:
    * Narrow down the `argmin`/`argmax` contract to always return the smallest index for ties.
    * `tf.math.reduce_variance` and `tf.math.reduce_std` return correct computation for complex types and no longer support integer types.
    * Add Bessel functions of order 0, 1 to `tf.math.special`.
    * `tf.divide` now always returns a tensor to be consistent with documentation and other APIs.
  * `tf.image`:
    * Replaces [`tf.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded?hl=en&version=nightly) with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be neglected. Existing usage with single inputs should still work as before.
  * `tf.linalg`
    * Add `tf.linalg.banded_triangular_solve`.
  * `tf.random`:
    * Add `tf.random.stateless_parameterized_truncated_normal`.
  * `tf.ragged`:
    * Add `tf.ragged.cross` and `tf.ragged.cross_hashed` operations.
  * `tf.RaggedTensor`:
    * `RaggedTensor.to_tensor()` now preserves static shape.
    * Add `tf.strings.format()` and `tf.print()` to support RaggedTensors.
  * `tf.saved_model`:
    * `@tf.function` from SavedModel no longer ignores args after a `RaggedTensor` when selecting the concrete function to run.
    * Fix saved model issue for ops with a list of functions.
    * Add `tf.saved_model.LoadOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) as an arg with default value `None` to choose the I/O device for loading models and weights.
    * Update `tf.saved_model.SaveOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions?version=nightly) as an arg with default value `None` to choose the I/O device for saving models and weights.
  * GPU
    * No longer includes PTX kernels for GPU except for sm_70 to reduce binary size.
  * Profiler
    * Fix a subtle use-after-free issue in `XStatVisitor::RefValue()`.
  * Others
    * Retain parent namescope for ops added inside `tf.while_loop`/`tf.cond`/`tf.switch_case`.
    * Update `tf.vectorized_map` to support vectorizing `tf.while_loop` and TensorList operations.
    * `tf.custom_gradient` can now be applied to functions that accept nested structures of `tensors` as inputs (instead of just a list of tensors). Note that Python structures such as tuples and lists now won't be treated as tensors, so if you still want them to be treated that way, you need to wrap them with `tf.convert_to_tensor`.
    * No lowering on gradient case op when input is `DeviceIndex` op.
    * Fix in c_api `DEFINE_GETATTR`.
    * Extend the ragged version of `tf.gather` to support `batch_dims` and `axis` args.
    * Update `tf.map_fn` to support RaggedTensors and SparseTensors.
    * Deprecate `tf.group`. It is not useful in eager mode.
    * Add a new variant of `FTRL` allowing a learning rate of zero.

### `tf.data`:
  * `tf.data.experimental.dense_to_ragged_batch` works correctly with tuples.
  * `tf.data.experimental.dense_to_ragged_batch` to output variable ragged rank.
  * `tf.data.experimental.cardinality` is now a method on `tf.data.Dataset`.
  * `tf.data.Dataset` now supports `len(Dataset)` when the cardinality is finite.

### `tf.distribute`:
  * Expose experimental [`tf.distribute.DistributedDataset`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedDataset) and [`tf.distribute.DistributedIterator`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedIterator) to distribute input data when using `tf.distribute` to scale training on multiple devices.
    * Added a `get_next_as_optional` method for the `tf.distribute.DistributedIterator` class to return a `tf.experimental.Optional` instance that contains the next value for all replicas or none, instead of raising an out-of-range error. Also see the *new* [guide on input distribution](https://www.tensorflow.org/tutorials/distribute/input).
  * Allow `var.assign` on `MirroredVariables` with `aggregation=NONE` in replica context. Previously this would raise an error since there was no way to confirm that the values being assigned to the `MirroredVariables` were in fact identical.
  * `tf.distribute.experimental.MultiWorkerMirroredStrategy` adds support for partial batches. Workers running out of data now continue to participate in the training with empty inputs, instead of raising an error.
  * Improve the performance of reading metrics eagerly under `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  * Fix the issue that `strategy.reduce()` inside `tf.function` may raise exceptions when the values to reduce are from loops or if-clauses.
  * Fix the issue that `tf.distribute.MirroredStrategy` cannot be used together with `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  * Add a `tf.distribute.cluster_resolver.TPUClusterResolver.connect` API to simplify TPU initialization.

### `tf.keras`:
  * Introduces experimental preprocessing layers API (`tf.keras.layers.experimental.preprocessing`) to handle data preprocessing operations such as categorical feature encoding, text vectorization, data normalization, and data discretization (binning). The newly added layers provide a replacement for the legacy feature column API, and support composite tensor inputs.
  * Added **categorical data** processing layers:
    * `IntegerLookup` & `StringLookup`: build an index of categorical feature values
    * `CategoryEncoding`: turn integer-encoded categories into one-hot, multi-hot, or tf-idf encoded representations
    * `CategoryCrossing`: create new categorical features representing co-occurrences of previous categorical feature values
    * `Hashing`: the hashing trick, for large-vocabulary categorical features
    * `Discretization`: turn continuous numerical features into categorical features by binning their values
  * Improved **image preprocessing** layers: `CenterCrop`, `Rescaling`
  * Improved **image augmentation** layers: `RandomCrop`, `RandomFlip`, `RandomTranslation`, `RandomRotation`, `RandomHeight`, `RandomWidth`, `RandomZoom`, `RandomContrast`
  * Improved **`TextVectorization`** layer, which handles string tokenization, n-gram generation, and token encoding
    * The `TextVectorization` layer now accounts for the `mask_token` as part of the vocabulary size when `output_mode='int'`. This means that, if you have a `max_tokens` value of 5000, your output will have 5000 unique values (not 5001 as before).
    * Change the return value of `TextVectorization.get_vocabulary()` from `byte` to `string`. Users who previously were calling 'decode' on the output of this method should no longer need to do so.
  * Introduce new Keras dataset generation utilities:
    * **[`image_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)** is a utility based on `tf.data.Dataset`, meant to replace the legacy `ImageDataGenerator`. It takes you from a structured directory of images to a labeled dataset, in one function call. Note that it doesn't perform image data augmentation (which is meant to be done using preprocessing layers).
    * **[`text_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory)** takes you from a structured directory of text files to a labeled dataset, in one function call.
    * **[`timeseries_dataset_from_array`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/timeseries_dataset_from_array)** is a `tf.data.Dataset`-based replacement of the legacy `TimeseriesGenerator`. It takes you from an array of timeseries data to a dataset of shifting windows with their targets.
  * Added an [`experimental_steps_per_execution`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=nightly#compile) arg to `model.compile` to indicate the number of batches to run per `tf.function` call. This can speed up Keras Models on TPUs up to 3x.
  * Extends `tf.keras.layers.Lambda` layers to support multi-argument lambdas, and keyword arguments when calling the layer.
  * Functional models now get constructed if *any* tensor in a layer call's arguments/keyword arguments comes from a Keras input. Previously the functional API would only work if all of the elements in the first argument to the layer came from a Keras input.
  * Clean up the `BatchNormalization` layer's `trainable` property to act like standard Python state when it's used inside `tf.function`s (frozen at tracing time), instead of acting like a pseudo-variable whose updates *kind of sometimes* get reflected in already-traced `tf.function` traces.
  * Add the `Conv1DTranspose` layer.
  * Fix bug in `SensitivitySpecificityBase` derived metrics.
  * Blacklist Case op from callback

what does this do?

tensorflow-jenkins

comment created time in a month

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

 # Release 2.3.0 +## Major Features and Improvements+  * `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources:+    * [snapshot](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot)+    * [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service). ++  In addition checkout the detailed [guide](https://www.tensorflow.org/guide/data_performance_analysis) for analyzing input pipeline performance with TF Profiler.++  * [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is now a stable API and no longer considered experimental for TensorFlow. (earlier `tf.distribute.experimental.TPUStrategy`).++  * TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time and a python tracer which allows you to trace python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.++  * Introduces experimental support for Keras Preprocessing Layers API ([`tf.keras.layers.experimental.preprocessing.*`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing?version=nightly)) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.+  +  * TFLite now properly supports dynamic shapes during conversion and inference. We’ve also added opt-in support on Android and iOS for [XNNPACK](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack), a highly optimized set of CPU kernels, as well as opt-in support for [executing quantized models on the GPU](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/gpu_advanced.md#running-quantized-models-experimental). ++  * Libtensorflow packages are available in GCS starting this release. 
We have also started to release a nightly version of these packages. + ## Breaking Changes+* Increases the **minimum bazel version** required to build TF to **3.1.0**.+* `tf.data`+  *  Makes the following (breaking) changes to the `tf.data`.+    * C++ API: - `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual and subclasses are now expected to provide an implementation.+    * The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.+    * Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.+  * The signature of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` has been extended with `SerializationContext` argument to enable overriding the default policy for the handling external state during iterator checkpointing. This is not a backwards compatible change and all subclasses of `IteratorBase` *need to be updated* accordingly.+* `tf.keras`+    * Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this [tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for details on how to use the callback. +* `tf.image.extract_glimpse` has been updated to correctly process the case+   where `centered=False` and `normalized=False`. This is a breaking change as+   the output is different from (incorrect) previous versions. Note this+   breaking change only impacts `tf.image.extract_glimpse` and+   `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of+   `tf.compat.v1.image.extract_glimpse` does not change. 
The behavior of+   exsiting C++ kernel `ExtractGlimpse` does not change as well, so saved+   models will not be impacted.++## Bug Fixes and Other Changes++### TF Core:+  * Set `tf2_behavior` to 1 to enable V2 for early loading cases.+  * Add a function to dynamically choose the implementation based on underlying device placement.+  * Eager:+    * Add `reduce_logsumexp` benchmark with experiment compile.+    * Give `EagerTensor`s a meaningful `__array__` implementation.+    * Add another version of defun matmul for performance analysis.+  * `tf.function`/AutoGraph:+    * `AutoGraph` now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.+    * functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`.  This calling convention is now the preferred way to use concrete functions with nested values and composite tensors. 
    Please check the [guide](https://www.tensorflow.org/guide/concrete_function) for more details on `concrete_function`.
    * Update `tf.function`'s `experimental_relax_shapes` to handle composite tensors appropriately.
    * Optimize `tf.function` invocation by removing a redundant list converter.
    * `tf.function` will retrace when called with a different variable, instead of simply using the `dtype` & `shape`.
    * [Improve support](https://github.com/tensorflow/tensorflow/issues/33862) for dynamically-sized TensorArray inside `tf.function`.
  * `tf.math`:
    * Narrow down the `argmin`/`argmax` contract to always return the smallest index for ties.
    * `tf.math.reduce_variance` and `tf.math.reduce_std` return correct computation for complex types and no longer support integer types.
    * Add Bessel functions of order 0 and 1 to `tf.math.special`.
    * `tf.divide` now always returns a tensor, to be consistent with documentation and other APIs.
  * `tf.image`:
    * Replaces [`tf.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded?hl=en&version=nightly) with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be neglected. Existing usage with single inputs should still work as before.
  * `tf.linalg`:
    * Add `tf.linalg.banded_triangular_solve`.
  * `tf.random`:
    * Add `tf.random.stateless_parameterized_truncated_normal`.
  * `tf.ragged`:
    * Add the `tf.ragged.cross` and `tf.ragged.cross_hashed` operations.
  * `tf.RaggedTensor`:
    * `RaggedTensor.to_tensor()` now preserves static shape.
    * Add `tf.strings.format()` and `tf.print()` support for RaggedTensors.
  * `tf.saved_model`:
    * `@tf.function` from SavedModel no longer ignores args after a `RaggedTensor` when selecting the concrete function to run.
    * Fix a SavedModel issue for ops with a list of functions.
    * Add `tf.saved_model.LoadOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) as an arg (default `None`) to choose the I/O device for loading models and weights.
    * Update `tf.saved_model.SaveOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions?version=nightly) as an arg (default `None`) to choose the I/O device for saving models and weights.
  * GPU:
    * PTX kernels are no longer included for GPUs other than sm_70, to reduce binary size.
  * Profiler:
    * Fix a subtle use-after-free issue in `XStatVisitor::RefValue()`.
  * Others:
    * Retain the parent namescope for ops added inside `tf.while_loop`/`tf.cond`/`tf.switch_case`.
    * Update `tf.vectorized_map` to support vectorizing `tf.while_loop` and TensorList operations.
    * `tf.custom_gradient` can now be applied to functions that accept nested structures of tensors as inputs (instead of just a list of tensors). Note that Python structures such as tuples and lists are no longer treated as tensors; if you still want them treated that way, wrap them with `tf.convert_to_tensor`.
    * No lowering on the gradient case op when the input is a `DeviceIndex` op.
    * Fix in c_api `DEFINE_GETATTR`.
    * Extend the ragged version of `tf.gather` to support the `batch_dims` and `axis` args.
    * Update `tf.map_fn` to support RaggedTensors and SparseTensors.
    * Deprecate `tf.group`; it is not useful in eager mode.
    * Add a new variant of `FTRL` allowing a learning rate of zero.

### `tf.data`:
  * `tf.data.experimental.dense_to_ragged_batch` works correctly with tuples.
  * `tf.data.experimental.dense_to_ragged_batch` can output variable ragged rank.
  * `tf.data.experimental.cardinality` is now a method on `tf.data.Dataset`.
  * `tf.data.Dataset` now supports `len(Dataset)` when the cardinality is finite.

### `tf.distribute`:
  * Expose the experimental [`tf.distribute.DistributedDataset`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedDataset) and [`tf.distribute.DistributedIterator`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedIterator) to distribute input data when using `tf.distribute` to scale training on multiple devices.
    * Added a `get_next_as_optional` method to the `tf.distribute.DistributedIterator` class, which returns a `tf.experimental.Optional` containing the next value for all replicas, or none, instead of raising an out-of-range error. Also see the *new* [guide on input distribution](https://www.tensorflow.org/tutorials/distribute/input).
  * Allow `var.assign` on `MirroredVariables` with `aggregation=NONE` in replica context. Previously this would raise an error, since there was no way to confirm that the values being assigned to the `MirroredVariables` were in fact identical.
  * `tf.distribute.experimental.MultiWorkerMirroredStrategy` adds support for partial batches. Workers that run out of data now continue to participate in training with empty inputs, instead of raising an error.
  * Improve the performance of reading metrics eagerly under `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  * Fix the issue that `strategy.reduce()` inside `tf.function` may raise exceptions when the values to reduce come from loops or if-clauses.
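The `argmin`/`argmax` tie-breaking contract mentioned in the notes above can be illustrated with a pure-Python sketch; `argmin_smallest_index` is a hypothetical helper name for illustration, not a TensorFlow API.

```python
def argmin_smallest_index(values):
    """Index of the minimum; ties resolve to the smallest index.

    Pure-Python illustration of the contract described in the notes
    for tf.math.argmin -- not TensorFlow code.
    """
    best = 0
    for i, v in enumerate(values):
        if v < values[best]:  # strict '<' keeps the earliest index on ties
            best = i
    return best
```

With this contract, `argmin_smallest_index([3, 1, 2, 1])` returns index 1 rather than 3, even though both positions hold the minimum value.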

Do we have issue IDs for all of these? It would be great to include them where they exist... This might be an action item for the script that collects them in the first place -- if the commit adding the item had an issue attached to it, add the issue number (or complain if it wasn't added).

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

# Release 2.3.0

## Major Features and Improvements
  * `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources:
    * [snapshot](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot)
    * [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service)

  In addition, check out the detailed [guide](https://www.tensorflow.org/guide/data_performance_analysis) for analyzing input pipeline performance with TF Profiler.

  * [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is now a stable API and no longer considered experimental (formerly `tf.distribute.experimental.TPUStrategy`).
  * TF Profiler introduces two new tools: a memory profiler to visualize your model's memory usage over time, and a Python tracer which allows you to trace Python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.
  * Introduces experimental support for the Keras Preprocessing Layers API ([`tf.keras.layers.experimental.preprocessing.*`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing?version=nightly)) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.
  * TFLite now properly supports dynamic shapes during conversion and inference. We've also added opt-in support on Android and iOS for [XNNPACK](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack), a highly optimized set of CPU kernels, as well as opt-in support for [executing quantized models on the GPU](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/gpu_advanced.md#running-quantized-models-experimental).
  * Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.

## Breaking Changes
* Increases the **minimum bazel version** required to build TF to **3.1.0**.
* `tf.data`
  * Makes the following (breaking) changes to `tf.data`:
    * C++ API: `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual, and subclasses are now expected to provide an implementation.
    * The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.
    * Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.
  * The signatures of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` have been extended with a `SerializationContext` argument to enable overriding the default policy for handling external state during iterator checkpointing. This is not a backwards-compatible change, and all subclasses of `IteratorBase` *need to be updated* accordingly.
* `tf.keras`
  * Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this [tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for details on how to use the callback.
* `tf.image.extract_glimpse` has been updated to correctly process the case where `centered=False` and `normalized=False`. This is a breaking change, as the output is different from (incorrect) previous versions. Note this breaking change only impacts the `tf.image.extract_glimpse` and `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of `tf.compat.v1.image.extract_glimpse` does not change. The behavior of the existing C++ kernel `ExtractGlimpse` does not change either, so saved models will not be impacted.

## Bug Fixes and Other Changes

### TF Core:
  * Set `tf2_behavior` to 1 to enable V2 for early loading cases.
  * Add a function to dynamically choose the implementation based on underlying device placement.
  * Eager:
    * Add a `reduce_logsumexp` benchmark with experimental compile.
    * Give `EagerTensor`s a meaningful `__array__` implementation.
    * Add another version of defun matmul for performance analysis.
  * `tf.function`/AutoGraph:
    * AutoGraph now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.
    * Functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`. This calling convention is now the preferred way to use concrete functions with nested values and composite tensors. Please check the [guide](https://www.tensorflow.org/guide/concrete_function) for more details on `concrete_function`.

### `tf.distribute`:
  * Fix the issue that `tf.distribute.MirroredStrategy` cannot be used together with `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  * Add a `tf.distribute.cluster_resolver.TPUClusterResolver.connect` API to simplify TPU initialization.

### `tf.keras`:
  * Introduces an experimental preprocessing layers API (`tf.keras.layers.experimental.preprocessing`) to handle data preprocessing operations such as categorical feature encoding, text vectorization, data normalization, and data discretization (binning). The newly added layers provide a replacement for the legacy feature column API, and support composite tensor inputs.
  * Added **categorical data** processing layers:
    * `IntegerLookup` & `StringLookup`: build an index of categorical feature values
    * `CategoryEncoding`: turn integer-encoded categories into one-hot, multi-hot, or tf-idf encoded representations
    * `CategoryCrossing`: create new categorical features representing co-occurrences of previous categorical feature values
    * `Hashing`: the hashing trick, for large-vocabulary categorical features
    * `Discretization`: turn continuous numerical features into categorical features by binning their values
  * Improved **image preprocessing** layers: `CenterCrop`, `Rescaling`
  * Improved **image augmentation** layers: `RandomCrop`, `RandomFlip`, `RandomTranslation`, `RandomRotation`, `RandomHeight`, `RandomWidth`, `RandomZoom`, `RandomContrast`
  * Improved **`TextVectorization`** layer, which handles string tokenization, n-gram generation, and token encoding:
    * The `TextVectorization` layer now accounts for the `mask_token` as part of the vocabulary size when `output_mode='int'`. This means that, if you have a `max_tokens` value of 5000, your output will have 5000 unique values (not 5001 as before).
    * Change the return value of `TextVectorization.get_vocabulary()` from `byte` to `string`. Users who previously were calling `decode` on the output of this method should no longer need to do so.
  * Introduce new Keras dataset generation utilities:
    * **[`image_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)** is a utility based on `tf.data.Dataset`, meant to replace the legacy `ImageDataGenerator`. It takes you from a structured directory of images to a labeled dataset, in one function call. Note that it doesn't perform image data augmentation (which is meant to be done using preprocessing layers).
    * **[`text_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory)** takes you from a structured directory of text files to a labeled dataset, in one function call.
    * **[`timeseries_dataset_from_array`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/timeseries_dataset_from_array)** is a `tf.data.Dataset`-based replacement of the legacy `TimeseriesGenerator`. It takes you from an array of timeseries data to a dataset of shifting windows with their targets.
  * Added an [`experimental_steps_per_execution`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=nightly#compile) arg to `model.compile` to indicate the number of batches to run per `tf.function` call. This can speed up Keras Models on TPUs by up to 3x.
  * Extends `tf.keras.layers.Lambda` layers to support multi-argument lambdas, and keyword arguments when calling the layer.
  * Functional models are now constructed if *any* tensor in a layer call's arguments/keyword arguments comes from a Keras input. Previously, the functional API would only work if all of the elements in the first argument to the layer came from a Keras input.
  * Clean up the `BatchNormalization` layer's `trainable` property to act like standard Python state when it's used inside `tf.function`s (frozen at tracing time), instead of acting like a pseudo-variable whose updates *kind of sometimes* get reflected in already-traced `tf.function` traces.
  * Add the `Conv1DTranspose` layer.
  * Fix a bug in `SensitivitySpecificityBase`-derived metrics.
  * Blacklist the Case op from callbacks.

### `tf.lite`:
  * Converter
    * Restored the `inference_input_type` and `inference_output_type` flags in the TF 2.x TFLiteConverter (backward compatible with TF 1.x) to support integer (`tf.int8`, `tf.uint8`) input and output types in post-training full-integer quantized models.
    * Added support for converting and resizing models with dynamic (placeholder) dimensions. Previously, there was only limited support for dynamic batch size, and even that did not guarantee that the model could be properly resized at runtime.
  * CPU
    * Fix an issue with dynamic weights and `Conv2D` on x86.
    * Add a runtime Android flag for enabling `XNNPACK` for optimized CPU performance.
    * Add a runtime iOS flag for enabling `XNNPACK` for optimized CPU performance.
    * Add a compiler flag to enable building a TFLite library that applies the `XNNPACK` delegate automatically when the model has an `fp32` operation.
  * GPU
    * Allow GPU acceleration starting with internal graph nodes.
    * Experimental support for quantized models with the Android GPU delegate.
    * Add a GPU delegate whitelist.
    * Rename GPU whitelist -> compatibility (list).
    * Improve GPU compatibility list entries from crash reports.
  * NNAPI
    * Set the default value for `StatefulNnApiDelegate::Options::max_number_delegated_partitions` to 3.
    * Add the capability to disable `NNAPI` CPU and check `NNAPI` Errno.
    * Fix crashes when using `NNAPI` with a target accelerator specified, with a model containing Conv2d, FullyConnected, or LSTM nodes with quantized weights.
    * Fix `ANEURALNETWORKS_BAD_DATA` execution failures with `sum`/`max`/`min`/`reduce` operations with `scalar` inputs.
  * Hexagon
    * The TFLite Hexagon Delegate is out of experimental.
    * Experimental `int8` support for most Hexagon ops.
    * Experimental per-channel quantization support for `conv` in the Hexagon delegate.
    * Support dynamic batch size in the C++ API.
  * CoreML
    * Open-source the CoreML delegate.
  * Misc
    * Enable building Android TFLite targets on Windows.
    * Add support for `BatchMatMul`.
    * Add support for `half_pixel_centers` with `ResizeNearestNeighbor`.
    * Add 3D support for `BatchToSpaceND`.
    * Add 5D support for `BroadcastSub`, `Maximum`, `Minimum`, `Transpose`, and `BroadcastDiv`.
    * Rename `kTfLiteActRelu1` to `kTfLiteActReluN1To1`.
    * Enable the flex delegate in the tensorflow.lite.Interpreter Python package.
    * Add `Buckettize`, `SparseCross`, and `BoostedTreesBucketize` to the flex whitelist.
    * Add support for selective registration of flex ops.
    * Add missing kernels for flex-delegate whitelisted ops.
    * Fix an issue when using direct `ByteBuffer` inputs with graphs that have dynamic shapes.
    * Fix error checking of supported operations in a model containing `HardSwish`.

### TPU Enhancements
  * 3D mesh support

Is there a link we could add?
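The `get_next_as_optional` behavior described in these release notes can be sketched in pure Python; this is an analogy to the documented contract, not the `tf.distribute` implementation, and the pair-return shape here is a simplification of `tf.experimental.Optional`.

```python
def get_next_as_optional(iterator):
    # Analogy to the documented DistributedIterator.get_next_as_optional
    # contract: at the end of the data, return an "empty optional" instead
    # of raising an out-of-range error. Here the optional is modeled as a
    # (has_value, value) pair.
    try:
        return True, next(iterator)
    except StopIteration:
        return False, None
```

Callers check `has_value` each step rather than wrapping the training loop in exception handling, which is what makes partial batches across workers representable.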

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

  * Allow `var.assign` on `MirroredVariables` with `aggregation=NONE` in replica context. Previously this would raise an error since there was no way to confirm that the values being assigned to the `MirroredVariables` were in fact identical.

There still isn't a way to confirm this, right? Can we mention why we did this, or point to a doc explaining why you would use this?
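The point under discussion can be made concrete with a pure-Python sketch; `MirroredValueSketch` is a hypothetical stand-in for illustration, not a `tf.distribute` class.

```python
class MirroredValueSketch:
    # Hypothetical sketch of the discussed behavior change: with
    # aggregation=NONE, assign() stores each replica's value as given,
    # with no attempt to verify or reconcile the per-replica values.
    def __init__(self, num_replicas):
        self.values = [None] * num_replicas

    def assign(self, replica_id, value):
        # aggregation=NONE: trust the caller; no identity check is performed.
        self.values[replica_id] = value
```

As the sketch shows, nothing stops the replicas from being assigned different values; the change simply moves responsibility for keeping them identical onto the caller.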

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

 # Release 2.3.0 +## Major Features and Improvements+  * `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources:+    * [snapshot](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot)+    * [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service). ++  In addition checkout the detailed [guide](https://www.tensorflow.org/guide/data_performance_analysis) for analyzing input pipeline performance with TF Profiler.++  * [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is now a stable API and no longer considered experimental for TensorFlow. (earlier `tf.distribute.experimental.TPUStrategy`).++  * TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time and a python tracer which allows you to trace python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.++  * Introduces experimental support for Keras Preprocessing Layers API ([`tf.keras.layers.experimental.preprocessing.*`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing?version=nightly)) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.+  +  * TFLite now properly supports dynamic shapes during conversion and inference. We’ve also added opt-in support on Android and iOS for [XNNPACK](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack), a highly optimized set of CPU kernels, as well as opt-in support for [executing quantized models on the GPU](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/gpu_advanced.md#running-quantized-models-experimental). ++  * Libtensorflow packages are available in GCS starting this release. 
We have also started to release a nightly version of these packages. + ## Breaking Changes+* Increases the **minimum bazel version** required to build TF to **3.1.0**.+* `tf.data`+  *  Makes the following (breaking) changes to the `tf.data`.+    * C++ API: - `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual and subclasses are now expected to provide an implementation.+    * The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.+    * Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.+  * The signature of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` has been extended with `SerializationContext` argument to enable overriding the default policy for the handling external state during iterator checkpointing. This is not a backwards compatible change and all subclasses of `IteratorBase` *need to be updated* accordingly.+* `tf.keras`+    * Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this [tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for details on how to use the callback. +* `tf.image.extract_glimpse` has been updated to correctly process the case+   where `centered=False` and `normalized=False`. This is a breaking change as+   the output is different from (incorrect) previous versions. Note this+   breaking change only impacts `tf.image.extract_glimpse` and+   `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of+   `tf.compat.v1.image.extract_glimpse` does not change. 
The behavior of the existing C++ kernel `ExtractGlimpse` does not change either, so saved models will not be impacted.

## Bug Fixes and Other Changes

### TF Core:

* Set `tf2_behavior` to 1 to enable V2 for early loading cases.
* Add a function to dynamically choose the implementation based on underlying device placement.
* Eager:
  * Add `reduce_logsumexp` benchmark with experimental compile.
  * Give `EagerTensor`s a meaningful `__array__` implementation.
  * Add another version of defun matmul for performance analysis.
* `tf.function`/AutoGraph:
  * AutoGraph now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.
  * Functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`. This calling convention is now the preferred way to use concrete functions with nested values and composite tensors.
    Please check the [guide](https://www.tensorflow.org/guide/concrete_function) for more details on `concrete_function`.
  * Update `tf.function`'s `experimental_relax_shapes` to handle composite tensors appropriately.
  * Optimize `tf.function` invocation by removing a redundant list converter.
  * `tf.function` will retrace when called with a different variable, instead of simply relying on its `dtype` and `shape`.
  * [Improve support](https://github.com/tensorflow/tensorflow/issues/33862) for dynamically-sized TensorArray inside `tf.function`.
* `tf.math`:
  * Narrow down the `argmin`/`argmax` contract to always return the smallest index for ties.
  * `tf.math.reduce_variance` and `tf.math.reduce_std` return correct computation for complex types and no longer support integer types.
  * Add Bessel functions of order 0 and 1 to `tf.math.special`.
  * `tf.divide` now always returns a tensor, to be consistent with documentation and other APIs.
* `tf.image`:
  * Replaces [`tf.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded?hl=en&version=nightly) with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be neglected.
    Existing usage with single inputs should still work as before.
* `tf.linalg`:
  * Add `tf.linalg.banded_triangular_solve`.
* `tf.random`:
  * Add `tf.random.stateless_parameterized_truncated_normal`.
* `tf.ragged`:
  * Add `tf.ragged.cross` and `tf.ragged.cross_hashed` operations.
* `tf.RaggedTensor`:
  * `RaggedTensor.to_tensor()` now preserves static shape.
  * Add `tf.strings.format()` and `tf.print()` support for RaggedTensors.
* `tf.saved_model`:
  * `@tf.function` from SavedModel no longer ignores args after a `RaggedTensor` when selecting the concrete function to run.
  * Fix SavedModel save issue for ops with a list of functions.
  * Add `tf.saved_model.LoadOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) as an arg with default value `None` to choose the I/O device for loading models and weights.
  * Update `tf.saved_model.SaveOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions?version=nightly) as an arg with default value `None` to choose the I/O device for saving models and weights.
* GPU:
  * No longer includes PTX kernels for GPU except for sm_70, to reduce binary size.
* Profiler:
  * Fix a subtle use-after-free issue in `XStatVisitor::RefValue()`.
* Others:
  * Retain parent namescope for ops added inside `tf.while_loop`/`tf.cond`/`tf.switch_case`.
  * Update `tf.vectorized_map` to support vectorizing `tf.while_loop` and TensorList operations.
  * `tf.custom_gradient` can now be applied to functions that accept nested structures of tensors as inputs (instead of just a list of tensors). Note that Python structures such as tuples and lists are no longer treated as tensors, so if you still want them to be treated that way, you need to wrap them with `tf.convert_to_tensor`.
  * No lowering on gradient case op when input is `DeviceIndex` op.
  * Fix in c_api `DEFINE_GETATTR`.

What fix?

tensorflow-jenkins

comment created time in a month
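As an aside on the `tf.math` item in the quoted notes: the tightened `argmin`/`argmax` contract (ties resolve to the smallest index) can be sketched in a few lines of plain Python. This is only an illustration of the documented semantics with a made-up helper name, not TensorFlow's kernel:

```python
def argmax_first(values):
    """Return the index of the maximum value; ties resolve to the
    smallest index, mirroring the documented tf.math.argmax contract."""
    best_index = 0
    for i, v in enumerate(values):
        if v > values[best_index]:  # strict '>' keeps the earliest tie
            best_index = i
    return best_index

print(argmax_first([2.0, 7.0, 7.0, 1.0]))  # -> 1, the first of the tied maxima
```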

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

  * Profiler

Probably shouldn't be grouped with "Core"

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

    * Extend the ragged version of `tf.gather` to support `batch_dims` and `axis` args.
    * Update `tf.map_fn` to support RaggedTensors and SparseTensors.
    * Deprecate `tf.group`. It is not useful in eager mode.
    * Add a new variant of `FTRL` allowing a learning rate of zero.

Does it replace the existing one? Is it new? Link?

tensorflow-jenkins

comment created time in a month
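The `batch_dims` argument added to the ragged `tf.gather` in the quoted notes follows the usual gather-with-batch-dimensions semantics: indices are looked up within each batch row rather than across the whole tensor. A plain-Python sketch of the `batch_dims=1` case (an illustrative helper, not the TensorFlow implementation):

```python
def gather_batch_dims_1(params, indices):
    """Sketch of gather with batch_dims=1: for each batch row of
    `params`, select the elements at that row's `indices`."""
    return [[row[i] for i in idx] for row, idx in zip(params, indices)]

params = [[10, 20, 30], [40, 50, 60]]
indices = [[2, 0], [1, 1]]
print(gather_batch_dims_1(params, indices))  # -> [[30, 10], [50, 50]]
```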

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

### `tf.data`:

* `tf.data.experimental.dense_to_ragged_batch` works correctly with tuples.
* `tf.data.experimental.dense_to_ragged_batch` to output variable ragged rank.
* `tf.data.experimental.cardinality` is now a method on `tf.data.Dataset`.
* `tf.data.Dataset` now supports `len(Dataset)` when the cardinality is finite.

### `tf.distribute`:

* Expose experimental [`tf.distribute.DistributedDataset`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedDataset) and [`tf.distribute.DistributedIterator`](https://www.tensorflow.org/api_docs/python/tf/distribute/DistributedIterator) to distribute input data when using `tf.distribute` to scale training on multiple devices.
  * Added a `get_next_as_optional` method to the `tf.distribute.DistributedIterator` class to return a `tf.experimental.Optional` instance that contains the next value for all replicas, or none, instead of raising an out-of-range error. Also see the *new* [guide on input distribution](https://www.tensorflow.org/tutorials/distribute/input).
* Allow `var.assign` on `MirroredVariables` with `aggregation=NONE` in replica context. Previously this would raise an error, since there was no way to confirm that the values being assigned to the `MirroredVariables` were in fact identical.
* `tf.distribute.experimental.MultiWorkerMirroredStrategy` adds support for partial batches. Workers running out of data now continue to participate in the training with empty inputs, instead of raising an error.
* Improve the performance of reading metrics eagerly under `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
* Fix the issue that `strategy.reduce()` inside `tf.function` may raise exceptions when the values to reduce are from loops or if-clauses.
* Fix the issue that `tf.distribute.MirroredStrategy` cannot be used together with `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
* Add a `tf.distribute.cluster_resolver.TPUClusterResolver.connect` API to simplify TPU initialization.

### `tf.keras`:

* Introduces an experimental preprocessing layers API (`tf.keras.layers.experimental.preprocessing`) to handle data preprocessing operations such as categorical feature encoding, text vectorization, data normalization, and data discretization (binning). The newly added layers provide a replacement for the legacy feature column API, and support composite tensor inputs.
* Added **categorical data** processing layers:
  * `IntegerLookup` & `StringLookup`: build an index of categorical feature values
  * `CategoryEncoding`: turn integer-encoded categories into one-hot, multi-hot, or TF-IDF encoded representations
  * `CategoryCrossing`: create new categorical features representing co-occurrences of previous categorical feature values
  * `Hashing`: the hashing trick, for large-vocabulary categorical features
  * `Discretization`: turn continuous numerical features into categorical features by binning their values
* Improved **image preprocessing** layers: `CenterCrop`, `Rescaling`
* Improved **image augmentation** layers: `RandomCrop`, `RandomFlip`, `RandomTranslation`, `RandomRotation`, `RandomHeight`, `RandomWidth`, `RandomZoom`, `RandomContrast`
* Improved **`TextVectorization`** layer, which handles string tokenization, n-gram generation, and token encoding
  * The `TextVectorization` layer now accounts for the `mask_token` as part of the vocabulary size when `output_mode='int'`. This means that, if you have a `max_tokens` value of 5000, your output will have 5000 unique values (not 5001 as before).
  * Change the return value of `TextVectorization.get_vocabulary()` from `byte` to `string`. Users who previously were calling `decode` on the output of this method should no longer need to do so.
* Introduce new Keras dataset generation utilities:
  * **[`image_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image_dataset_from_directory)** is a utility based on `tf.data.Dataset`, meant to replace the legacy `ImageDataGenerator`. It takes you from a structured directory of images to a labeled dataset, in one function call. Note that it doesn't perform image data augmentation (which is meant to be done using preprocessing layers).
  * **[`text_dataset_from_directory`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory)** takes you from a structured directory of text files to a labeled dataset, in one function call.
  * **[`timeseries_dataset_from_array`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/timeseries_dataset_from_array)** is a `tf.data.Dataset`-based replacement for the legacy `TimeseriesGenerator`. It takes you from an array of timeseries data to a dataset of shifting windows with their targets.
* Added [`experimental_steps_per_execution`](https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=nightly#compile) arg to `model.compile` to indicate the number of batches to run per `tf.function` call. This can speed up Keras Models on TPUs up to 3x.
* Extends `tf.keras.layers.Lambda` layers to support multi-argument lambdas, and keyword arguments when calling the layer.
* Functional models now get constructed if *any* tensor in a layer call's arguments/keyword arguments comes from a Keras input. Previously the functional API would only work if all of the elements in the first argument to the layer came from a Keras input.
* Clean up the `BatchNormalization` layer's `trainable` property to act like standard Python state when it's used inside `tf.function`s (frozen at tracing time), instead of acting like a pseudo-variable whose updates *kind of sometimes* get reflected in already-traced `tf.function` traces.
* Add the `Conv1DTranspose` layer.
* Fix bug in `SensitivitySpecificityBase` derived metrics.

which bug?

tensorflow-jenkins

comment created time in a month
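The `TextVectorization` vocabulary change noted in the hunk above (reserved tokens now count toward `max_tokens`, so `output_mode='int'` yields exactly `max_tokens` distinct values, not `max_tokens + 1`) can be sketched in plain Python. This is an illustrative sketch only: `build_vocab` and `RESERVED` are made-up names, not TensorFlow API.

```python
from collections import Counter

# Illustrative reserved tokens: mask token "" and OOV token "[UNK]".
# Under the updated accounting they consume slots of max_tokens.
RESERVED = ["", "[UNK]"]

def build_vocab(tokens, max_tokens):
    # Only max_tokens - len(RESERVED) slots remain for real vocabulary entries,
    # so the total number of distinct output indices equals max_tokens.
    budget = max_tokens - len(RESERVED)
    most_common = [t for t, _ in Counter(tokens).most_common(budget)]
    return RESERVED + most_common

vocab = build_vocab(["a", "b", "a", "c", "b", "a"], max_tokens=4)
print(vocab)  # ['', '[UNK]', 'a', 'b'] — 4 distinct indices in total
```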

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

 # Release 2.3.0

## Major Features and Improvements
  * `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources:
    * [snapshot](https://www.tensorflow.org/api_docs/python/tf/data/experimental/snapshot)
    * [tf.data service](https://www.tensorflow.org/api_docs/python/tf/data/experimental/service).

  In addition, check out the detailed [guide](https://www.tensorflow.org/guide/data_performance_analysis) for analyzing input pipeline performance with TF Profiler.

  * [`tf.distribute.TPUStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/TPUStrategy) is now a stable API and no longer considered experimental for TensorFlow (earlier `tf.distribute.experimental.TPUStrategy`).

  * TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time and a python tracer which allows you to trace python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.

  * Introduces experimental support for Keras Preprocessing Layers API ([`tf.keras.layers.experimental.preprocessing.*`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing?version=nightly)) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.

  * TFLite now properly supports dynamic shapes during conversion and inference. We’ve also added opt-in support on Android and iOS for [XNNPACK](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/delegates/xnnpack), a highly optimized set of CPU kernels, as well as opt-in support for [executing quantized models on the GPU](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/gpu_advanced.md#running-quantized-models-experimental).

  * Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.

## Breaking Changes
* Increases the **minimum bazel version** required to build TF to **3.1.0**.
* `tf.data`
  * Makes the following (breaking) changes to `tf.data`:
    * C++ API: `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual and subclasses are now expected to provide an implementation.
    * The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.
    * Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.
  * The signature of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` has been extended with a `SerializationContext` argument to enable overriding the default policy for handling external state during iterator checkpointing. This is not a backwards compatible change and all subclasses of `IteratorBase` *need to be updated* accordingly.
* `tf.keras`
    * Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this [tutorial](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras) for details on how to use the callback.
* `tf.image.extract_glimpse` has been updated to correctly process the case where `centered=False` and `normalized=False`. This is a breaking change as the output is different from (incorrect) previous versions. Note this breaking change only impacts `tf.image.extract_glimpse` and `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of `tf.compat.v1.image.extract_glimpse` does not change. The behavior of the existing C++ kernel `ExtractGlimpse` does not change either, so saved models will not be impacted.

## Bug Fixes and Other Changes

### TF Core:
  * Set `tf2_behavior` to 1 to enable V2 for early loading cases.
  * Add a function to dynamically choose the implementation based on underlying device placement.
  * Eager:
    * Add `reduce_logsumexp` benchmark with experiment compile.
    * Give `EagerTensor`s a meaningful `__array__` implementation.
    * Add another version of defun matmul for performance analysis.
  * `tf.function`/AutoGraph:
    * AutoGraph now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.
    * Functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`. This calling convention is now the preferred way to use concrete functions with nested values and composite tensors. Please check the [guide](https://www.tensorflow.org/guide/concrete_function) for more details on `concrete_function`.
    * Update `tf.function`'s `experimental_relax_shapes` to handle composite tensors appropriately.
    * Optimize `tf.function` invocation by removing a redundant list converter.
    * `tf.function` will retrace when called with a different variable instead of simply using its `dtype` & `shape`.
    * [Improve support](https://github.com/tensorflow/tensorflow/issues/33862) for dynamically-sized TensorArray inside `tf.function`.
  * `tf.math`:
    * Narrow down `argmin`/`argmax` contract to always return the smallest index for ties.
    * `tf.math.reduce_variance` and `tf.math.reduce_std` return correct computation for complex types and no longer support integer types.
    * Add Bessel functions of order 0, 1 to `tf.math.special`.
    * `tf.divide` now always returns a tensor to be consistent with documentation and other APIs.
  * `tf.image`:
    * Replaces [`tf.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded?hl=en&version=nightly) with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be neglected. Existing usage with single inputs should still work as before.

Link probably shouldn't be nightly? (elsewhere also)

tensorflow-jenkins

comment created time in a month
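The narrowed `argmin`/`argmax` tie contract mentioned in the hunk above (on ties, always return the smallest index) can be sketched in plain Python; `argmax_first` is an illustrative name, not a TensorFlow function.

```python
def argmax_first(xs):
    # Sketch of the narrowed contract: on ties, the smallest index wins.
    best_i, best_v = 0, xs[0]
    for i, v in enumerate(xs):
        if v > best_v:  # strict '>' keeps the earliest maximal index
            best_i, best_v = i, v
    return best_i

print(argmax_first([3, 1, 3, 2]))  # 0, not 2: first occurrence of the max
```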

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

  * `tf.linalg`
    * Add `tf.linalg.banded_triangular_solve`.
  * `tf.random`:
    * Add `tf.random.stateless_parameterized_truncated_normal`.
  * `tf.ragged`:
    * Add `tf.ragged.cross` and `tf.ragged.cross_hashed` operations.
  * `tf.RaggedTensor`:
    * `RaggedTensor.to_tensor()` now preserves static shape.
    * Add `tf.strings.format()` and `tf.print()` to support RaggedTensors.
  * `tf.saved_model`:
    * `@tf.function` from SavedModel no longer ignores args after a `RaggedTensor` when selecting the concrete function to run.
    * Fix save model issue for ops with a list of functions.
    * Add `tf.saved_model.LoadOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/LoadOptions) as arg with default value `None` to choose the I/O device for loading models and weights.
    * Update `tf.saved_model.SaveOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions?version=nightly) as arg with default value `None` to choose the I/O device for saving models and weights.
  * GPU
    * No longer includes PTX kernels for GPU except for sm_70 to reduce binary size.

Can we tell users what that does? There is an increase in startup time, right?

tensorflow-jenkins

comment created time in a month

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

    * Update `tf.saved_model.SaveOptions` with [`experimental_io_device`](https://www.tensorflow.org/api_docs/python/tf/saved_model/SaveOptions?version=nightly) as arg with default value `None` to choose the I/O device for saving models and weights.

nightly? probably shouldn't be nightly for the release notes?

tensorflow-jenkins

comment created time in a month

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

    * Replaces [`tf.image.non_max_suppression_padded`](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression_padded?hl=en&version=nightly) with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be neglected. Existing usage with single inputs should still work as before.
Also neglected -> ignored (or omitted?)

Neglected feels like something you can only do to children or pets.

tensorflow-jenkins

comment created time in a month
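The "boxes with area=0 will be neglected" behavior discussed in the comment above amounts to a pre-filter before suppression runs. A pure-Python sketch, assuming the `[y1, x1, y2, x2]` box format; `prefilter_boxes` is an illustrative helper, not TensorFlow API.

```python
def prefilter_boxes(boxes):
    # Sketch: drop degenerate (zero-area) boxes before running suppression.
    # Box format assumed to be [y1, x1, y2, x2].
    return [b for b in boxes if (b[2] - b[0]) * (b[3] - b[1]) > 0]

print(prefilter_boxes([[0, 0, 1, 1], [2, 2, 2, 5]]))  # [[0, 0, 1, 1]]
```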

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 2.3.0


Replaced

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

> The behavior of `tf.compat.v1.image.extract_glimpse` does not change. The behavior of exsiting C++ kernel `ExtractGlimpse` does not change as well, so saved models will not be impacted.

as well -> either? Maybe my English isn't good enough.

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

> * Add a function to dynamically choose the implementation based on underlying device placement.

which function? link?

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

> The behavior of exsiting C++ kernel `ExtractGlimpse` does not change as well, so saved models will not be impacted.
"or models using tf.raw_ops.ExtractGlimpse"

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

> * TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time and a python tracer which allows you to trace python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.

Can we add a link to docs?

tensorflow-jenkins

comment created time in a month

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 2.3.0

> * Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.

Link to where (or a doc which includes the information?)

tensorflow-jenkins

comment created time in a month

pull request comment tensorflow/tensorflow

TF Chat Room

But yeah, the other two commits look like a mistake to me.

Rishit-dagli

comment created time in a month

pull request comment tensorflow/tensorflow

TF Chat Room

The initial commit (f5d859777b3340319d44cc4f8897cad9039d0ddd) is fine, although we should add something like "(not actively monitored by the TensorFlow team)".

@theadactyl where do you think this pointer would be best? I'm actually thinking the community site: https://www.tensorflow.org/community/forums, rather than resources, which seems to be more about consumable content than interactive forums.

Rishit-dagli

comment created time in a month

PR opened tensorflow/community

Add note about compat.v1 exports
+6 -0

0 comments

1 changed file

pr created time in 2 months

create branch tensorflow/community

branch : martinwicke-patch-1

created branch time in 2 months

issue comment tensorflow/tensorflow

tf.image.resize_images() - weird padding behaviour?

That op should also be available in tf.raw_ops.

JoelKronander

comment created time in 2 months

issue comment tensorflow/tensorflow

Sub-pixel shuffling tensor operation

I think you can use depth_to_space or a combination of flatten + reshape to achieve the same result, although I'm not sure whether there are behavior differences in cases where things don't divide neatly.
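For the record, a plain flatten + reshape alone does not reproduce this op; a transpose is needed in between. A minimal pure-Python sketch of the channel-to-space rearrangement (an illustrative reimplementation for a single HWC image, not the TF API itself, which operates on NHWC batches):

```python
def depth_to_space(image, block):
    """Rearrange channel groups into spatial blocks for one HWC image.

    Mirrors the default (DCR) layout of tf.nn.depth_to_space: input channel
    (dh * block + dw) * c_out + c lands at spatial offset (dh, dw), channel c.
    """
    h_in, w_in, c_in = len(image), len(image[0]), len(image[0][0])
    assert c_in % (block * block) == 0, "channels must divide by block**2"
    c_out = c_in // (block * block)
    out = [[[None] * c_out for _ in range(w_in * block)]
           for _ in range(h_in * block)]
    for h in range(h_in):
        for w in range(w_in):
            for dh in range(block):
                for dw in range(block):
                    for c in range(c_out):
                        out[h * block + dh][w * block + dw][c] = \
                            image[h][w][(dh * block + dw) * c_out + c]
    return out

# The example from the tf.nn.depth_to_space docs: [1, 1, 4] -> [2, 2, 1].
print(depth_to_space([[[1, 2, 3, 4]]], 2))  # [[[1], [2]], [[3], [4]]]
```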

jhetherly

comment created time in 2 months

issue comment tensorflow/community

Ecosystem Issue/PR Grooming Sprints and public roadmap

I agree. It's less of a "keep our cards close to our chest" problem than a process problem. Keeping several tracking systems in sync and up to date is work, and someone has to do that. I want to avoid just adding more stale sources of information.

bhack

comment created time in 2 months

issue comment tensorflow/community

Ecosystem Issue/PR Grooming Sprints and public roadmap

@dynamicwebpaige @goldiegadde FYI

We have been picking off issues, but definitely not FIFO. I'm wondering whether it's possible to surface our prioritization to create a roadmap? I know we're doing that for release milestones, but it might also make sense more generally to give a better sense of progress.

bhack

comment created time in 2 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

Cool. Some stylistic comments; I think in principle this will work.

DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

 class ResizeMethod(object):
   MITCHELLCUBIC = 'mitchellcubic'

+def _check_equal_input_output_dtypes(func):
+  """This decorator issue will issue a warning if the input dtype and output
+  dtype are different. Help to prevent silent dtype conversion from integer to
+  floating point format."""
+
+  def wrapper(*args, **kwargs):
+    try:
+      check_dtype = _check_equal_input_output_dtypes._fns_flagged[func.__name__]
+    except KeyError:
+      _check_equal_input_output_dtypes._fns_flagged[func.__name__] = True
+      check_dtype = True
+
+    if check_dtype:
+        if args:
+            input_dtype = args[0].dtype
+        else:
+            input_dtype = kwargs["images"].dtype
+
+    output = func(*args, **kwargs)
+
+    if check_dtype and (input_dtype != output.dtype):
+        _check_equal_input_output_dtypes._fns_flagged[func.__name__] = False
+        logging.warning(
+          "The operation `{func_name}` has silently converted the "
+          "data type from `{input_dtype}` to `{output_dtype}`. "
+          "This might have an adverse effect on data numerical "
+          "range.".format(

I would rephrase to:

"This might have changed the image format in ways that make it incompatible with other tf.image functions."

DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

+        logging.warning(
+          "The operation `{func_name}` has silently converted the "
+          "data type from `{input_dtype}` to `{output_dtype}`. "
+          "This might have an adverse effect on data numerical "
+          "range.".format(
+              func_name=func.__name__,
+              input_dtype=input_dtype,
+              output_dtype=output.dtype

Can we make this print the line number of the call to the function which triggered the warning? That would mean looking back in the stack trace a few frames up ("a few" meaning however many are needed; if this utility function is (eventually) used by different functions, the number of frames to unroll might have to be a parameter).
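The frame-unrolling idea can be sketched with the standard library's `traceback` module; `caller_location` and its `stacklevel` parameter are hypothetical names for illustration, not TensorFlow API:

```python
import traceback

def caller_location(stacklevel=1):
    """Return 'file:line' of the frame `stacklevel` levels above this call.

    stacklevel=1 is the function that called caller_location, 2 is its
    caller, and so on -- the parameter the comment above suggests exposing
    so different wrappers can unroll the right number of frames.
    """
    # extract_stack() ends with the current frame, so index back from the end.
    frame = traceback.extract_stack()[-(stacklevel + 1)]
    return "%s:%d" % (frame.filename, frame.lineno)

def warn_with_location(message):
    # A wrapper would prepend the call site of its own caller to the warning.
    print("%s (called from %s)" % (message, caller_location(stacklevel=2)))
```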

DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

+    try:
+      check_dtype = _check_equal_input_output_dtypes._fns_flagged[func.__name__]
+    except KeyError:
+      _check_equal_input_output_dtypes._fns_flagged[func.__name__] = True
+      check_dtype = True

Can we change this to something simpler (and more efficient, at least once this is made into a utility function):

if input_dtype == output_dtype:
  return

if funcname in _CHECK_EQUAL_INPUT_OUTPUT_FNS_FLAGGED:
  return

# print warning
_CHECK_EQUAL_INPUT_OUTPUT_FNS_FLAGGED.add(funcname)
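As a self-contained sketch of that warn-once shape (the names `_FNS_FLAGGED` and `warn_on_dtype_change` are illustrative, not actual TensorFlow code):

```python
import warnings

# Functions that have already produced a dtype warning; warn once per function.
_FNS_FLAGGED = set()

def warn_on_dtype_change(func_name, input_dtype, output_dtype):
    """Warn the first time `func_name` silently changes the dtype."""
    if input_dtype == output_dtype:
        return  # nothing silently converted, nothing to report
    if func_name in _FNS_FLAGGED:
        return  # already warned for this function
    _FNS_FLAGGED.add(func_name)
    warnings.warn("`%s` silently converted `%s` to `%s`"
                  % (func_name, input_dtype, output_dtype))
```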


DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

+_check_equal_input_output_dtypes._fns_flagged = dict()

It doesn't matter much, but I think it would be a little bit nicer to make this a plain global constant: _CHECK_EQUAL_INPUT_OUTPUT_DTYPES_FLAGGED = set()

DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

+  def wrapper(*args, **kwargs):

A decorator is very ugly in stack traces; can we make this into a utility function which is called from _resize_images_common instead?

DEKHTIARJonathan

comment created time in 2 months

Pull request review comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

+def _check_equal_input_output_dtypes(func):
+  """This decorator issue will issue a warning if the input dtype and output
+  dtype are different. Help to prevent silent dtype conversion from integer to
+  floating point format."""

Please add information about what exactly is compared: the first input's dtype to the output's dtype.

DEKHTIARJonathan

comment created time in 2 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

I think the comments in your code snippet are wrong?

DEKHTIARJonathan

comment created time in 2 months

pull request comment tensorflow/tensorflow

add convex hull cpu implmentation

On the face of it, this looks like tensorflow/graphics might be the best fit.

@sofienbouaziz WDYT?

musikisomorphie

comment created time in 2 months

pull request comment tensorflow/tensorflow

[INTEL MKL] Added input name in the bfloat16 namescope interface so t…

Thanks Reed. That leaves the question Karmel asked earlier.

cuixiaom

comment created time in 3 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

Np. Just left a review requesting changes so status is clear.

DEKHTIARJonathan

comment created time in 3 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

Ah. Sorry, I mistyped this: I meant to say, we should add the following check to tf.image.resize_image (and other endpoints which share the "feature" of potentially silently changing dtypes):

if output_dtype != input_dtype: WARN
DEKHTIARJonathan

comment created time in 3 months

issue comment tensorflow/tensorflow

tf.ConfigProto usage on TF 2.0

The C++ API still uses sessions, so use the session config as you would in 1.x.

samsamoa

comment created time in 3 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

Fair point. Looking at the size of the Python logic in that function, it looks to me like we can add a simple if input_dtype == output_dtype: WARN without fear of terrible performance implications.

We have code to limit the warnings to the first n in platform/tf_logging, but we have deprecated that code since it's too close to generic python logging. Fundamentally, any such limit would involve a global (and that's a fine use of a global in my opinion).

However, that is a question that @tensorflow/api-owners should maybe be involved in.

DEKHTIARJonathan

comment created time in 3 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

If you're strictly talking about the compat.v1 version I can see what you mean. But remember that TensorFlow supports (preferentially) eager execution, so the Python function will be executed for each batch.

I know that lots of users are confused. And if you look at other packages (e.g. OpenCV), you'll see that they have taken pretty similar approaches. For better or worse, image representations are underspecified, and in order to write processing functions, a standard is necessary. The one chosen in TF is: float images are assumed to be [0,1) normalized (unless HDR), and integer images are [0,max] representing fixed-point [0,1).

The bug in resize I am referring to is that it accepts non-float images, but does not renormalize them. It can therefore return float images which are not in a native format.

I don't believe we can change this behavior, but we can warn about integer images passed directly to resize.

Independently, I agree this isn't well understood, and if you have a suggestion on how to improve the documentation I am definitely interested.
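The convention described above can be made concrete with a toy sketch; this illustrates the scaling idea only and is not the actual `tf.image.convert_image_dtype` implementation:

```python
def uint8_to_float(pixel):
    """Map an integer pixel in [0, 255] to the normalized float convention."""
    return pixel / 255.0

def float_to_uint8(value):
    """Map a normalized float pixel back to [0, 255], saturating out-of-range."""
    clipped = min(max(value, 0.0), 1.0)
    return int(round(clipped * 255))

# The resize issue discussed here amounts to casting without this scaling:
# a uint8 image becomes a "float" image full of values like 130.0, which is
# not the normalized format other tf.image functions expect.
print(uint8_to_float(255))  # 1.0
print(float_to_uint8(0.5))  # 128
```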

DEKHTIARJonathan

comment created time in 3 months

issue comment tensorflow/tensorflow

Python3.8 support

We prefer to not release it if we find there are issues. Sometimes that makes for a longer wait.

amitport

comment created time in 3 months

pull request comment tensorflow/tensorflow

[tf.image.convert_image_dtype] - Add extra warning if scaling is skipped

I actually think this is a bug (borderline) in resize_images. I would prefer if resize_images warned about automatically casting (but not properly converting) an image. I.e., if it warned about integer inputs.

I do have a performance concern about a warning in these functions: they are often in input pipelines, and if we add a warning, it should be in C++ (inside the resize kernels which do the actual conversion), and it should be smart about not warning for every image, since most likely this affects either every image or none.

DEKHTIARJonathan

comment created time in 3 months

pull request comment tensorflow/tensorflow

Fix issue in tf.image.extract_glimpse

Sorry, you do change the endpoint, but the docs are inconsistent.

yongtang

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Fix issue in tf.image.extract_glimpse

 def extract_glimpse(
   >>> tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]],
   ...                         centered=False, normalized=False)
   <tf.Tensor: shape=(1, 2, 2, 1), dtype=float32, numpy=
-  array([[[[0.],
-           [1.]],
-          [[3.],
-           [4.]]]], dtype=float32)>
+  array([[[[4.],
+           [5.]],
+          [[7.],
+           [8.]]]], dtype=float32)>

However, maybe the docstring should change to encode the current v1 behavior.

yongtang

comment created time in 3 months

Pull request review comment tensorflow/tensorflow

Fix issue in tf.image.extract_glimpse

 def extract_glimpse(
   >>> tf.image.extract_glimpse(x, size=(2, 2), offsets=[[1, 1]],
   ...                         centered=False, normalized=False)
   <tf.Tensor: shape=(1, 2, 2, 1), dtype=float32, numpy=
-  array([[[[0.],
-           [1.]],
-          [[3.],
-           [4.]]]], dtype=float32)>
+  array([[[[4.],
+           [5.]],
+          [[7.],
+           [8.]]]], dtype=float32)>

This should not change, right? Since this is the v1 function?

yongtang

comment created time in 3 months
