profile
viewpoint
Martin Wicke martinwicke San Francisco

tensorflow/skflow 3209

Simplified interface for TensorFlow (mimicking Scikit Learn) for Deep Learning

tensorflow/datasets 1818

TFDS is a collection of datasets ready to use with TensorFlow

studioml/studio 346

Studio: Simplify and expedite model building process

martinwicke/tf-dev-summit-tensorboard-tutorial 341

Code that accompanies my talk at TF Dev Summit 2016

martinwicke/tensorflow-tutorial 228

A tutorial on TensorFlow

eddysystems/eddy 51

eddy - autocorrect for java

tensorflow/java 34

Java bindings for TensorFlow

tensorflow/tfx-bsl 9

Common code for TFX

tensorflow/java-models 4

Models in Java

pull request commenttensorflow/community

RFC: Improved pip package structure

For tests which naturally live in Estimator, yes, they should be fine. If the dependency tree is a real tree without cycles, then putting integration tests in the highest spot needed to see all dependencies for the test will make this clean, independent of lazy loading.

annarev

comment created time in 18 days

Pull request review commenttensorflow/community

Adding a link to Examples

 Make sure you’ve thought through and addressed the following sections. If a se * Platforms: does this work on all platforms supported by TensorFlow? If not, why is that ok? Will it work on embedded/mobile? Does it impact automatic code generation or mobile stripping tooling? Will it work with transformation tools? * Execution environments (Cloud services, accelerator hardware): what impact do you expect and how will you confirm? -### Best Practices, Tutorials and Examples+### Best Practices * Does this proposal change best practices for some aspect of using/developing TensorFlow? How will these changes be communicated/enforced?++### Tutorials and Examples * If design changes existing API or creates new ones, the design owner should create end-to-end examples (ideally, a tutorial) which reflects how new feature will be used. Some things to consider related to the tutorial:     - The minimum requirements for this are to consider how this would be used in a Keras-based workflow, as well as a non-Keras (low-level) workflow. If either isn’t applicable, explain why.-    - It should show the usage of the new feature in an end to end example (from data reading to serving, if applicable). Many new features have unexpected effects in parts far away from the place of change that can be found by running through an end-to-end example.-    - This should be written as if it is documentation of the new feature, i.e., consumable by a user, not a TensorFlow developer. The code does not need to work (since feature is not implemented yet).+    - It should show the usage of the new feature in an end to end example (from data reading to serving, if applicable). Many new features have unexpected effects in parts far away from the place of change that can be found by running through an end-to-end example. TFX [Examples](https://github.com/tensorflow/tfx/tree/master/tfx/examples) have historically been good in identifying such unexpected side-effects and are as such one recommended path for testing things end-to-end.+    - This should be written as if it is documentation of the new feature, i.e., consumable by a user, not a TensorFlow developer. +    - The code does not need to work (since the feature is not implemented yet) but the expectation is that the code does work before the feature can be launched or promoted. 

Yes, good call.

ematejska

comment created time in a month

Pull request review commenttensorflow/community

Adding a link to Examples

 Make sure you’ve thought through and addressed the following sections. If a se * Does this proposal change best practices for some aspect of using/developing TensorFlow? How will these changes be communicated/enforced?

This doesn't copy the formatting changes, or is that a separate PR?

ematejska

comment created time in a month

pull request commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.1

Ah. Should we add the CVE when we have it? Or is it better to release now? I suppose the affects versions are listed in the cve anyway.

tensorflow-jenkins

comment created time in a month

issue commenttensorflow/tensorflow

tf.train.AdamOptimizer doesn't work with custom TPU training loop

New features (such as TPU support outside of TPUEstimator) will not necessarily be supported for old (compat.v1) APIs. I believe this is such a case.

sharvil

comment created time in a month

issue commenttensorflow/tensorflow

Saving best models instead of most recent models with tf.train.Saver.

Ah, sorry for the mistake. I corrected it above. You should use zeros, not constant, and flip the args.

The 7 is the batch size, but when creating the serving graph, the batch size will be eliminated anyway, so you can safely use any number you want, it won't affect the result at all.

julj

comment created time in a month

issue commenttensorflow/tensorflow

Porting codebase utilizing tf.slim to TF-2.0

@pichuan @tomerk How far are we with the slim port?

calledbymountains

comment created time in 2 months

issue commenttensorflow/tensorflow

Replacement for experimental_run_tf_function after removal from tf.keras.Model.compile

@tanzhenyu or @robieta would know more details.

tgaddair

comment created time in 2 months

pull request commenttensorflow/tensorflow

Added tf.strings.to_number() usage example

Thank you!

HotPotatoC

comment created time in 2 months

pull request commenttensorflow/tensorflow

Added tf.strings.to_number() usage example

Thank you and happy New year!

HotPotatoC

comment created time in 2 months

issue commenttensorflow/tensorflow

tf.round != np.round

Is there a reason we cannot make it the default?

I think this is a bug. @tensorflow/api-owners

m-colombo

comment created time in 2 months

issue commenttensorflow/tensorflow

importing tensorflow inside a function/object causes a memory leak

Yeah, it seems weird that we'd survive the import, only to error out later. Let's either import it inside the cluster creation, or print the error directly on import (but still continue), and later error with a note to look for the earlier error.

Honestly, I think the local import is preferable in this case.

rggjan

comment created time in 2 months

issue commenttensorflow/tensorflow

importing tensorflow inside a function/object causes a memory leak

@annarev is this related to lazy loaders or estimator/keras integration?

rggjan

comment created time in 2 months

Pull request review commenttensorflow/tensorflow

Cherrypick Windows DLL warning changes and related notes

 TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support  ## Major Features and Improvements * The `tensorflow` pip package now includes GPU support by default (same as `tensorflow-gpu`) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. `tensorflow-gpu` is still available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.+* **Windows users:** officially-released `tensorflow` Pip packages are now built with Visual+  Studio 2019 version 16.4 in order to take advantage of the new `/d2ReducedOptimizeHugeFunctions` compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website [here](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).+  * This does not change the minimum required version for building TensorFlow from source on Windows.

Although it does mean that you cannot to EIGEN_STRONG_INLINE, right?

angerson

comment created time in 2 months

issue commenttensorflow/tensorflow

Saving best models instead of most recent models with tf.train.Saver.

I believe that you should be able to use

def serving_input_receiver_fn(image_shape):
  input_images = tf.constant(tf.float32, [7] + image_shape)  # 7 is arbitrary and will be eliminated
  return tf.estimator.export.build_raw_serving_input_receiver_fn(input_images)

Now, that still uses placeholders internally, but you don't have to have any in your code.

julj

comment created time in 2 months

Pull request review commenttensorflow/community

RFC: Keras categorical input.

+# Keras categorical inputs++| Status        | Proposed                                             |+:-------------- |:---------------------------------------------------- |+| **Author(s)** | Zhenyu Tan (tanzheny@google.com), Francois Chollet (fchollet@google.com)|+| **Sponsor**   | Karmel Allison (karmel@google.com), Martin Wicke (wicke@google.com) |+| **Updated**   | 2019-12-12                                           |++## Objective++This document proposes 4 new preprocessing Keras layers (`CategoryLookup`, `CategoryCrossing`, `CategoryEncoding`, `CategoryHashing`), and 1 additional op (`to_sparse`) to allow users to:+* Perform feature engineering for categorical inputs+* Replace feature columns and `tf.keras.layers.DenseFeatures` with proposed layers+* Introduce sparse inputs that work with Keras linear models and other layers that support sparsity++Other proposed layers for replacement of feature columns such as `tf.feature_column.bucketized_column` and `tf.feature_column.numeric_column` has been discussed [here](https://github.com/keras-team/governance/blob/master/rfcs/20190502-preprocessing-layers.md) and are not the focus of this document.++## Example Workflows++Two example workflows are presented below. These workflows can be found at this [colab](https://colab.sandbox.google.com/drive/1cEJhSYLcc2MKH7itwcDvue4PfvrLN-OR#scrollTo=22sa0D19kxXY).++### Workflow 1++The first example gives an equivalent code snippet to canned `LinearEstimator` [tutorial](https://www.tensorflow.org/tutorials/estimator/linear) on the Titanic dataset:++```python+CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck', 'embark_town', 'alone']+NUMERICAL_COLUMNS = ['age', 'fare']+# input list to create functional model.+model_inputs = []+# input list to feed linear model.+linear_inputs = []+for feature_name in CATEGORICAL_COLUMNS:+	feature_input = tf.keras.Input(shape=(1,), dtype=tf.string, name=feature_name, sparse=True)+	vocab_list = sorted(dftrain[feature_name].unique())+	# Map string values to indices+	x = tf.keras.layers.CategoryLookup(vocabulary=vocab_list, name=feature_name)(feature_input)+  x = tf.keras.layers.CategoryEncoding(num_categories=len(vocab_list))(x)+	linear_inputs.append(x)+	model_inputs.append(feature_input)++for feature_name in NUMERICAL_COLUMNS:+	feature_input = tf.keras.Input(shape=(1,), name=feature_name)+	linear_inputs.append(feature_input)+	model_inputs.append(feature_input)++linear_model = tf.keras.experimental.LinearModel(units=1)+linear_logits = linear_model(linear_inputs)+model = tf.keras.Model(model_inputs, linear_logits)++model.compile('sgd', loss=tf.keras.losses.BinaryCrossEntropy(from_logits=True), metrics=['accuracy'])++dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')

It seems that this is referenced before assignment? Does this code run?

tanzhenyu

comment created time in 2 months

issue commenttensorflow/tensorflow

Performance: Training is much slower in TF v2.0.0 VS v1.14.0 when using `Tf.Keras` and `model.fit_generator`

@robieta we have to make sure that this behavior (shuffle is true except for generators) makes it into the documentation as well.

Raukk

comment created time in 2 months

issue commenttensorflow/tensorflow

tensorflow/workspace: re2 dependency does not use release/master branch

It's fundamentally the same process as we have for all other third party libraries so I'm not concerned about that. If the LTS release is available in addition to head it might be possible to test this without using open source testing, but I don't think it is.

perfinion

comment created time in 2 months

issue commenttensorflow/tensorflow

tensorflow/workspace: re2 dependency does not use release/master branch

TF could not use features not available on the latest LTS. That's not hard test or enforce. The bigger problem will be backwards incompatible changes, and although I expect those to be exceedingly rare, absl is explicitly not ruling them out.

perfinion

comment created time in 2 months

issue commenttensorflow/tensorflow

tf.train.AdamOptimizer doesn't work with custom TPU training loop

Is there any reason not to use tf.keras.optimizers.Adam?

sharvil

comment created time in 3 months

pull request commenttensorflow/community

RFC: Improved pip package structure

I think lazy loading is a good option. It is complicated by our current practice of testing which mixes cross-package integration tests with the rest. Otherwise it would not be unclean enough to bother me.

annarev

comment created time in 3 months

issue commenttensorflow/tensorflow

tensorflow/workspace: re2 dependency does not use release/master branch

Can we find out who owns re2 and ask them when a release with absl support will be available?

On Mon, Dec 2, 2019, 14:18 Austin Anderson notifications@github.com wrote:

Here's where the dependency was changed: 517ad0e https://github.com/tensorflow/tensorflow/commit/517ad0e87f8d1f23aa68236bdc474188037347dc

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/tensorflow/tensorflow/issues/34726?email_source=notifications&email_token=AAEM57IBTJW4UWO2R25CXADQWWCRZA5CNFSM4JTKBAO2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFWB6HI#issuecomment-560733981, or unsubscribe https://github.com/notifications/unsubscribe-auth/AAEM57OP6LPXVDIXVIL44HLQWWCRZANCNFSM4JTKBAOQ .

perfinion

comment created time in 3 months

push eventtensorflow-jenkins/tensorflow

Martin Wicke

commit sha fee475d43d8fee01b0cbe1cab40e847b5d5b30eb

Update RELEASE.md

view details

push time in 3 months

push eventtensorflow-jenkins/tensorflow

Martin Wicke

commit sha 710eaa7d0edb00f4fd1a9d4cc496a5099231239d

Update RELEASE.md

view details

push time in 3 months

issue commenttensorflow/tensorflow

tf_upgrade_v2 is not reporting issues related to invalid imports in TensorFlow 2

@tomerk -- I think to some extent this is inevitable (we are only considering "tf." when warning/modifying), but we did discuss whether it might be feasible to handle some common patterns.

nbro

comment created time in 3 months

issue commentpybind/pybind11

Set the __text_signature__ attribute of callables

Since PEP575 was withdrawn, is it sensible to pursue the __text_signature__ path?

I agree that signature would be much nicer, but I'll take the generated docstring over nothing at all. The generated string to stash in __doc__ would have to be modified (sadly, stripped of useful information), but that's still feasible?

anntzer

comment created time in 3 months

issue commenttensorflow/serving

official release linked with sentencepiece

The serving team is working on enabling dynamic loading of op libraries, but I think it'll take a while longer.

taylorchu

comment created time in 3 months

issue commenttensorflow/tensorflow

TPU support is incomplete

We're targeting this for 2.1.

martinwicke

comment created time in 3 months

issue closedtensorflow/tensorflow

Can I say the version compatibility of tensorflow is just not great.

Can I say the version compatibility of tensorflow is just like shit. The code under older versions can not work at all under the new versions and you never know which version you should use. The defintions of functions in different versions changes greatly and you never know which function is really what you need. When you want to run a new code, you have to try all the versions to make it work if you don't know its original tf version. Or you must change the code to statisfy the demand of the old versions. And sometimes you waste so much time and then you find you still can't make it work. Anybody has the same problem? I think it's one of the most important reasons why more and more people are turning to pytorch.

closed time in 4 months

yang-yk

issue commenttensorflow/tensorflow

Can I say the version compatibility of tensorflow is just not great.

@yang-yk the issue tracker is for feature requests or bug reports. Your complaint is not constructive in form or content.

TensorFlow conforms to semver for backwards compatibility. If we violated our promises, that would be a bug we can fix.

yang-yk

comment created time in 4 months

issue commenttensorflow/tensorflow

Rename this repo to Big Migraine !!

The issue tracker is for bug reports, or feature requests.

uday60

comment created time in 4 months

issue closedtensorflow/tensorflow

Rename this repo to Big Migraine !!

I don;t know which version to use for which purpose ... huge compatibility disaster ! Training on multiple gpu even bill gates cannot do... AVX instructions are missing for pip install... Tensorflow lite, Tensorflow TFX, Tensorflow serving.... crap after crap. whether to use keras or tensorflow There are no performance matrix comparing and converting your models... Even the book publishers and article writers are confused which version should they be using....

closed time in 4 months

uday60

issue commenttensorflow/tensorflow

TF 2.0 Upgrade Script: Unable to handle the @ operator for matrix multiplication

I don't yet see a v0.1.8 tag or release on GitHub, am I just not seeing it?

pzobel

comment created time in 4 months

issue commenttensorflow/tfx

Using TFX components with TF 1.x contrib ops

I never made a list of ops, but the RFC (https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md) contains a list of what happened to which projects.   Many ops moved to core or addons or io, although the ops which were used only in TFT wouldn't have been moved elsewhere.

unclepeddy

comment created time in 4 months

pull request commenttensorflow/community

RFC: TFX Standardized Inputs

All major questions should be resolved with the stakeholders involved beforehand. We can resolve details in the review (often, naming and such), but there shouldn't be open architecture or directional questions.

brills

comment created time in 4 months

pull request commenttensorflow/tensorflow

fixed cond with grouped summaries in one branch

The Android App failure is a flake unrelated to your change.

lsgrep

comment created time in 4 months

issue commenttensorflow/community

Can we automatic label Nagging Assignees?

@chanshah is continuously working through our backlog, and you may have noticed that we have much better label coverage recently. I love the Issue Grooming announcement though -- it is something we could imitate.

bhack

comment created time in 4 months

pull request commenttensorflow/community

RFC: Best practices for custom operations in TensorFlow

Yes, this can be merged. Comments have already been integrated as far as I can tell.

alextp

comment created time in 4 months

pull request commenttensorflow/tensorflow

[Intel Mkl] Upgrading MKL-DNN to 0.20.6 to fix SGEMM regression

Since there are other issues with .18, this looks good to me, as long as we're sure this fixes the regression.

claynerobison

comment created time in 4 months

Pull request review commenttensorflow/community

RFC: TFX Standardized Inputs

+<!-- mdlint off(HEADERS_TOO_MANY_H1) -->++# Standardized TFX Inputs++Status        | Proposed+:------------ | :------------------------------------------------------------+**RFC #**     | [162](https://github.com/tensorflow/community/pull/162)+**Author(s)** | Zhuo Peng (zhuo@google.com), Kester Tong (kestert@google.com)+**Sponsor**   | Konstantinos Katsiapis (katsiapis@google.com)+**Updated**   | 2019-10-03++# Objective++*   To define a common in-memory data representation that:+    *   is powerful enough to encode the following logical training data format:+        flat+        ([`tf.Example`](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/core/example/example.proto#L88)),+        sequence+        ([`tf.SequenceExample`](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/core/example/example.proto#L298))+        or structured data (e.g.+        [Protocol Buffers](https://developers.google.com/protocol-buffers) or+        [Apache Avro](https://avro.apache.org/)).+    *   all TFX components can understand and can support their own unique use+        cases with.+*   To define an I/O abstraction layer that produces the above in-memory+    representation from supported physical storage formats, while hiding TFX’s+    choice of such storage formats from TFX users.+*   To define a bridge from the above in-memory representation to TF feedables+    (i.e. Tensors and certain+    [CompositeTensors](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/python/framework/composite_tensor.py#L1)).++# Motivation++## Fragmented in-memory data representations across TFX++A TFX component may use its input data (data generated by ExampleGen) in two+ways:++*   it may need to understand the data and conduct analysis. Usually this+    happens in [Apache Beam](https://beam.apache.org/), and does not involve a+    TF Model. For example:+    *   [TFDV](https://github.com/tensorflow/data-validation) and+        [TFT](https://github.com/tensorflow/transform) compute some statistics+        over the data.+    *   [TFMA](https://github.com/tensorflow/model-analysis) may slice the+        dataset by certain columns in the data.+*   it may feed the data to TensorFlow. Note that feeding TF alone may not+    require understanding the data. For example, TFMA may feed a TF model with+    serialized tf.Example, which may be the raw form of the data.++Currently, each TFX component has its own in-memory data representation to cover+the two use cases by a different approach:++|    | TFDV | TFT  | TFMA | BulkInference |+|:---| :--- | :--- | :--- | :------------ |+| In-memory data representation | Arrow RecordBatches | Dict[str, np.ndarray] | str (raw data records), Dict[str, np.ndarray] | str (raw data records) |+| Understand the data and conduct analysis | input data is encoded losslessly as RecordBatches. | the in-mem representation may be lossy. | Relies on the model’s input layer, and the format is Dict[str, np.ndarray]. | N/A |+| Feed TF | N/A | the in-mem representation is TF feedable. | Feed “raw data” to the model. 
| Feed “raw data” to the model |++This has created many issues:++*   Users of individual components need to adapt their data (if input format not+    already supported) to each component they want to use.+*   Individual components rely on unenforceable assumptions on how to interpret+    the input data consistently.+*   The complexity of adding new logical data representations (for example,+    tf.SequenceExample) scales with the number of components.++## The need for supporting new physical storage formats in TFX++Two factors drive this need:++*   TFX needs to offer users more choices of the storage format, e.g,+    [Apache Parquet](https://parquet.apache.org/).+*   TFX wants to be able to choose the optimal storage format based on user’s+    workload, in a user-transparent manner. A unified I/O abstraction would make+    it easier to support a new physical format in TFX, since one would not have+    to understand every single TFX component in order to implement such support.++## TFX interoperability with the rest of the world++If we choose a commonly available and adopted exchange format as our in-memory+representation, our users will be able to use TFX components with much less+effort on data conversion. This aligns with TFX’s long term vision.++# User Benefit++## TFX End Users++While this change is transparent to end users, it will facilitate the design and+implementation of many user-facing features, for example:++*   Columnar storage format in TFX.+*   Structured training examples.++## Individual TFX component users++We use TFXIO to refer to the proposed I/O abstraction layer. All TFX components+will start using TFXIO to ingest the data and have a unified way of representing+the data. Individual TFX component users would be able to implement TFXIO for+their own data formats / storage formats that are not supported by TFX. By+design, any such implementation will be readily accessible by all TFX+components.++## TFX developers++Developers working on TFX infrastructure will not have to understand the+internals of each component any more in order to make changes to I/O and parsing+(for example, adding support for a new storage format for the training+examples).++Developers working on TFX components would benefit from sharing common+operations against the unified in-memory representation, or even higher-level+computations. For instance, suppose that we implement a sketch-based algorithm+to compute approximate heavy hitters over this in-memory representation. We can+now share this implementation inside both TFDV and TFT for their top-K feature+value computation.++# Design Proposal++This design proposes **a common in-memory data representation**, **a way to+translate that into TF feedables** (np.ndarray or EagerTensors) and **a set of+APIs** each component can use to get both.++![alt_text](20191017-tfx-standardized-inputs/overview.png)++## Common in-memory data representation++[Apache Arrow](https://arrow.apache.org/) will be used as the common in-memory+data representation. Beam-based TFX components will accept+<code>PCollection[pyarrow.[RecordBatch](https://arrow.apache.org/docs/python/data.html#record-batches)]</code>.++Each logical data format will have its own encoding convention,+[discussed](#logical-data-encoding-in-arrow) in the detailed design.++We chose Apache Arrow because:++*   It’s Expressive enough.+    *   Lossless encoding of (conformant) tf.Example, tf.SequenceExample+    *   Can encode structured data (proto)+*   It’s a columnar format. 
It works well with common TFX workloads:+    *   Column (feature)-wise analysis+    *   Feed a batch of columns (features) to TensorFlow.+*   It’s OSS friendly.+    *   Community support for more storage format I/O (e.g. Apache Parquet)+    *   Friendly to other OSS data formats, both in-memory and on disk (e.g.+        Pandas)+    *   Friendly to numpy / TF: many Arrow array types share the same memory+        layout with numpy ndarrays and certain type of TF (composite) Tensors.+*   TF neutral.+    *   Leaves the possibility of supporting other ML libraries open.++## Translation from Arrow to TF feedables++The analogy to this is parsing tf.Examples into TF feedables -- extra+information is needed in this translation because a+[`Feature`](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/core/example/feature.proto#L76)+can be converted to a Tensor, a SparseTensor or a+[RaggedTensor](https://www.tensorflow.org/guide/ragged_tensor) depending on the+[feature specs](https://github.com/tensorflow/tensorflow/blob/635e23a774936b5fe6fa3ef3cb6e54b55d93f324/tensorflow/python/ops/parsing_ops.py#L46-L49).+Currently this extra information is implicitly contained in the pipeline schema+(an instance of the+[TFMD Schema](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/schema.proto))+proto.++Similarly, an Arrow column can be translated to various TF feedables.+[An extension to the pipeline schema](#tensorrepresentation) is proposed to for+a user to express the intention for conversion.++The conversion can be efficient (zero-copy) in certain cases. It is+[discussed](#efficient-arrow-tensor-conversion) in the detailed design.++## Standardized Inputs APIs++We propose a set of APIs that TFX components will call, and need to be+implemented for each of the supported combination of {physical, logical} format.++```py+class TFXIO(object):+  """Abstract basic class of all Standardized TFX inputs API implementations."""+  def __init__(+      self, pipeline_env,+      schema: Optional[tfmd.Schema]=None+  ):+    pass++  @abc.abstractmethod+  def BeamSource(self,+                 projections: Optional[List[Text]]=None+  ) -> beam.PTransform:+    """Returns a beam PTransform that produces PCollection[pa.RecordBatch].++    May NOT raise an error if the TFMD schema was not provided at construction time.++    Args:+      projections: if not None, only the specified subset of columns will be+      read.+    """++  @abc.abstractmethod+  def TensorAdapter(self) -> TensorAdapter:+    """Returns a TensorAdapter that converts pa.RecordBatch to TF inputs.++    May raise an error if the TFMD schema was not provided at construction time.+    """++  @abc.abstractmethod+  def ArrowSchema(self) -> pyarrow.Schema:+    """Returns the schema of the Arrow RecordBatch generated by BeamSource().++    May raise an error if the TFMD schema was not provided at construction time.+    """++  @abc.abstractmethod+  def TFDataset(self, ...) 
-> tf.data.Dataset:+    """Returns a Dataset of TF inputs.++    May raise an error if the TFMD schema was not provided at construction time.+    """+```++Where `TensorAdapter` is:++```py+class TensorAdapter(object):++  def __init__(+      self,+      tensor_representations: Dict[str, TensorRepresentation]):+    """Initializer.++    Args:+      tensor_representations: keys are the names of the output tensors; values+      describe how an output tensor should be derived from a RecordBatch.+    """+    pass++  def TypeSpecs(self) -> Dict[str, tf.TypeSpec]:+    """Returns tf.TypeSpec for each tensor to be produced by ToBatchTensors().++    TypeSpecs can be used to construct placeholders or tf.function signatures.+    """++  def ToBatchTensors(+      self, record_batch: pyarrow.RecordBatch,+      projections: Optional[List[TensorName]]=None+  ) -> Dict[str, TFFeedable]:  # TFFeedable: np.ndarrays or tf.EagerTensor+                               # (or compositions of them, i.e.+                               # CompositeTensors).+    """Converts a record batch to batched tensors.++    Each will conform to the corresponding TypeSpec.++    Args:+      projections: if not None, only specified subset of tensors will be+      converted.+    """+```++Note that we will provide a default implementation of `TensorAdapter`, but TFXIO+implementations can implement their own `TensorAdapter`. A custom+`TensorAdapter` would allow a `TFXIO` implmentation to rely on a TF graph to+do parsing -- the same graph can be used in both `BeamSource` and+`TensorAdapter`.++# Detailed Design++## Logical data encoding in Arrow++On a high level, a batch of logical entities (“examples”) is encoded into a+[`pyarrow.RecordBatch`](https://arrow.apache.org/docs/python/generated/pyarrow.RecordBatch.html#pyarrow.RecordBatch).+Features or fields (from structured records) are encoded as columns in the+RecordBatch.++Note that+[`pyarrow.Table`](https://arrow.apache.org/docs/python/data.html#tables) offers+an abstraction similar to RecordBatch with the key difference being that a+column in a Table might contain multiple chunks of contiguous memory regions+while a column in a RecordBatch contains only one chunk. RecordBatch is chosen+because we want to enforce that TFXIO implementations produce batched data in+the most efficient way (one chunk per batch). Users of TFXIO may construct a+Table from one or more RecordBatches since easy conversion from one to the other+is supported by Apache Arrow.++This design aims to support the logical structure of tf.Example,+tf.SequenceExample or structured data like Protocol Buffers. Thus only a subset+of Arrow array types are needed. All TFX components will guarantee to understand+those types, but no more. 
Below is a summary of supported encodings:++| Logical representation | Arrow encoding |+| :--------------------- | :------------- |+| Feature with no value | `NullArray`                                  |+| Univalent feature (one value per example) | `FixedSizeListArray` (list_size = 1) |+| Multivalent feature (multiple values per example) | `[FixedSize]ListArray` |+| Sequence feature (list of lists of values per example) | `[FixedSize]ListArray<[FixedSize]ListArray>` |+| Proto-like structured data | `ListArray<StructArray<{subfield:ListArray<recursion>}>>` |++However the design is flexible to support more complicated logical structures,+for example, k-nested sequences (tf.SequenceExample is 2-nested).++Next we show that these encodings cover the logical data formats we aim to+support:++### tf.Example++[Conformant](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/core/example/example.proto#L78)+tf.Examples are assumed. I/O + parsing should throw an error upon non-conformant+instances.++A key requirement derived from the conformant-ness is for the encoding to be+able to distinguish the following two cases:++*   a feature is present, but it’s value list is empty++    ```+    {+      features {+        "my_feature": {+          bytes_list {+          }+        }+    }+    ```++*   a feature is not present++    ```+    {+      features {+      }+    }+    ```++    or++    ```+    {+      features {+        "my_feature": {}  # none of the oneof is set+      }+    }+    ```++Each feature can be encoded as:++```+[FixedSize]ListArray<int64|float32|binary>+```++Then, the feature value in case a) is encoded as an empty sub-list, while the+feature value in case b) is encoded as null.++If we know that all the lists in a `ListArray` are of equal length (from the+schema of the data, see below sections), `FixedSizeListArray` can be used to+obviate the `O(N)` space overhead for lengths of lists.++### tf.SequenceExample++[Conformant](https://github.com/tensorflow/tensorflow/blob/abfba15cd9734cec7ecd3d0661b146fc251c842d/tensorflow/core/example/example.proto#L184)+tf.SequenceExamples are assumed. I/O + parsing should throw an error upon+non-conformant instances.++A context feature will be encoded similarly to a feature in tf.Example. A+sequence feature will be encoded as:++```+[FixedSize]ListArray<[FixedSize]ListArray<int64|float32|binary>>+```++To avoid name conflicts with context features, all the sequence features can be+grouped into one `StructArray`:++```+StructArray<{'sequence_feature1': ListArray<ListArray<int64|float32|binary>>, ...}>+```++### Structured data (e.g. Protocol Buffers / Apache Avro)++A batch of structured records can be encoded as follows:++*   Each direct leaf field of the structure can be encoded similarly to+    tf.Example. 
(`ListArray` of primitive types).+*   Each sub-message can be encoded as:++    ```+    ListArray<StructArray<recursion...>>>+    ```++## Arrow to TF Feedable conversion++### TensorRepresentation++One or more Arrow columns can potentially be converted to multiple types of TF+feedables.++For example, a `ListArray<int64>` can be converted to:++*   a Tensor, if given a default value to pad+*   a SparseTensor to represent a ragged array+*   a RaggedTensor++The choice depends on user’s intents, which currently is+[implicitly](https://github.com/tensorflow/transform/blob/11afcff467f779ba6163686395582e69603987d1/tensorflow_transform/tf_metadata/schema_utils.py#L172)+expressed in the pipeline schema.++We propose to create a new [TFMD](https://github.com/tensorflow/metadata)+(TensorFlow MetaData) Proto, `TensorRepresentation` to carry those intents implicitly:++```protobuf+message TensorRepresentation {+  oneof {+    DenseTensor { … }  // column_name, dtype, shape, default_value+    VarLenSparseTensor { … }  // column_name, dtype+    SparseTensor { }  // dtype, value_column_name, indice_column_names+    VarLenRaggedTensor { … } // dtype+    RaggedTensor { } // dtype, value_column_name, row_partition_column_names, ...+    StructuredTensor { } // column_names+  }+}+```++This proto is used in two places:++*   It’s part of TFMD schema:++    ```protobuf+      message TensorRepresentationGroup {+        map<string, TensorRepresentation> tensor_representation = 2;+      };++      message Schema {+       repeated Feature feature = 1;+       // …+       map<string, TensorRepresentationGroup> tensor_representation_group = 42;+      }+    ```++    Note :++    *   `TensorRepresentationGroup` allows different instances of one TFX+        component to use different sets of `TensorRepresentation`s.+    *   `tensor_representation_group` is **optional**. If the user does not+        specify any, a default representation will be derived from+        schema.feature to keep backwards compatibility.+    *   this field is **not** a sub-message of Schema::Feature, because a TF+        feedable may comprise multiple columns++    Being part of the schema makes it possible to serialize and materialize the+    intents for other components to use, which allows TFT’s materialization+    functionality to have its own TFXIO implementation that hides the+    data/physical format from the user.++    When generating the initial schema from the statistics of the data, TFDV can+    propose a default set of `TensorRepresentationGroup`. The user may revise+    the proposal and TFDV can validate `TensorRepresentationGroup`s in a+    continuous manner.++*   The default implementation of TensorAdapter takes an optional `Dict[str,+    TensorRepresentation]` at construction time. If a TFXIO implementation+    choose to use the default TensorAdapter, it needs to provide them (may come+    directly from the Schema).++### Efficient Arrow->Tensor conversion++The key to efficient conversions is to avoid copying of data. The prerequisites+to do so are:++*   Same memory alignment+*   Same memory layout++Currently 64-byte alignment is the standard in both Tensorflow's `TensorBuffer`+and Apache Arrow's `Buffer`. 
Forthermore, it can be guaranteed by implementing+our own version of `arrow::MemoryPool` that is backed by a+`tensorflow::Allocator`.++The memory layout will be the same if right types are chosen at both ends thus+zero-copy conversion can be done, for example:++*   `FixedLengthListArray` (or `ListArray` of equal-length lists) -> dense+    Tensors.+*   `ListArray<ListArray<...>>` ->+    [RaggedTensors](https://github.com/tensorflow/tensorflow/blob/3c2dabf53dd085c21e38a28b467e52c566c0dfaf/tensorflow/python/ops/ragged/ragged_tensor.py#L1).+*   `ListArray<StructArray<... recursion>>` ->+    [StructuredTensors](https://github.com/tensorflow/community/blob/master/rfcs/20190910-struct-tensor.md)++In other cases, copies can be avoided for the values, but some computation is+needed:++*   `ListArray<ListArray<...>>` -> `tf.SparseTensor`+    *   Need to compute the sparse indices from `ListArray`'s list offsets.++The remaining cases require a copy:++*   `ListArray<ListArray<...>>`(of non-equal-length lists) -> dense Tensors++With TensorRepresentation available in the Schema, a TFXIO implementation may+optimize its decoder to choose the most efficient Arrow type.++#### Conversion of string features++Arrow’s string arrays (`BinaryArray`) have a different memory layout than+TensorFlow’s string Tensors, even with+[`tensorflow::tstring`](https://github.com/tensorflow/community/blob/master/rfcs/20190411-string-unification.md).+There is always some overhead in conversion, but with `tensorflow::tstring` a+Tensor of `string_view`s is possible, thus the overhead will be a function of+the number of strings being converted, instead of the lengths of the strings.++#### TF APIs for conversions++In TF 1.x we will use np.ndarray as a bridge as Arrow has zero-copy conversion+to numpy’s ndarrays. (not for string arrays).++Starting from TF 2.x, we will be able to create EagerTensors from Python+memoryview(s) so that strings can be covered.++## TFMD Schema++[The TFMD Schema](https://github.com/tensorflow/metadata/blob/master/tensorflow_metadata/proto/v0/schema.proto)+is a pipeline-level artifact and in the scope of this proposal, it may serve two+purposes:++*   To provide optional inputs to the parsing logic for optimizations.+*   To carry user’s intents of converting data to TF feedables.++The two purposes don’t have to be served in the following cases:++*   TFDV should not require a schema to work and it does not need TF feedables.+*   Some TFXIO implementation may not need the schema for either purposes.++Therefore the TFMD schema is optional, and a TFXIO implementation:++*   should guarantee that the `BeamSource()`can return a valid+    `PCollection[RecordBatch]` without a schema.+    *   Other interfaces may raise an error when a schema was not provided.+*   does not have to require a TFMD schema for all its interfaces to work.++## (TensorFlow) Trainer integration++For TFX to freely choose the storage format for training examples for a user, we+**cannot** expose file-based or record-based interface to that user in the TF+trainer, because:++*   the user might not know how to open those files.+*   there might not be an efficient representation of a “record” (this is true+    for columnar storage formats like Apache Parquet) but only an efficient+    representation of a batch of records.++Thus we propose that to most users, the TF Trainer only exposes a handle to a+`tf.data.Dataset` of parsed (composite) Tensors.++Each `TFXIO` implementation will implement a `TFDataset()` interface to return+such a `tf.data.Dataset`. 
This dataset contains logically a set of batched+(composite) Tensors that are of the same type as the corresponding+`TensorAdapter()` would return for a `RecordBatch`. See+[this section](#recommended-way-of-implementing-a-tfxio) about how to minimize+the code needs to be written for a new `TFXIO` implementation.++The `TFDataset()` interface will accept common knobs that a user may need to+tweak:++*   Batch size+*   Random shuffle++## Code organization and OSS++### `tfx_bsl` package++TFXIO will be used by all TFX components as well as the TFX framework, making it+be almost at the bottom of the dependency chain. Moreover, a lot of+implementations details will be in C++, with python wrapping around, and we want+to make sure our TFX components pip packages remain pure Python for easy+maintenance. Therefore we propose a new python package+[tfx_bsl](https://github.com/tensorflow/tfx-bsl) (TFX Shared Basic Libraries) to+contain the implementations of `TFXIO` and other libraries shared across TFX+components.++![alt_text](20191017-tfx-standardized-inputs/oss_lib_org.png)++## Recommended way of implementing a TFXIO++To maximize code sharing, the following way of implementing a `TFXIO` is+suggested:++![alt_text](20191017-tfx-standardized-inputs/impl_tfxio.png)++One would only need to implement the IO+Parsing-to-arrow in C++ once, and reuse+it in the BeamSource() and a format-specific Dataset Op that produces a+DT_VARIANT tensor that points to the parsed Arrow RecordBatch. Then we provide+one C++ library that translates the Arrow RecordBatch to Tensors, which can also+be reused in a TF op (as the downstream of the Dataset, or in a Python wrapper).++# Alternatives Considered++We’ve considered an alternative where+[**StructuredTensor**](https://github.com/tensorflow/community/blob/master/rfcs/20190910-struct-tensor.md)+is the unified in-memory representation, and **tf.Data** is the unified I/O+abstraction.++StructuredTensor is equally powerful as Arrow’s StructArray so it’s able to+represent all our logical representations.++Because StructuredTensor is a CompositeTensor, we could imagine that I/O ++parsing of a logical data format in a physical storage format is handled by a+specific tf.Data Dataset that yields StructuredTensors.++We also need to be able to construct a beam source from such a Dataset (although+there is a non-trivial gap).++The advantages of this approach are:++*   Less effort to integrate: adding a new format == adding a new tf.Dataset.+*   Unified story across TF and beam.+*   No third-party dependencies.+    *   Note that we can still provide an Arrow -> StructuredTensor adapter to+        achieve interoperability.++What this alternative does not change / address:++*   We still need the TensorAdapter API to convert from StructuredTensor to+    other TF feedables, unless we are willing to only offer StructuredTensors to+    our end-users (which might be the case eventually, but not likely to happen+    soon). So a good portion of this proposal will remain mostly unchanged.+*   We still cannot expose file-based or record-based interface to end-users in+    the TF trainer. As that is a direct result of I/O + parsing being abstracted+    out.++The disadvantages of this approach are:++*   This tightly couples TF with TFX components, in the following ways:++    *   Components will need TF to read the data in.+    *   Components that analyze the data (e.g. 
TFDV) will operate against+        StructuredTensors, and the easiest way to to conduct certain+        computations (for example, slicing a StructuredTensor or computing the+        mean) with StructuredTensors is through TF ops and their python+        bindings.++    Such a coupling is not in our favor because:++    *   Some TFX components functionally do not require TF to work. For example,+        TFDV can analyze any data set. TFMA can analyze a model while treating+        the model as a blackbox. In both cases, the ML library that trained the+        model is irrelevant and TF should not be assumed. And coupling with TF+        does not only introduce a heavy dependency, but also forces the user to+        learn about TF if they need to implement a TFXIO for their data format.++    *   Operations against StructuredTensors boils down to Python + TF ops. The+        overhead of either is much higher than just calling an Arrow C++ API+        that does the same operation.++    *   The main extension point of TF is Ops which don't understand the nested+        structures. Compared to using Arrow’s C++ APIs, implementing an Op that+        deals with StructuredTensors will be much complicated.++# Questions and Discussion Topics++## OSS build / release issues++### We don’t have a strong representation in the Arrow community++This has led to some issues. For example, the PyPI/Wheel packaging for pyarrow+currently is unfunded and lacks volunteers and there is a risk of support being+dropped, but we do rely on the pyarrow wheel as TFX is released on PyPI.++### ABI compatibility with libarrow++The OSS library, tfx_bsl will depend on Arrow and TensorFlow’s DSOs (dynamic+shared objects). Because both libraries currently expose C++ APIs, there are+always risks of incompatible ABIs as TensorFlow and Arrow are likely to be built+using different toolchains, we cannot completely eliminate the risks.++With [Modular TensorFlow](https://github.com/tensorflow/community/pull/77),+which replaced all the C++ APIs with C-APIs, we will be able to eliminate the

Modular TensorFlow is not yet implemented. We should probably first check whether arrow and TF can coexist. The main problems are usually protobuf and absl.

brills

comment created time in 4 months

issue commenttensorflow/tensorflow

Performance: Training is much slower in TF v2.0.0 VS v1.14.0 when using `Tf.Keras` and `model.fit_generator`

@karmel, @robieta, this looks like a problem with plain numpy input and fit_generator, both CPU and GPU. Can you take a look?

Raukk

comment created time in 5 months

pull request commenttensorflow/tensorflow

Cast `inputs` array to float64 in case of big endian architecture for normalization

We haven't deleted the contrib folder on the 2.0 branch because of build complexities. We are not running any of the tests in it though. All code inside should be considered dead.

kbhute-ibm

comment created time in 5 months

Pull request review commenttensorflow/tensorflow

Update the docstring of tf.strings.substr to cover negative len case

 For each string in the input `Tensor`, creates a substring starting at index If `len` defines a substring that would extend beyond the length of the input string, then as many characters as possible are used.

Maybe instead of adding another paragraph, change this sentence to:

If 'len' defines a substring that would extend beyond the length of the input string, or if 'len' is negative, tten as many characters as possible are used.

yongtang

comment created time in 5 months

issue commenttensorflow/tensorflow

Windows chief can not establish session with unix worker

I've always used telnet for that sort of diagnostic, though I think recent *nixes don't include it.

shahriar49

comment created time in 5 months

issue commenttensorflow/tensorflow

Windows chief can not establish session with unix worker

Is there a firewall active by default on either machine? Can you connect manually to the port?

It could be that the Unix worker doesn't accept connections by default (but can initiate them).

shahriar49

comment created time in 5 months

issue closedtensorflow/tensorflow

scientific research on the evaluation of Tensorflow

Hello, everyone! We are students from Linkoping University, Sweden. We are doing research on evaluation of Tensorflow, so we need help! There is a questionnaire we need you to answer! Just 1-2 minutes! Thank you so much for your patience and kindness!

https://docs.google.com/forms/d/e/1FAIpQLSfvqIUxxI5mm82dr_M8Ja7y_eGG0mqXzfCLQ0c6ehjqSjAQkA/viewform?usp=sf_link

closed time in 5 months

Karinsapple

issue commenttensorflow/tensorflow

scientific research on the evaluation of Tensorflow

The issue tracker is for bug reports, and I assume your post will not be widely read here. You can try posting to discuss@tensorflow.org.

Karinsapple

comment created time in 5 months

pull request commenttensorflow/tensorflow

[ROCM] Patch to enable rocm for r2.0 release branch

2.0 itself will not be built with this, it is too late for that. However, we can accept the cherry-pick once we have 2.0 binaries finalized, so that if people build from the 2.0 branch, they get ROCm capabilities.

@goldiegadde does that sound sensible?

sunway513

comment created time in 5 months

pull request commenttensorflow/tensorflow

Use static python_version dependency configuration in setup.py

Most likely, your commit uses a different email address than your GitHub account (on mobile, haven't checked). Amending the commit might be easiest.

On Tue, Sep 24, 2019, 17:56 Hiroyuki Tanaka notifications@github.com wrote:

What should i do with "CLAs are signed, but unable to verify author consent"?

— You are receiving this because your review was requested. Reply to this email directly, view it on GitHub https://github.com/tensorflow/tensorflow/pull/32758?email_source=notifications&email_token=AAEM57JE2SRJ2USFOGAJYE3QLKZKJA5CNFSM4IZYRCZ2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD7QHG6I#issuecomment-534803321, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEM57O6PV2S7URLSZOGGS3QLKZKJANCNFSM4IZYRCZQ .

aflc

comment created time in 5 months

issue commenttensorflow/tensorflow

tf.gradients() gives the conjugate of what is expected

This one? Screen Shot 2019-09-24 at 13 49 24

whdc

comment created time in 5 months

issue closedtensorflow/tensorflow

//tensorflow/contrib/distributions/python/kernel_tests/independent_test.py test fails with Assertion error

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 s390x
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: NA
  • TensorFlow installed from (source or binary): Source
  • TensorFlow version (use command below): 1.14.0
  • Python version: 2.7.15
  • Bazel version (if compiling from source): 0.24.1
  • GCC/Compiler version (if compiling from source): gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
  • CUDA/cuDNN version: NA
  • GPU model and memory: NA

Describe the current behavior

FAIL: testMnistLikeDynamicShape (__main__.ProductDistributionTest)
testMnistLikeDynamicShape (__main__.ProductDistributionTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/test/.local/lib/python2.7/site-packages/absl/third_party/unittest3_backport/case.py", line 37, in testPartExecutor
    yield
  File "/home/test/.local/lib/python2.7/site-packages/absl/third_party/unittest3_backport/case.py", line 162, in run
    testMethod()
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 275, in testMnistLikeDynamicShape
    self._testMnistLike(static_shape=False)
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 269, in _testMnistLike
    rtol=1e-6, atol=0.)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 1073, in decorated
    return f(*args, **kwds)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2303, in assertAllClose
    self._assertAllCloseRecursive(a, b, rtol=rtol, atol=atol, msg=msg)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2272, in _assertAllCloseRecursive
    (path_str, path_str, msg)))
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2207, in _assertArrayLikeAllClose
    a, b, rtol=rtol, atol=atol, err_msg="\n".join(msgs), equal_nan=True)
  File "/home/test/.local/lib/python2.7/site-packages/numpy/testing/_private/utils.py", line 1501, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/home/test/.local/lib/python2.7/site-packages/numpy/testing/_private/utils.py", line 827, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-06, atol=0
Mismatched value: a is different from b.
not close where = (array([1, 2, 2, 2, 3]), array([3, 1, 2, 4, 4]), array([6, 1, 1, 8, 6]))
not close lhs = [-463.41407785 -467.87059292 -444.45405599 -469.33429804 -457.64799854]
not close rhs = [-463.41458 -467.87006 -444.45358 -469.3348  -457.6486 ]
not close dif = [0.00050345 0.00053677 0.00047323 0.00051031 0.00059155]
not close tol = [0.00046341 0.00046787 0.00044445 0.00046933 0.00045765]
dtype = float64, shape = (4, 5, 10)
Mismatch: 2.5%
Max absolute difference: 0.00059155
Max relative difference: 1.29257756e-06
 x: array([[[-465.912459, -448.916315, -457.207675, -486.805523,
         -456.784984, -448.14827 , -453.583166, -486.295655,
         -468.533898, -481.740375],...
 y: array([[[-465.9126 , -448.9159 , -457.20764, -486.8053 , -456.7849 ,
         -448.1483 , -453.5835 , -486.2955 , -468.53412, -481.74048],
        [-472.38965, -483.41187, -464.7721 , -467.14288, -478.4115 ,...

======================================================================
FAIL: testMnistLikeStaticShape (__main__.ProductDistributionTest)
testMnistLikeStaticShape (__main__.ProductDistributionTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/test/.local/lib/python2.7/site-packages/absl/third_party/unittest3_backport/case.py", line 37, in testPartExecutor
    yield
  File "/home/test/.local/lib/python2.7/site-packages/absl/third_party/unittest3_backport/case.py", line 162, in run
    testMethod()
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 272, in testMnistLikeStaticShape
    self._testMnistLike(static_shape=True)
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 269, in _testMnistLike
    rtol=1e-6, atol=0.)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 1073, in decorated
    return f(*args, **kwds)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2303, in assertAllClose
    self._assertAllCloseRecursive(a, b, rtol=rtol, atol=atol, msg=msg)
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2272, in _assertAllCloseRecursive
    (path_str, path_str, msg)))
  File "/home/test/.local/lib/python2.7/site-packages/tensorflow/python/framework/test_util.py", line 2207, in _assertArrayLikeAllClose
    a, b, rtol=rtol, atol=atol, err_msg="\n".join(msgs), equal_nan=True)
  File "/home/test/.local/lib/python2.7/site-packages/numpy/testing/_private/utils.py", line 1501, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/home/test/.local/lib/python2.7/site-packages/numpy/testing/_private/utils.py", line 827, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-06, atol=0
Mismatched value: a is different from b.
not close where = (array([1, 2, 2, 2, 3]), array([3, 1, 2, 4, 4]), array([6, 1, 1, 8, 6]))
not close lhs = [-463.41407785 -467.87059292 -444.45405599 -469.33429804 -457.64799854]
not close rhs = [-463.41458 -467.87006 -444.45358 -469.3348  -457.6486 ]
not close dif = [0.00050345 0.00053677 0.00047323 0.00051031 0.00059155]
not close tol = [0.00046341 0.00046787 0.00044445 0.00046933 0.00045765]
dtype = float64, shape = (4, 5, 10)
Mismatch: 2.5%
Max absolute difference: 0.00059155
Max relative difference: 1.29257756e-06
 x: array([[[-465.912459, -448.916315, -457.207675, -486.805523,
         -456.784984, -448.14827 , -453.583166, -486.295655,
         -468.533898, -481.740375],...
 y: array([[[-465.9126 , -448.9159 , -457.20764, -486.8053 , -456.7849 ,
         -448.1483 , -453.5835 , -486.2955 , -468.53412, -481.74048],
        [-472.38965, -483.41187, -464.7721 , -467.14288, -478.4115 ,...

----------------------------------------------------------------------
Ran 10 tests in 1.019s

FAILED (failures=2)

Describe the expected behavior

The test should pass on s390x.

Code to reproduce the issue

python tensorflow/contrib/distributions/python/kernel_tests/independent_test.py

closed time in 5 months

anujajakhade

issue commenttensorflow/tensorflow

//tensorflow/contrib/distributions/python/kernel_tests/independent_test.py test fails with Assertion error

This this would be merely fixing a test, and since contrib won't be distributed with TensorFlow going forward, I think we should not worry about this.

There is no user visible impact of any change to fix this. If we had a fix for the underlying problem that would be interesting.

anujajakhade

comment created time in 5 months

PR closed tensorflow/tensorflow

Cast `inputs` array to float64 in case of big endian architecture for normalization cla: yes contrib prtype:bugfix ready to pull size:XS

These changes fixes https://github.com/tensorflow/tensorflow/issues/26135

+8 -1

8 comments

1 changed file

kbhute-ibm

pr closed time in 5 months

pull request commenttensorflow/tensorflow

Cast `inputs` array to float64 in case of big endian architecture for normalization

No tests for contrib will be run in TF 2.0. In fact, contrib has plainly ceased to exist outside the 1.x branches.

Can you request a cherry-pick into the 1.15 branch?

@goldiegadde FYI, this is a harmless cherrypick that we can merge into 1.15.

kbhute-ibm

comment created time in 5 months

pull request commenttensorflow/tensorflow

forward_compatible env variable caching perf optimization.

Are we picking this only into 1.15, or also 2.0?

It's a great change, but probably not worth it for 1.15 unless we make the same change in 2.0.

In fact, because forward compat dates are fixed, it would be better to completely expunge the forward compatibility tooling in the releases, but that's a bit more involved.

kkimdev

comment created time in 5 months

issue commenttensorflow/tensorflow

Survey on Pull Request Prioritization

Thank you for note. We generally respond to all PRs (except obviously fraudulent ones) so I am not sure we'd deliver meaningful input data. TensorFlow is also too large to deal with for a single person, so we have a hierarchical triage process that identifies the best person to decide on any specific PR.

Consequently, I could not properly fill out your survey, and I don't know that anyone else can.

I will close this issue. This is fantastic research!

IlyasAzeem

comment created time in 5 months

issue closedtensorflow/tensorflow

Survey on Pull Request Prioritization

Dear Pull Requests integrators, We are an international group of researchers investigating Pull Requests management activities. We implemented an automated approach, named CARTESIAN, for prioritizing Pull Requests (PRs) received by an open-source project according to the likelihood of acceptance/response. We would like to evaluate whether CARTESIAN can help integrators when reviewing PRs. As we noticed that your project receives many PRs daily, we have taken the liberty of contacting you. Thus, we experimented CARTESIAN on Your projects, to help you in prioritizing PRs. To know more about these results, please fill in the form below. Survey form link: https://forms.gle/bGkj3nRH9i6ArjUx8 Your participation is voluntary and confidential. We kindly request you, ONLY to INTEGRATORS, to participate in this study which is expected to take about 15 minutes of your time. You might withdraw at any time. Best regards, Muhammad Ilyas Azeem, National Engineering Research Center of Fundamental Software, Chinese Academy of Sciences, China Andrea Di Sorbo, University of Sannio, Italy Sebastiano Panichella, Zurich University of Applied Science, Switzerland Alexander Serebrenik, Eindhoven University of Technology, The Netherlands

closed time in 5 months

IlyasAzeem

issue closedtensorflow/tensorflow

Plain English explanation of CLA?

URL(s) with the issue:

https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md#contributor-license-agreements

Description of issue (what needs changing):

I have no idea what any of this legal mumbo-jumbo actually entails. "Grant of Patent License", "Grant of Copyright License". Do I own my contributions? Does Google own my contributions? What does all of this mess mean? If I create something and "give it away", I want to make it free as in gratis and as in libre widely, and not just to Google. Is that happening here? Does Google charge/restrict people (e.g. corporations) using tensorflow? Can Google charge/restrict people using tensorflow and/or my contributions?

Submit a pull request?

No.

closed time in 5 months

csbrown

issue commenttensorflow/tensorflow

Plain English explanation of CLA?

I have no idea what any of this legal mumbo-jumbo actually entails. "Grant of Patent License", "Grant of Copyright License". Do I own my contributions?

Yes, you own your contributions. The preamble of the agreement addresses this.

Does Google own my contributions?

No, Google only receives a license to your contribution, as detailed in Sections 2 and 3 of the agreement, which enables Google to incorporate the contribution into the project.

What does all of this mess mean? If I create something and "give it away", I want to make it free as in gratis and as in libre widely, and not just to Google. Is that happening here?

You are entirely welcome to make your contributions as free as possible. The Contributor License Agreement grants Google broad permission to use the contribution, which Google then uses to release your contribution as part of TensorFlow under the Apache 2 license. However, since you retain ownership of your contributions, you are also free to publish those contributions (and/or anything else that you personally hold the rights to) under the most liberal terms possible by publishing it on GitHub (or elsewhere) under a license such as the CC0.

Does Google charge/restrict people (e.g. corporations) using tensorflow? Can Google charge/restrict people using tensorflow and/or my contributions?

TensorFlow is released under the Apache 2 license which is a perpetual open source license. This means that Google cannot prevent people, corporations, you, or anyone from ever using TensorFlow under the terms of the Apache 2 license.

Let me know if you have more questions.

csbrown

comment created time in 5 months

pull request commenttensorflow/tensorflow

[ROCM] Patch to enable rocm for r2.0 release branch

This will not enable ROCm, right? It's still guarded by ifdefs?

This is a large change, and not one I would like to accept this late in the release cycle. What is the goal here?

sunway513

comment created time in 5 months

pull request commenttensorflow/tensorflow

Release Notes for 2.0.0-rc0

I think so.

goldiegadde

comment created time in 5 months

pull request commenttensorflow/tensorflow

Release Notes for 2.0.0-rc0

You mean, removed freeze_graph tool?

goldiegadde

comment created time in 5 months

created repositorytensorflow/java-models

Models in Java

created time in 5 months

created repositorytensorflow/java

Java bindings for TensorFlow

created time in 5 months

issue commenttensorflow/tensorflow

Back-propagating gradients through a sparse tensor?

Is the gradient only undefined in 2.0? That would be very surprising. I find it more likely that we never had this gradient. It would be great to add it -- an issue with this feature request would be a good start.

On Tue, Sep 3, 2019 at 2:51 AM László Mérő notifications@github.com wrote:

On TF 2.0rc this does not work, the gradient for the sparse_dense_matmul op is not defined.

— You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/tensorflow/tensorflow/issues/6998?email_source=notifications&email_token=AAEM57JSXUFX6JEVEALA3UTQHYXSPA5CNFSM4C5IVNBKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD5XUMBQ#issuecomment-527386118, or mute the thread https://github.com/notifications/unsubscribe-auth/AAEM57JCALE2R2XNQXMUE2DQHYXSPANCNFSM4C5IVNBA .

zergylord

comment created time in 5 months

PR closed tensorflow/tensorflow

Reviewers
Create global C++/.h/.proto .clang-format cla: yes size:S

This commit creates a .clang-format file for global C++/.h/.proto .clang-format. This code can be changed later on. This just makes the TensorFlow project simpler on the fact of code formatting.

FORMAT OPTIONS

Style Guide: Google The .clang-format file follows Google's style guide.

+14 -0

5 comments

1 changed file

aaronhma

pr closed time in 6 months

pull request commenttensorflow/tensorflow

Create global C++/.h/.proto .clang-format

We are now auto-formatting all code that is checked in. So for C++ code, whatever you do at home is fine, we'll auto-format anyway.

Using the Google style clang-format will work fine for that purpose, the differences are slight at best.

I'd rather not check this into the repo though, because what will happen is that if you reformat code you didn't write, that'll cause larger than necessary diffs.

For that reason, I'll close this PR.

Thank you!

aaronhma

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.

space before (

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.

tensorflow.compat.v1, tensorflow.compat.v2

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.+  * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. 
Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).++## Bug Fixes and Other Changes+* `tf.data`:+  * Promoting `unbatch` from experimental to core API.+  * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* `tf.keras`:+  * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights,  allowing a dramatic speedup for large sparse models.+  * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile.+  * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.+* `tf.lite`+  * Add `GATHER` support to NN API delegate.+  * tflite object detection script has a debug mode.+  * Add delegate support for QUANTIZE.+  * Added evaluation script for COCO minival.+  * Add delegate support for `QUANTIZED_16BIT_LSTM`.+  * Converts hardswish subgraphs into atomic ops.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* `parallel_for`: Add converter for `MatrixDiag`.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* Added new op: `tf.strings.unsorted_segment_join`.+* Add HW acceleration support for `topK_v2`.+* Add new `TypeSpec` classes.+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Expose `Head` as public API.+* Update docstring for gather to properly describe the non-empty batch_dims case.+* Added `tf.sparse.from_dense` utility function.+* Improved ragged tensor support in `TensorFlowTestCase`.+* Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.+* `ResizeInputTensor` now works for all delegates.+* Add `EXPAND_DIMS` support to NN API delegate TEST:  expand_dims_test+* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.+* `tf.cond`, `tf.while` and `if` and `while` in AutoGraph now accept a nonscalar predicate if has a single element. This does not affect non-V2 control flow.+* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.+* Refactors code in Quant8 LSTM support to reduce TFLite binary size.+* Add support of local soft device placement for eager op.+* Pass partial_pivoting to the `_TridiagonalSolveGrad`.

I'd remove this one, it's too obscure.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.+  * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. 
Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).++## Bug Fixes and Other Changes+* `tf.data`:+  * Promoting `unbatch` from experimental to core API.+  * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* `tf.keras`:+  * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights,  allowing a dramatic speedup for large sparse models.+  * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile.+  * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.+* `tf.lite`+  * Add `GATHER` support to NN API delegate.+  * tflite object detection script has a debug mode.+  * Add delegate support for QUANTIZE.+  * Added evaluation script for COCO minival.+  * Add delegate support for `QUANTIZED_16BIT_LSTM`.+  * Converts hardswish subgraphs into atomic ops.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* `parallel_for`: Add converter for `MatrixDiag`.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* Added new op: `tf.strings.unsorted_segment_join`.+* Add HW acceleration support for `topK_v2`.+* Add new `TypeSpec` classes.+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0

some bad formatting here

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.+  * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. 
Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).++## Bug Fixes and Other Changes+* `tf.data`:+  * Promoting `unbatch` from experimental to core API.+  * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* `tf.keras`:+  * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights,  allowing a dramatic speedup for large sparse models.+  * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile.+  * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.+* `tf.lite`+  * Add `GATHER` support to NN API delegate.+  * tflite object detection script has a debug mode.+  * Add delegate support for QUANTIZE.+  * Added evaluation script for COCO minival.+  * Add delegate support for `QUANTIZED_16BIT_LSTM`.+  * Converts hardswish subgraphs into atomic ops.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* `parallel_for`: Add converter for `MatrixDiag`.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* Added new op: `tf.strings.unsorted_segment_join`.+* Add HW acceleration support for `topK_v2`.+* Add new `TypeSpec` classes.+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Expose `Head` as public API.+* Update docstring for gather to properly describe the non-empty batch_dims case.

batch_dims

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.

float32 and float64

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.+  * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. 
Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).++## Bug Fixes and Other Changes+* `tf.data`:+  * Promoting `unbatch` from experimental to core API.+  * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* `tf.keras`:+  * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights,  allowing a dramatic speedup for large sparse models.+  * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile.+  * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.+* `tf.lite`+  * Add `GATHER` support to NN API delegate.+  * tflite object detection script has a debug mode.+  * Add delegate support for QUANTIZE.+  * Added evaluation script for COCO minival.+  * Add delegate support for `QUANTIZED_16BIT_LSTM`.+  * Converts hardswish subgraphs into atomic ops.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* `parallel_for`: Add converter for `MatrixDiag`.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* Added new op: `tf.strings.unsorted_segment_join`.+* Add HW acceleration support for `topK_v2`.+* Add new `TypeSpec` classes.+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Expose `Head` as public API.+* Update docstring for gather to properly describe the non-empty batch_dims case.+* Added `tf.sparse.from_dense` utility function.+* Improved ragged tensor support in `TensorFlowTestCase`.+* Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.+* `ResizeInputTensor` now works for all delegates.+* Add `EXPAND_DIMS` support to NN API delegate TEST:  expand_dims_test+* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.+* `tf.cond`, `tf.while` and `if` and `while` in AutoGraph now accept a nonscalar predicate if has a single element. 
This does not affect non-V2 control flow.+* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.+* Refactors code in Quant8 LSTM support to reduce TFLite binary size.+* Add support of local soft device placement for eager op.+* Pass partial_pivoting to the `_TridiagonalSolveGrad`.+* Add HW acceleration support for `LogSoftMax`.+* Added a function nested_value_rowids for ragged tensors.

nested_value_rowids

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable

Make a bullet list

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.+This enables writing forward compatible code: by explicitly importing either tensorflow.compat.v1 or tensorflow.compat.v2, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.+* EagerTensor now supports buffer interface for tensors.+* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.+* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.+* Adds enable_tensor_equality(), which switches the behavior such that: +  Tensors are no longer hashable+  Tensors can be compared with == and !=, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0++## Breaking Changes+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow). We don't expect this to be breaking, unless you were importing directly from the implementation.+* TensorFlow 1.15 is built using devtoolset7(GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* `tf.keras`:+  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+  * `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel.+  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed, a bug in the resizing implementation was fixed.+  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.+  * Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. 
Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).++## Bug Fixes and Other Changes+* `tf.data`:+  * Promoting `unbatch` from experimental to core API.+  * Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* `tf.keras`:+  * `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights,  allowing a dramatic speedup for large sparse models.+  * Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables single training/eval/predict execution path. With this 1. All input types are converted to `Dataset`. 2. When distribution strategy is not specified this goes through the no-op distribution strategy path. 3. Execution is wrapped in tf.function unless `run_eagerly=True` is set in compile.+  * Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.+* `tf.lite`+  * Add `GATHER` support to NN API delegate.+  * tflite object detection script has a debug mode.+  * Add delegate support for QUANTIZE.+  * Added evaluation script for COCO minival.+  * Add delegate support for `QUANTIZED_16BIT_LSTM`.+  * Converts hardswish subgraphs into atomic ops.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* `parallel_for`: Add converter for `MatrixDiag`.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* Added new op: `tf.strings.unsorted_segment_join`.+* Add HW acceleration support for `topK_v2`.+* Add new `TypeSpec` classes.+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Expose `Head` as public API.+* Update docstring for gather to properly describe the non-empty batch_dims case.+* Added `tf.sparse.from_dense` utility function.+* Improved ragged tensor support in `TensorFlowTestCase`.+* Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.+* `ResizeInputTensor` now works for all delegates.+* Add `EXPAND_DIMS` support to NN API delegate TEST:  expand_dims_test+* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.+* `tf.cond`, `tf.while` and `if` and `while` in AutoGraph now accept a nonscalar predicate if has a single element. 
This does not affect non-V2 control flow.+* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.+* Refactors code in Quant8 LSTM support to reduce TFLite binary size.+* Add support of local soft device placement for eager op.+* Pass partial_pivoting to the `_TridiagonalSolveGrad`.+* Add HW acceleration support for `LogSoftMax`.+* Added a function nested_value_rowids for ragged tensors.+* fixed a bug in histogram_op.cc.

remove this, it's too obscure.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0+This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year. ++## Major Features and Improvements+* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its compat.v2 module. It contains a copy of the 1.15 main module (without contrib) in the compat.v1 module. TensorFlow 1.15 is able to emulate 2.0 behavior using the enable_v2_behavior() function.

contrib, compat.v1, enable_v2_behavior()

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

[TF 2.0 Docs] Doc update for math functions

 op {   description: <<END *NOTE*: `Greater` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)++```python

And you need an empty line before the Example: otherwise it won't be recognized as a section heading.

SSaishruthi

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

[TF 2.0 Docs] Doc update for math functions

 op {   description: <<END *NOTE*: `GreaterEqual` supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)+  Example:

Here and elsewhere: the indentation is wrong. Please fix.

SSaishruthi

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Release Notes for 2.0.0-rc0

[2.0.0-rc0 release notes diff, truncated; last quoted line:]

+* Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow 2, and a warning will be issued that starts with "Layer <layer-name> is casting an input tensor from dtype float64 to the layer's dtype of float32". To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.

Add here (@gunan to confirm words are correct):

TensorFlow 2.0 is built using devtoolset7 on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.

goldiegadde

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Release Notes for 2.0.0-rc0

[2.0.0-rc0 release notes diff, truncated; last quoted line:]

+* The equality operation on Tensors & Variables now compares on value instead of `id()`. As a result, both Tensors & Variables are no longer hashable types.

change to:

Tensors are no longer hashable, but instead compare element-wise with == and !=. Use tf.compat.v1.disable_tensor_equality() to return to the previous behavior.
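A minimal sketch of the proposed wording in action (assuming TF 2.x eager semantics):

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([1, 2, 3])

# == and != now compare element-wise and return a boolean tensor.
print(a == b)  # tf.Tensor([ True  True  True], shape=(3,), dtype=bool)

# As a consequence, Tensors are no longer hashable:
# {a: 'value'}  # would raise TypeError

# Opt back into the 1.x id()-based behavior if needed.
tf.compat.v1.disable_tensor_equality()
```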

goldiegadde

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Release Notes for 2.0.0-rc0

+# Release 2.0.0-rc0
+
+## Major Features and Improvements

It does link to those guides. Do you mean link directly to the upgrade script?

goldiegadde

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Release Notes for 2.0.0-rc0

+# Release 2.0.0-rc0

-rc1?

goldiegadde

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Release Notes for 2.0.0-rc0

[2.0.0-rc0 release notes diff, truncated; last quoted line:]

+  * Add support for passing list of lists to the `metrics` argument in Keras `compile.

missing a closing backtick after `compile`
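For context, a sketch of the feature the unclosed bullet describes, using a hypothetical two-output model; the list-of-lists shape follows the Keras `compile` API:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
out_a = tf.keras.layers.Dense(1, name='a')(inputs)
out_b = tf.keras.layers.Dense(1, name='b')(inputs)
model = tf.keras.Model(inputs, [out_a, out_b])

# One inner list of metrics per model output.
model.compile(optimizer='adam',
              loss='mse',
              metrics=[['mae'], ['mae', 'mse']])
```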

goldiegadde

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted lines:]

+* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types.
+* Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.
+* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types.

Remove, this is a duplicate.

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted line:]

+* Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the session.run(). When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in `feed_dict` argument to session.run(), an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).

Move to breaking changes
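A sketch of why this is breaking, assuming the inputs are statically known constants:

```python
import tensorflow as tf

x = tf.constant(2.0)
y = tf.constant(3.0)

# Values are known at op-creation time, so the failed check raises here,
# at this Python line, rather than at session.run(); on success a no-op
# is returned and the inputs become non-feedable.
tf.compat.v1.assert_greater(x, y)  # raises InvalidArgumentError immediately
```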

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; relevant quoted lines:]

+* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
[...]
+* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
would be nice to group these better: this one should go next to the corresponding tf.cond change above.

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted line:]

+* This change bumps the version number of the FullyConnected Op to 5.

Delete.

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted line:]

+* `tf.cond`, `tf.while` and if and while in AutoGraph now accept a nonscalar predicate if has a single element. This does not affec non-V2 control flow.

Put `if` and `while` in code font, and fix the typo: "affec" should be "affect".
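For clarity, a sketch of what the corrected bullet means, assuming v2 control flow (the predicate may be non-scalar as long as it has exactly one element):

```python
import tensorflow as tf

pred = tf.constant([True])  # shape (1,): non-scalar, single element

# Accepted under v2 control flow; returns the true branch's value.
result = tf.cond(pred, lambda: tf.constant(1), lambda: tf.constant(0))
```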

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted lines:]

+* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types.
[...]
+* The equality operation on Tensors & Variables now compares on value instead of id(). As a result, both Tensors & Variables are no longer hashable types.

remove duplicate

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted line:]

+* Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.

Move to breaking changes.
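The two fixes the note suggests, sketched (the layer size is a placeholder):

```python
import tensorflow as tf

# Option 1: make float64 the global default dtype for all Keras layers.
tf.keras.backend.set_floatx('float64')

# Option 2: set the dtype explicitly on each layer.
layer = tf.keras.layers.Dense(10, dtype='float64')
```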

tensorflow-jenkins

comment created time in 6 months

Pull request review comment tensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

[1.15.0 release notes diff, truncated; last quoted line:]

+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.

Move this to breaking changes.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.+* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+* Added new op: `tf.strings.unsorted_segment_join`.+* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`.+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow)+* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* Add HW acceleration support for topK_v2+* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+* Add new `TypeSpec` classes+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* Expose Head as public API.

Head, ideally, use fully qualified name.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.+* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+* Added new op: `tf.strings.unsorted_segment_join`.+* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`.+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow)+* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* Add HW acceleration support for topK_v2+* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+* Add new `TypeSpec` classes+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* Expose Head as public API.+* AutoGraph is now applied automatically to user functions passed to APIs of `tf.data` and `tf.distribute`. If AutoGraph is disabled in the the calling code, it will also be disabled in the user functions.

Delete. This is covered by a different bullet.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.+* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+* Added new op: `tf.strings.unsorted_segment_join`.+* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`.+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow)+* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* Add HW acceleration support for topK_v2+* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+* Add new `TypeSpec` classes+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.

move to breaking changes

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.+* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+* Added new op: `tf.strings.unsorted_segment_join`.+* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`.+* Tensorflow code now produces 2 different pip packages: tensorflow_core containing all the code (in the future it will contain only the private implementation) and tensorflow which is a virtual pip package doing forwarding to tensorflow_core (and in the future will contain only the public API of tensorflow)+* Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.+* Add HW acceleration support for topK_v2+* Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.+* Add new `TypeSpec` classes+* CloudBigtable version updated to v0.10.0 BEGIN_PUBLIC CloudBigtable version updated to v0.10.0+* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.+* Expose Head as public API.+* AutoGraph is now applied automatically to user functions passed to APIs of `tf.data` and `tf.distribute`. If AutoGraph is disabled in the the calling code, it will also be disabled in the user functions.+* Update docstring for gather to properly describe the non-empty batch_dims case.+* Added `tf.sparse.from_dense` utility function.+* Add `GATHER` support to NN API delegate+* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.

Move this to major changes.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes+

Add here (@gunan to confirm words are correct):

TensorFlow 1.15 is built using devtoolset7 on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.

parallel_for

tensorflow-jenkins

comment created time in 6 months

Pull request review commenttensorflow/tensorflow

Update release notes for TensorFlow 1.15.0

+# Release 1.15.0++## Major Features and Improvements++## Breaking Changes++## Bug Fixes and Other Changes++* Promoting `unbatch` from experimental to core API.+* Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.+* EagerTensor now support buffer interface for tensors.+* This change bumps the version number of the FullyConnected Op to 5.+* tensorflow : crash when pointer become nullptr.+* `tf.keras.estimator.model_to_estimator` now supports exporting to tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with `model.load_weights`.+* Add support for defaulting the value of `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.+* parallel_for: Add converter for `MatrixDiag`.+* Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.+* Add `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.+* `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use `tf.config.threading` APIs.+* Added new op: `tf.strings.unsorted_segment_join`.+* `tf.keras.model.save_model` and `model.save` now defaults to saving a TensorFlow SavedModel. A SavedModel code is only applicable to `tf.keras`.

I don't understand the last sentence in this bullet.

Should this be in breaking changes?

tensorflow-jenkins

comment created time in 6 months

more