profile
Gunhan Gulsoy gunan @google @tensorflow Mountain View, CA

jenkinsci/google-source-plugin 6

This plugin provides the credential provider to use Google Cloud Platform OAuth Credentials (provided by the Google OAuth Plugin) to access source code from https://source.developer.google.com as well as https://googlesource.com.

gunan/tensorflow 2

Computation using data flow graphs for scalable machine learning

gunan/abseil-cpp 0

Abseil Common Libraries (C++)

gunan/community 0

Stores documents used by the TensorFlow developer community

gunan/docs 0

TensorFlow documentation

gunan/pipe 0

Reproduce the pipe not unblocking.

gunan/tensorflow-docker 0

Unofficial docker images that are able to build and run TF on unsupported operating systems.

issue comment tensorflow/tensorflow

Tensorflow 2.0.0 - DLL load failed - windows cpu

At the moment, PyPI/pip does not have a mechanism to deliver different packages based on CPU instruction sets. One potential option is to create a package that contains all three builds and have the TF Python code pick whichever one it can use. However, we are already well over the PyPI package size limit, and we are not allowed to upload anything larger at the moment.

Therefore, it is infeasible for us to deliver what you recommended via PyPI without degrading the user experience.
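For illustration only, here is a minimal sketch of the "one package that has all three builds" idea, where the Python layer picks a native module at import time based on CPU flags. The module names (tensorflow._native_avx2 and friends) and the overall layout are hypothetical; this is not how the real wheel is structured.

# Hypothetical sketch: choose one of several bundled native extensions
# based on the CPU flags detected at import time.
import importlib

def _load_native(cpu_flags):
    if "avx2" in cpu_flags:
        name = "tensorflow._native_avx2"   # hypothetical module name
    elif "avx" in cpu_flags:
        name = "tensorflow._native_avx"    # hypothetical module name
    else:
        name = "tensorflow._native_sse"    # hypothetical module name
    return importlib.import_module(name)

The catch, as noted above, is that bundling all three builds makes the wheel far larger, which runs into the PyPI size limit.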

tessy1234

comment created time in 18 hours

issue closed tensorflow/tensorflow

Tensorflow 2.0.0 - DLL load failed - windows cpu

Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 64 bit
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: n/a
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: 2.0.0
  • Python version: 3.7.6
  • Installed using virtualenv? pip? conda?: pip and conda
  • Bazel version (if compiling from source): n/a
  • GCC/Compiler version (if compiling from source): n/a
  • CUDA/cuDNN version: n/a
  • GPU model and memory: n/a

Describe the problem

Installation of TensorFlow 2.0.0 using pip was successful, but the TensorFlow import failed. In contrast, installation of TensorFlow 2.0.0 using conda was successful and the TensorFlow import also worked. Why does the conda env work fine but the pip env doesn't?

Provide the exact sequence of commands / steps that you executed before running into the problem

I created 2 conda environments: one for conda release and the other for pip release, both dedicated to tensorflow 2.0.0.

  1. tensorflow2_0_conda environment: conda installation using conda install python=3.7 tensorflow=2.0

  2. tensorflow2_0_pip environment: pip installation using conda install python=3.7 followed by pip install tensorflow==2.0.0

Both installations completed without any error.

Import test: python -c "import tensorflow as tf"

Re 1) the conda env works fine. Re 2) the pip env failed with the error message (details in the logs below): ImportError: DLL load failed ...

Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

(tensorflow2_0_pip)>python -c "import tensorflow as tf"
Traceback (most recent call last):
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow\__init__.py", line 98, in <module>
    from tensorflow_core import *
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\__init__.py", line 40, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__
    module = self._load()
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow\__init__.py", line 44, in _load
    module = _importlib.import_module(self.__name__)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Anaconda3\envs\tensorflow2_0_pip\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.

(tensorflow2_0_pip)>

closed time in 2 days

tessy1234

issue comment tensorflow/tensorflow

Tensorflow 2.0.0 - DLL load failed - windows cpu

To answer your question of why the conda install loads successfully but the pip one does not: conda packages are built by the conda community, and the pip packages are the official ones built by us. The reason the pip package cannot start on your system is that the packages on PyPI require the AVX instruction set, and your CPU (i7-820QM) does not have it. Anaconda packages use MKL to be more compatible and support older CPUs. The reason the official TF builds on PyPI do not use MKL is that, while it provides compatibility, at the graph level MKL is not as fast as the non-MKL solutions for TF.
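As a quick way to check the AVX point above, here is a small sketch that assumes the third-party py-cpuinfo package (pip install py-cpuinfo) is available; it simply reports whether the CPU advertises the AVX flag that the official PyPI wheels need.

# Report whether this CPU advertises AVX (required by the official pip wheels).
from cpuinfo import get_cpu_info  # third-party py-cpuinfo package

flags = get_cpu_info().get("flags", [])
print("AVX supported:", "avx" in flags)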

tessy1234

comment created time in 2 days

issue comment tensorflow/tensorboard

io_wrapper_test failing at head

For example, on Windows, files are not allowed to have "*" in their names. So if TensorBoard intentionally puts "*" in its file names, you lose platform compatibility.
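A minimal illustration of the portability point: a filename that is fine on POSIX can be rejected outright by Windows because it contains reserved characters such as "*".

# Characters Windows rejects in file names, even though POSIX allows most of them.
WINDOWS_RESERVED_CHARS = set('<>:"/\\|?*')

def is_windows_safe(name):
    return not (set(name) & WINDOWS_RESERVED_CHARS)

print(is_windows_safe("events.out.tfevents.123"))  # True
print(is_windows_safe("run*1"))                     # False: '*' is reserved on Windows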

wchargin

comment created time in 6 days

issue comment tensorflow/tensorboard

io_wrapper_test failing at head

Thanks for the notification. I do not think we would be able to roll back the commit; there were many commits following the one above, so it would be quite difficult to resolve all the issues. Adding @nairb774 and @mihaimaruseac for context. We will look into fixing forward, but is using "*" in file paths a vital use case, or is it just a corner case that only happens in the tests?

wchargin

comment created time in 6 days

issue comment tensorflow/tensorflow

How could I add a new OS to the "tested and supported" list?

Our Python packages are built according to the "manylinux2010" standard. So if you use the prebuilt packages from PyPI, they should work on 90% of Linux distributions.
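To see whether a given interpreter would accept a manylinux2010 wheel, one option (assuming the third-party packaging library is installed; the exact tag names vary with its version) is to list the wheel tags pip would consider compatible:

# Print the first few wheel tags this interpreter accepts, e.g.
# cp37-cp37m-manylinux2010_x86_64 on a compatible system.
from packaging import tags

for tag in list(tags.sys_tags())[:10]:
    print(tag)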

Hui-Zhi

comment created time in 6 days

pull request comment tensorflow/tensorflow

Fix Bazel not building anymore with the commit 09fe958f

bmzhao@ has more context about this than me. I will trigger the kokoro tests to see if our CI is ready for this.

DEKHTIARJonathan

comment created time in 7 days

issue comment tensorflow/tensorflow

TensorFlow for C doesn't compile - cannot open source file "tensorflow/c/tf_attrtype.h"

@yifeif @av8ramit looks like some headers are missing on the windows package.

RochaStratovan

comment created time in 8 days

issue comment tensorflow/tensorflow

tensorflow java GPU compute capabilities 6.0 instead of 3.7

The person who used to build these has left the team. So I do not know who owns them now. @goldiegadde any ideas?

callicles

comment created time in 8 days

issue comment tensorflow/tensorflow

'Tensor' object has no attribute 'numpy'

It is probably because the pip version is just too old. Could you check the pip version? Our minimum required pip version is listed on our installation page.
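A quick way to check the installed pip version from Python (upgrading is typically python -m pip install --upgrade pip):

# Print the pip version; a very old pip will not even see the manylinux2010
# TensorFlow wheels on PyPI.
import pip
print(pip.__version__)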

TheUnixDude

comment created time in 13 days

delete branch tensorflow/tensorflow

delete branch : jvishnuvardhan-patch-3

delete time in 14 days

delete branch tensorflow/tensorflow

delete branch : jvishnuvardhan-patch-6

delete time in 14 days

delete branch tensorflow/tensorflow

delete branch : master-where-we-want-it

delete time in 14 days

PR closed tensorflow/tensorflow

Master at alpha0

+382 -0

0 comment

2 changed files

FREECREDIT10MYR

pr closed time in 14 days

delete branch tensorflow/tensorflow

delete branch : master-at-alpha0

delete time in 14 days

issue comment tensorflow/tensorflow

TF 2.1 libtensorflow library does not build under Windows

I am sorry, I have been knee deep in another task lately, and have not been able to respond. I will try to look into this tomorrow.

josdewitte

comment created time in 14 days

issue comment tensorflow/tensorflow

Tensorflow not properly installing

As far as I can tell, you have a CPU with AVX, and CUDA seems to be installed properly. @chsigg @yifeif could this error be caused by a mismatching cudnn version? Which cudnn version was 2.1 built against?

Bstrum36

comment created time in 16 days

pull request comment tensorflow/tensorflow

mkl_dnn as syslib

@perfinion could you review this change?

mslacken

comment created time in 16 days

Pull request review comment tensorflow/tensorflow

Print sys Info at the start of the Windows console

 import code
 import sys
+from tensorflow.tools.test import system_info_lib
+
+# print system information..
+def Info(unused_args):

All the prints when users load TF are usually misleading, and they confuse and annoy users. What would be the exact use case for this? When exactly would this be shown?

Mohamed-94

comment created time in 20 days

issue comment tensorflow/tensorflow

Custom C++ operator compilation error with Tensorflow 2.1

Sorry for missing this issue so far. @yifeif is our custom ops expert. Yifei, any ideas on this one?

Lillypucien

comment created time in 20 days

Pull request review comment tensorflow/tensorflow

No more fixed link for Cuda Archive download link

 def preload_check():
         try:
           ctypes.WinDLL(build_info.cudart_dll_name)
         except OSError:
+          with open("cuda_links.json") as f:

This will almost definitely not work with bazel. I am not sure if it would work at all with pip.

So, please avoid an extra file here.
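One hedged alternative, sketched below, is to keep the mapping in a Python constant inside the module instead of a separate data file, so nothing extra has to be packaged by bazel or pip. The keys and URLs here are illustrative only, not an authoritative list.

# Keep the CUDA download links as an in-module constant rather than a
# cuda_links.json data file that bazel/pip would have to ship.
CUDA_DOWNLOAD_URLS = {
    "10.0": "https://developer.nvidia.com/cuda-10.0-download-archive",
    "10.1": "https://developer.nvidia.com/cuda-10.1-download-archive-base",
}

def cuda_download_url(version):
    return CUDA_DOWNLOAD_URLS.get(
        version, "https://developer.nvidia.com/cuda-downloads")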

Hyperclaw79

comment created time in 21 days

Pull request review comment tensorflow/tensorflow

No more fixed link for Cuda Archive download link

 def preload_check():
         try:
           ctypes.WinDLL(build_info.cudart_dll_name)
         except OSError:
+          with open("cuda_links.json") as f:
+            link_dict = json.load(f)
           raise ImportError(
               "Could not find %r. TensorFlow requires that this DLL be "
               "installed in a directory that is named in your %%PATH%% "
               "environment variable. Download and install CUDA %s from "
-              "this URL: https://developer.nvidia.com/cuda-90-download-archive"
-              % (build_info.cudart_dll_name, build_info.cuda_version_number))
+              "this URL: %s"

Why not just use https://developer.nvidia.com/cuda-downloads ? You can click "legacy releases" and get the other links.

Hyperclaw79

comment created time in 21 days

issue comment tensorflow/tensorflow

can't install with python3.8

@martinwicke @ewilderj do we have our release roadmap published externally?

amitport

comment created time in 22 days

issue comment tensorflow/tensorflow

Tensorflow install breaks curl on Ubuntu

What do you mean by "rasa install"? Standard tensorflow pip package installations should have no interactions with curl/libcurl.

John-Nagle

comment created time in 23 days

issue comment tensorflow/tensorflow

GPU unit-test infrastructure breakage (`Dockerfile.gpu`, `gpu/run_py3_core.sh`)

We did update our dockerfiles, but our CI does not run using ci_build.sh anymore. One thing to try would be to use the devel dockerfile TF publishes on dockerhub, and run the run_py3_core.sh script you ran directly on it.

The python version issue you ran into may be due to ci_build.sh trying to override the PYTHON_BIN_PATH environment variable.

deven-amd

comment created time in 23 days

issue comment tensorflow/tensorflow

Build using nvcc + clang?

@Artem-B may have some details. I remember that at one point, when we first tried this, the CUDA headers had a check that blocked building with Clang. I do not know if this still applies.

Another option is to just use clang when building TF. Clang can already compile cuda code.

djwenren

comment created time in 25 days

pull request comment tensorflow/tensorflow

[S3] Adding retries for S3 file system operations

Thank you very much for your contribution. I did move some code around here, but I may not be very familiar with everything. I am adding @mihaimaruseac, who would be more familiar with the filesystems than I am.

rahul003

comment created time in a month

issue comment tensorflow/tensorflow

TF 2.1 libtensorflow library does not build under Windows

@goldiegadde @mihaimaruseac did we see a similar issue in 2.1 branch? On master, my builds seem to be healthy.

josdewitte

comment created time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 1a55202697446b5c3fa6294acf54979e4c9af3f7

Allow multiplexed IO, by running a thread for each io channel.

view details

push time in a month

issue closed tensorflow/tensorflow

variable_ops.h missing from pip install

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Centos 7
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: 1.14.0
  • Python version:3.6
  • Installed using virtualenv? pip? conda?: Pip
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10.0
  • GPU model and memory: p100

Describe the problem

When building custom ops, we want to link against the pip installation for end-user simplicity. We were testing expanding the registered dtypes for variables on GPUs through subclassing. We found that the header for the variable op is missing from the pip installation. We had to copy the file locally from source and reference it in our custom op code, which is undesirable from a maintenance perspective. Can the file be added to the proper place in the includes?

closed time in a month

dirktheeng

issue comment tensorflow/tensorflow

variable_ops.h missing from pip install

core/kernels by design contains just our implementations of kernels. Most libraries there have not been designed to be reused, so in our code structure they are meant to be hidden symbols not used by others.

Moreover, on Windows we have to limit the number of symbols we export, because Windows puts a limit on the number of symbols exported from shared objects. So that folder is completely excluded.

Again, if you would like to use any libraries under core/kernels, you may refactor the code out of that directory.

dirktheeng

comment created time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 0a5a20b5a752e029ccfce524521bbbb31ab62833

Another test passes!

view details

Gunhan Gulsoy

commit sha d6852f0d93e82efbe477308d4aaa074071288d3d

Only overlapping io failing now! will probably need to implement threads to handle each std channel for that.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 30fb80d732aa5601ef1a328a434deffc027e764c

Make stdin Stdout test pass.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 532d50cd660cf0915ebc7d91534873a8d953d155

Make stdin test pass, and create a helper program.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 31128d38d7e5f4218fcea55da247fc62b7ada5b6

Add a helper binary to replace echo in test.

view details

Gunhan Gulsoy

commit sha b9108045c1a1c125663c6abfbe2b795f7c67c698

Repurpose test_echo_... and make 3rd test pass.

view details

Gunhan Gulsoy

commit sha 76fab05acc1583e5473b0f7832ed844666d8fc13

4th test passing.

view details

Gunhan Gulsoy

commit sha 4717e74b1de6a77ae8e27dd694d431b326b61c5e

Add a program to test stderr.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 87bf4e8a56f7a62067de124272f145238998e8ce

Add test_noop as a data dependency to the subprocess_test.

view details

Gunhan Gulsoy

commit sha eadf444796451ac8b06e77537f18824af483f47d

Create a helper binary to run subprocess_test.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 5267818b09c0cc077bf733e3e00b6ab90947d7ea

Close child handles when subprocess is created.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 54ea5466d249cd0a45de0e50baccc1b61735b6e4

Increase the pipe buffer size.

view details

Gunhan Gulsoy

commit sha ae1e21a2c79702628f19b2addf596a00eb555033

Switch back to using anonymous pipes But it is still deadlocking...

view details

push time in a month

push event gunan/pipe

bmzhao

commit sha 69813412e53b2774c8a27bd0545a398b73003a9f

Fixed 2 issues: 1. Parent Process was printing entire char buffer, instead of char buffer up to the amount of characters read, producing garbage. 2. Parent process leaked the child stdout handle, causing indefinited blocking on ReadFile.

view details

Gunhan Gulsoy

commit sha 673c38f67c8b5c7f4489a383be3529de6ea0340d

Merge pull request #1 from bmzhao/master Fixed Garbage Printing & Indefinite Blocking on ReadFile Caused by Leaked Handle

view details

push time in a month

PR merged gunan/pipe

Fixed Garbage Printing & Indefinite Blocking on ReadFile Caused by Leaked Handle
  1. Parent Process was printing the entire char buffer, instead of the buffer up to the number of characters read, producing garbage.
  2. Parent process leaked the child stdout handle, causing indefinite blocking on ReadFile.

See https://stackoverflow.com/a/54416516, specifically

Yes, it should be added that output_pipes[1] must be closed after CreateProcess() call and before any ReadFile() call. 

in the expanded answer comments.

+8 -7

0 comment

1 changed file

bmzhao

pr closed time in a month

create branch gunan/pipe

branch : master

created branch time in a month

created repository gunan/pipe

Reproduce the pipe not unblocking.

created time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha ba65605fa374f327f1dbaac77782e5f2d2f0099c

Now stopped deadlocking, but passing no output test mistakenly.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 3c0eaa9dd805ac45812735f7148b9ba59a7f829e

Do not check exit code when process creation fails. There is no process to check the exit code of! Duh!

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 37ff8a27929bed236f60be847a9aa11d553efb6f

Make the first test pass.

view details

Gunhan Gulsoy

commit sha 891546caba9fe697437ad0734cac33d334ad1c02

Edit communicate function to use the new pipes. Simplify it. And the test is now broken.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 4990fb6a8233d0df3faa44253f6a326c6c456118

Change default pipe mode.

view details

Gunhan Gulsoy

commit sha 883b87fa72f310624588f7bb5e7ca6eb97fd92d8

Add a noop program to use in tests.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 8165d6ed276243cc99e1e1e371798d418af61a9e

Refactor subprocess:start(windows) to use NamedPipes.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 952b0755738325419bb3a01255f8791870be04a1

Make subprocess_test buildable and passable.

view details

Gunhan Gulsoy

commit sha 917a847aee19fa1b46391fb3e0771074ba47a050

Remove main program.

view details

Gunhan Gulsoy

commit sha 4d747e54119fcd1d1aec9d64dac7e5a3b0c8b2b5

Remove visual studio stuff.

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 57fea4270bfb07ad8feae6f31b75e51b0df2a275

Create a bazel workspace, add a build file and create a test file.

view details

Gunhan Gulsoy

commit sha d44b49d49501681fd379a98b63b778a22615d3ae

Add a gitignore file

view details

push time in a month

PR closed tensorflow/docs

fixed typo in section Ubuntu 16.04 (CUDA 10); use cuda-repo-ubuntu1604_10.0

Labels: awaiting-technical-review, cla: yes, kokoro:force-run

This fix ensures that apt-get downloads and installs CUDA 10.0 (and not CUDA 10.1).

Patch summary:

In section "Ubuntu 16.04 (CUDA 10)";

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.1.243-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_10.1.243-1_amd64.deb

is replaced with;

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
+3 -3

1 comment

2 changed files

baxterai

pr closed time in a month

pull request comment tensorflow/docs

fixed typo in section Ubuntu 16.04 (CUDA 10); use cuda-repo-ubuntu1604_10.0

Thank you very much for your contribution. However, the change is indeed intentional and not a typo. TF now requires CUDA 10.1

baxterai

comment created time in a month

Pull request review comment tensorflow/docs

fixed typo in section Ubuntu 16.04 (CUDA 10); use cuda-repo-ubuntu1604_10.0

 complicates installation of the NVIDIA driver and is beyond the scope of these instructions
 # Add NVIDIA package repositories
 # Add HTTPS support for apt-key
 sudo apt-get install gnupg-curl
-wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.1.243-1_amd64.deb
-sudo dpkg -i cuda-repo-ubuntu1604_10.1.243-1_amd64.deb
+wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
+sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb

I think with 2.1 we went up to 10.1. Why do you want to downgrade this to 10.0 again?

baxterai

comment created time in a month

issue comment tensorflow/tensorflow

Tensorflow 2.0.0 cpu, import error: DLL load failed

One potential difference could be whether you are getting TF binaries through conda/anaconda. We are not building those, so I am not sure which version started to require AVX. @jjhelmus may know that.

When it comes to pip install tensorflow, it looks like 1.5 is where we switched: https://github.com/tensorflow/tensorflow/releases/tag/v1.5.1 So the reason 1.8 worked for you may be that you got it through a different source, or through anaconda?

To test TF 1.x, here is a simple example you can try: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/1_Introduction/helloworld.py
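For reference, the linked example boils down to roughly the following TF 1.x snippet:

# Minimal TF 1.x "hello world" (graph mode with an explicit Session).
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
with tf.Session() as sess:
    print(sess.run(hello))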

faltinl

comment created time in a month

issue comment tensorflow/tensorflow

Tensorflow 2.0.0 cpu, import error: DLL load failed

From what I can tell, this CPU does not have the AVX instruction set. That is the cause of your problem. You can build TF from source to use it; however, the prebuilt binaries will not work for you.

faltinl

comment created time in a month

issue comment tensorflow/tensorflow

Help! I have an issue when importing TF!

Could you share your CPU make and model?

Brayden1000

comment created time in a month

issue comment tensorflow/tensorflow

Tensorflow 2.0.0 cpu, import error: DLL load failed

What is your CPU make and model? Also I see that you installed through anaconda, adding @jjhelmus to see if there are any known issues there.

faltinl

comment created time in a month

PR closed tensorflow/tensorflow

add compatibility check badges to README

Labels: cla: yes, size:XS, stalled, stat:awaiting response

Hello google package maintainer, The badges being added to the README in this PR will indicate your compatibility with other google packages. This addresses a set of user bugs which have happened when a user depends on two Cloud libraries (or runtimes that bundle libraries), say A and B, which both depend on library C. If the two libraries require different versions of C, the users can run into issues both when they pip install the libraries, and when they deploy their code. Our compatibility server checks that all libraries we make including this one are self and pairwise compatible as well as not having any deprecated dependencies. The two badges will mark the build for your project green when the latest version available on PyPI and github HEAD respectively meet all compatibility checks with itself and all other libraries. The badge target will link to a details page that elaborates on the current status. This should help you fix issues pre-release, to avoid user surprises. For more information, please take a look at our project charter at go/python-cloud-dependencies-project-charter and the badging PRD https://docs.google.com/document/d/1GYRFrfUou2ssY71AtnLkc8Sg1SD4dxqN4GzlatGHHyI/edit?ts=5c6f031d

+1 -0

8 comments

1 changed file

ylil93

pr closed time in a month

pull request comment tensorflow/tensorflow

add compatibility check badges to README

Looks like we do not wish to add this badge. Closing the PR.

ylil93

comment created time in a month

pull request comment tensorflow/docs

Update GPU install Ubuntu 16.04

Great question. @chsigg @sanjoy which libnvinfer does TF use right now?

lamberta

comment created time in a month

issue comment tensorflow/tensorflow

can't install with python3.8

Until 2.2 is released, pip install tensorflow will not work with Python 3.8. Please see https://github.com/tensorflow/tensorflow/issues/33374#issuecomment-571074915
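A small guard you can drop into a setup script; the version cutoff reflects the situation at the time of this comment (prebuilt wheels for Python 3.5-3.7 only):

# Warn if the running interpreter has no prebuilt TensorFlow wheel yet.
import sys

if sys.version_info >= (3, 8):
    print("No prebuilt TensorFlow wheel for Python %d.%d yet; "
          "use Python 3.7 or wait for the 2.2 release."
          % sys.version_info[:2])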

amitport

comment created time in a month

issue closed tensorflow/tensorflow

Could not load dynamic library 'libnvinfer_plugin.so.6'

Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): wheel
  • TensorFlow version: 2.1.0
  • Python version: 3.7
  • Installed using virtualenv? pip? conda?: pip in virtualenv
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 10.1
  • GPU model and memory: Titan XP

Describe the problem

2020-01-16 20:21:32.912603: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6 

2020-01-16 20:21:32.912768: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6:  cannot open shared object file: No such file or directory 

2020-01-16 20:21:32.912782: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

Provide the exact sequence of commands / steps that you executed before running into the problem

import tensorflow

Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

cuda-10-1/unknown,now 10.1.243-1 amd64 [installed]
cuda-command-line-tools-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-compiler-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cudart-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cudart-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cufft-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cufft-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cuobjdump-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cupti-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-curand-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-curand-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cusolver-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cusolver-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cusparse-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-cusparse-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-demo-suite-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-documentation-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-driver-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-drivers/unknown,now 440.33.01-1 amd64 [installed,automatic]
cuda-gdb-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-gpu-library-advisor-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-libraries-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-libraries-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-license-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-license-10-2/unknown,now 10.2.89-1 amd64 [installed,automatic]
cuda-memcheck-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-misc-headers-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-npp-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-npp-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nsight-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nsight-compute-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nsight-systems-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvcc-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvdisasm-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvgraph-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvgraph-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvjpeg-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvjpeg-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvml-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvprof-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvprune-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvrtc-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvrtc-dev-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvtx-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-nvvp-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-repo-ubuntu1604/unknown,now 10.1.243-1 amd64 [installed,upgradable to: 10.2.89-1]
cuda-runtime-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-samples-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-sanitizer-api-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-toolkit-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-tools-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
cuda-visual-tools-10-1/unknown,now 10.1.243-1 amd64 [installed,automatic]
libcuda1-440/unknown,now 440.33.01-0ubuntu1 amd64 [installed,automatic]
libcudnn7/unknown,now 7.6.4.38-1+cuda10.1 amd64 [installed,upgradable to: 7.6.5.32-1+cuda10.2]
libcudnn7-dev/unknown,now 7.6.4.38-1+cuda10.1 amd64 [installed,upgradable to: 7.6.5.32-1+cuda10.2]
libnvinfer-dev/unknown,now 6.0.1-1+cuda10.1 amd64 [installed,upgradable to: 7.0.0-1+cuda10.2]
libnvinfer6/unknown,now 6.0.1-1+cuda10.1 amd64 [installed,upgradable to: 6.0.1-1+cuda10.2]

Note this doesn't occur on nightly

closed time in a month

mjlbach

issue comment tensorflow/tensorflow

Could not load dynamic library 'libnvinfer_plugin.so.6'

That is correct. If you install TensorRT, the problem should go away. But otherwise, TF should continue running.
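A minimal way to reproduce what the loader is doing, so you can confirm locally whether the TensorRT plugin library is visible, is a plain dlopen via ctypes:

# Mirror TF's dso_loader check: try to dlopen the TensorRT plugin library.
import ctypes

try:
    ctypes.CDLL("libnvinfer_plugin.so.6")
    print("TensorRT plugin library found")
except OSError:
    print("libnvinfer_plugin.so.6 not found; TF still runs, just without TensorRT")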

mjlbach

comment created time in a month

issue comment tensorflow/tensorflow

variable_ops.h missing from pip install

I definitely do not want any headers under core/kernels in the pip package. If there are libraries there we would like to use for custom kernels, they should be turned into libraries and moved outside core/kernels.

dirktheeng

comment created time in a month

issue closed tensorflow/tensorflow

Linking to both tensorflow and protobuf causes segmentation fault during static initializers

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 4.18.10-1rodete2-amd64 (Debian-derived)
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): nightly Jan 15, 2018 (protobuf built from HEAD Jan 15)
  • Python version: N/A
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): gcc 7.3.0
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A

Describe the current behavior: aborts on SIGSEGV

Describe the expected behavior: exits cleanly

Details

I want to create an application that calls the C API but can also parse protocol buffers on its own behalf. For that I want to link dynamically to tensorflow and statically to protobuf. When I do this, it seems like protobuf may be tricking libtensorflow.so into thinking that it has run some static initializers that it in fact has not run (on the static variables needed by its own internal copy of protobuf).

The segfault is only on Linux. Linking the same way on Windows works fine.

I have varied libtensorflow and protobuf versions, and it seems to happen with all of them. It also happens whether I choose static or dynamic linking for my binary's copy of protobuf.

I also tried building my own liba.so that itself statically links protobuf and then a binary that linked dynamically to "a" and statically to protobuf. This worked, which is pointing away from this being a purely protobuf issue.

Code to reproduce the issue

  • bash
c++ -o main \
  -L$TF_DIR/lib -I$TF_DIR/include \
  -L$PROTO_DIR/lib -I$PROTO_DIR/include \
  main.cc -l tensorflow -l protobuf

LD_LIBRARY_PATH=$TF_DIR/lib:$PROTO_DIR/lib ./main

Removing -lprotobuf from the above command will get rid of the segfault.

  • main.cc
int main(int argc, char** argv) {}

Other info / logs

Program received signal SIGSEGV, Segmentation fault. 0x00007fffed8f20b8 in tensorflow::kernel_factory::OpKernelRegistrar::InitInternal(tensorflow::KernelDef const*, absl::string_view, std::un ique_ptr<tensorflow::kernel_factory::OpKernelFactory, std::default_deletetensorflow::kernel_factory::OpKernelFactory >) () from /usr/local/google/home/mattharvey/no_backup/libtensorflow/lib/libtensorflow_framework.so (gdb) bt #0 0x00007fffed8f20b8 in tensorflow::kernel_factory::OpKernelRegistrar::InitInternal(tensorflow::KernelDef const*, absl::string_view, std ::unique_ptr<tensorflow::kernel_factory::OpKernelFactory, std::default_deletetensorflow::kernel_factory::OpKernelFactory >) () from /usr/local/google/home/mattharvey/no_backup/libtensorflow/lib/libtensorflow_framework.so #1 0x00007fffed88336a in tensorflow::kernel_factory::OpKernelRegistrar::OpKernelRegistrar(tensorflow::KernelDef const*, absl::string_view , tensorflow::OpKernel* ()(tensorflow::OpKernelConstruction)) () from /usr/local/google/home/mattharvey/no_backup/libtensorflow/lib/libtensorflow_framework.so #2 0x00007fffed85f806 in _GLOBAL__sub_I_dataset.cc () from /usr/local/google/home/mattharvey/no_backup/libtensorflow/lib/libtensorflow_framework.so #3 0x00007ffff7de88aa in call_init (l=<optimized out>, argc=argc@entry=1, argv=argv@entry=0x7fffffffdc68, env=env@entry=0x7fffffffdc78) at dl-init.c:72 #4 0x00007ffff7de89bb in call_init (env=0x7fffffffdc78, argv=0x7fffffffdc68, argc=1, l=<optimized out>) at dl-init.c:30 #5 _dl_init (main_map=0x7ffff7ffe170, argc=1, argv=0x7fffffffdc68, env=0x7fffffffdc78) at dl-init.c:120 #6 0x00007ffff7dd9c5a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2 #7 0x0000000000000001 in ?? () #8 0x00007fffffffdf2e in ?? () #9 0x0000000000000000 in ?? ()

0x00007fffed8f20a0 <+80>: mov 0x50(%r15),%rax 0x00007fffed8f20a4 <+84>: lea -0xa0(%rbp),%rbx 0x00007fffed8f20ab <+91>: mov %rbx,%rdi 0x00007fffed8f20ae <+94>: mov (%rax),%r8 0x00007fffed8f20b1 <+97>: mov 0x48(%r15),%rax 0x00007fffed8f20b5 <+101>: mov (%rax),%rsi => 0x00007fffed8f20b8 <+104>: mov -0x18(%r8),%r9

How did -0x18(%r8) get illegal?

(gdb) info register r8 r8 0x0 0

-0x18 is certainly illegal. Where did it come from? 0x50(%r15) if we trace through the above.

(gdb) info register r15 r15 0x555555768d10 93824994413840

(gdb) x/2 0x555555768d60 0x555555768d60: 0xee2c0bc0 0x00007fff

(gdb) x/2 0x00007fffee2c0bc0 0x7fffee2c0bc0 google::protobuf::internal::fixed_address_empty_string: 0x00000000 0x00000000

... the 0x0 that ended up in r8.

Zoom out to find lots of stuff uninitialized:

(gdb) x/64x 0x7fffee4ddb00 0x7fffee4ddb00 google::protobuf::_DoubleValue_default_instance_: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb10 google::protobuf::_DoubleValue_default_instance_+16: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb20 <_ZStL8__ioinit>: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb30 <_ZStL8__ioinit>: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb40 google::protobuf::internal::RepeatedPrimitiveDefaults::default_instance()::instance: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb50 <guard variable for google::protobuf::internal::RepeatedStringTypeTraits::GetDefaultRepeatedField()::instance>: 0x000000000x00000000 0x00000000 0x00000000 0x7fffee4ddb60 <guard variable for google::protobuf::internal::(anonymous namespace)::Register(google::protobuf::MessageLite const*, int, google::protobuf::internal::ExtensionInfo)::local_static_registry>: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb70 <_ZStL8__ioinit>: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb80 google::protobuf::internal::InitSCCImpl(google::protobuf::internal::SCCInfoBase*)::mu: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddb90 google::protobuf::internal::InitSCCImpl(google::protobuf::internal::SCCInfoBase*)::mu+16: 0x00000000 0x000000000x00000000 0x00000000 0x7fffee4ddba0 google::protobuf::internal::InitSCCImpl(google::protobuf::internal::SCCInfoBase*)::mu+32: 0x00000000 0x000000000x00000000 0x00000000 0x7fffee4ddbb0 <guard variable for google::protobuf::internal::InitSCCImpl(google::protobuf::internal::SCCInfoBase*)::runner>: 0x000000000x00000000 0x00000000 0x00000000 0x7fffee4ddbc0 google::protobuf::internal::fixed_address_empty_string: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddbd0 google::protobuf::internal::implicit_weak_message_default_instance: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddbe0 google::protobuf::internal::implicit_weak_message_default_instance+16: 0x00000000 0x00000000 0x00000000 0x00000000 0x7fffee4ddbf0 google::protobuf::ShutdownProtobufLibrary()::is_shutdown: 0x00000000 0x00000000 0x00000000 0x00000000

closed time in a month

matth79

issue comment tensorflow/tensorflow

Linking to both tensorflow and protobuf causes segmentation fault during static initializers

While I do not want to close this issue, as @allenlavoie wrote in https://github.com/tensorflow/tensorflow/issues/24976#issuecomment-471760777 , I am not sure what we can do. TF is working on the slow path to hide all protobuf symbols from its API surface. Even then static initializers will be executed twice. I am not sure what will happen, as I am not sure how protobuf uses them.

So, unfortunately I can only offer https://github.com/tensorflow/tensorflow/issues/24976#issuecomment-471760777 , and we should close this as "Infeasible".

matth79

comment created time in a month

pull request comment tensorflow/tensorflow

[s3_file_system] set sync_needed_ as false after Sync()

@mrry may have some context, please redirect if I am wrong!

ziliangpeng

comment created time in a month

issue closed tensorflow/tensorflow

request to tag the latest fully qualified commit on the master branch

It would be very helpful to know the "latest fully qualified" commit on the master branch (i.e. the commit for which all internal TF checks have passed). The tip of the master branch is not that (i.e. the "latest fully qualified") commit, because we sometimes see commits/PRs getting rolled back because they failed the internal TF checks.

Having a tag/pointer to the "latest fully qualified" commit would be helpful to us (and I suspect others in a similar boat to us), in at least a couple of ways:

  1. We currently have a nightly job for the ROCm Community Supported Build (CSB), which uses the tip of the master. If there was a "bad" commit (i.e. a commit that will, but has not yet been, rolled back due to internal check failure) since the last CSB run, then it has the potential to fail the ROCm CSB as well. When the ROCm CSB fails, we need to go through the process of triaging the failures, and ensuring that the failures were not regressions caused by ROCm.
  • If the nightly job uses the "latest fully qualified" commit, instead of the tip of master, then we could immediately rule out "bad" upstream commits as the cause, saving us time and avoiding unnecessary ROCm CSB build failures.

  • In addition to the nightly job for the ROCm CSB, we will soon be adding nightly jobs for running a broader suite of tests for the ROCm build, and running them on the "latest fully qualified" commit instead of the tip would better serve our purpose (of detecting regressions in ROCm functionality).

  2. We maintain a fork of the TF repo, and periodically (weekly) sync with the master branch in the upstream repo. We use the tip of master to sync, but using the "latest fully qualified" commit instead would be a better option.

thanks

closed time in a month

deven-amd

issue comment tensorflow/tensorflow

request to tag the latest fully qualified commit on the master branch

Again, all tests under the tensorflow repository are only run for all configurations once every night. What I said above is still valid for all tests under tensorflow. So from my point of view, it is equally infeasible.

I guess this is in part our general development model. We live and develop at head; there are no "stable" tags except for release branches/tags.

deven-amd

comment created time in a month

issue comment tensorflow/tensorflow

can't install with python3.8

It was autoclosed by a commit. We are still aware that 3.8 binaries are not available.

amitport

comment created time in a month


push event gunan/subprocess

Gunhan Gulsoy

commit sha 83e6baac29311b8564662d34a88044ecf5957d7f

Some more stuff working, except now the pipes are just blocked...

view details

push time in a month

pull request comment tensorflow/tensorflow

Change collection.Sequence to collection.abc.Sequence in keras recurrent.py 2

I understand, but for a while we still have to keep changes compatible with Python 2.7. Maybe we can have the line switch between collections.abc and collections based on the Python version?
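A minimal sketch of that suggestion, using an import fallback rather than an explicit version check so it stays compatible with both interpreters:

# Prefer collections.abc (Python 3.3+), fall back for Python 2.7.
try:
    from collections.abc import Sequence
except ImportError:
    from collections import Sequence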

ulf1

comment created time in a month

pull request comment tensorflow/tensorflow

Change collection.Sequence to collection.abc.Sequence in keras recurrent.py 2

Unfortunately, we have not deprecated/migrated all the internal workflows yet. So we cannot accept any changes that would break python2 yet.

ulf1

comment created time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 1f453e0b77270d243f9ac4fd93b1521a049dfde6

Finalize subprocess class

view details

Gunhan Gulsoy

commit sha 767cd8141181077dd07837d6a901cbc3195e73bb

non-working program

view details

push time in a month

push event gunan/subprocess

Gunhan Gulsoy

commit sha 3d7cce9dafa9c04ac8e048f591899bd65c1319f6

Somehow fixed!!!

view details

push time in a month

create branch gunan/subprocess

branch : master

created branch time in a month

created repository gunan/subprocess

created time in a month

issue comment tensorflow/tensorflow

Building Tensorflow 1.13 for python 3.8 fails on nullptr conversion.

1.15 should be better, as there are bugs we have fixed after 1.14.

odinsbane

comment created time in a month

issue comment tensorflow/tensorflow

ARM6/RPI: Executor failed to create kernel _FusedConv2D

@petewarden may have a comment on this.

sdeoras

comment created time in a month

issue closed tensorflow/tensorflow

Building Tensorflow 1.13 for python 3.8 fails on nullptr conversion.

System information

  • OS Platform and Distribution: Centos 7
  • TensorFlow installed from : source
  • TensorFlow version: 1.13.2
  • Python version: 3.8.0
  • Bazel version : 0.21.0
  • GCC/Compiler version : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
  • CUDA/cuDNN version: 9.1/7.0
  • GPU model and memory: 1080ti

Describe the problem

When attempting to build the tensorflow branch r1.13 targeting python 3.8 the build fails with the following error:

ndarray_tensor_bridge.cc:108:1: error: cannot convert 'std::nullptr_t' to 'Py_ssize_t {aka long int}' in initialization };

This error occurs in three files that I found: bfloat16.cc, ndarray_tensor_bridge.cc, pywrap_tfe_src.cc

This appears to be the same problem fixed in issue #33543

Provide the exact sequence of commands / steps that you executed before running into the problem

I am willing to provide more details, but this appears to be the exact same issue. I am including the git diff from the version that compiled and works for me.

diff --git a/tensorflow/python/eager/pywrap_tfe_src.cc b/tensorflow/python/eager/pywrap_tfe_src.cc
index 9ce500b..726c4c5 100644
--- a/tensorflow/python/eager/pywrap_tfe_src.cc
+++ b/tensorflow/python/eager/pywrap_tfe_src.cc
@@ -1216,7 +1216,7 @@ static PyTypeObject TFE_Py_Tape_Type = {
     sizeof(TFE_Py_Tape),                          /* tp_basicsize */
     0,                                            /* tp_itemsize */
     &TFE_Py_Tape_Delete,                          /* tp_dealloc */
-    nullptr,                                      /* tp_print */
+    NULL,                                         /* tp_print */
     nullptr,                                      /* tp_getattr */
     nullptr,                                      /* tp_setattr */
     nullptr,                                      /* tp_reserved */
diff --git a/tensorflow/python/lib/core/bfloat16.cc b/tensorflow/python/lib/core/bfloat16.cc
index fde3a83..e0da0f4 100644
--- a/tensorflow/python/lib/core/bfloat16.cc
+++ b/tensorflow/python/lib/core/bfloat16.cc
@@ -317,7 +317,7 @@ PyTypeObject PyBfloat16_Type = {
     sizeof(PyBfloat16),                        // tp_basicsize
     0,                                         // tp_itemsize
     nullptr,                                   // tp_dealloc
-    nullptr,                                   // tp_print
+    NULL,                                      // tp_print
     nullptr,                                   // tp_getattr
     nullptr,                                   // tp_setattr
     nullptr,                                   // tp_compare / tp_reserved
diff --git a/tensorflow/python/lib/core/ndarray_tensor_bridge.cc b/tensorflow/python/lib/core/ndarray_tensor_bridge.cc
index 0d58385..43ab92c 100644
--- a/tensorflow/python/lib/core/ndarray_tensor_bridge.cc
+++ b/tensorflow/python/lib/core/ndarray_tensor_bridge.cc
@@ -86,7 +86,7 @@ PyTypeObject TensorReleaserType = {
     0,                                /* tp_itemsize */
     /* methods */
     TensorReleaser_dealloc,      /* tp_dealloc */
-    nullptr,                     /* tp_print */
+    NULL,                        /* tp_print */
     nullptr,                     /* tp_getattr */
     nullptr,                     /* tp_setattr */
     nullptr,                     /* tp_compare */

closed time in a month

odinsbane

issue comment tensorflow/tensorflow

Building Tensorflow 1.13 for python 3.8 fails on nullptr conversion.

As the old branches are closed, we will not backport the compilation fix to these branches. Only security issues will be patched and released for old branches.

odinsbane

comment created time in a month

issue comment tensorflow/tensorflow

[tf2.0.0] Fails to build on Python 3.8 - suggested fix (change nullptr to 0 in source)

See the below post for the plans. https://github.com/tensorflow/tensorflow/issues/33374#issuecomment-571074915

dbonner

comment created time in a month

issue closed tensorflow/tensorflow

Windows Bazel build fails when using TF_SYSTEM_LIBS

System information

  • OS Platform and Distribution: Windows 10 Pro
  • TensorFlow installed from: source
  • TensorFlow version: 1.11
  • Python version: 3.5.5
  • Installed using: conda
  • Bazel version: 0.15.1
  • GCC/Compiler version: Visual Studio 14.0
  • CUDA/cuDNN version: 9.0
  • GPU model and memory: NVIDIA GeForce GTX 960

Describe the problem

I can build Tensorflow 1.11 normally using Bazel on Windows, but if I set any TF_SYSTEM_LIBS, then Bazel can't find the necessary headers.

Via setting some options and messing with bazel's copt commandline argument and the CC_OPT_FLAGS environment variable, I was once able to get bazel to find zlib.h, but now I can't figure out what I did to make that happen. When that happened, I got a different error, something about zlib.h being an undeclared dependency.

Provide the exact sequence of commands / steps that you executed before running into the problem

steps that I am running in Git Bash:

export PYTHON_BIN_PATH=C:/Users/nstier/AppData/Local/Continuum/anaconda3/python.exe
export PYTHON_LIB_PATH=C:/Users/nstier/AppData/Local/Continuum/anaconda3/Lib

export TF_CUDA_VERSION=9.0
export TF_CUDNN_VERSION=7
export CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"
export CUDNN_INSTALL_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v9.0"

export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
export TF_CUDA_CLANG=0

# zlib/ is symlinked into the tensorflow directory to avoid the error "include path references outside of execution root...." 
export CC_OPT_FLAGS="/arch:AVX /I/c/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include"

export TF_CUDA_COMPUTE_CAPABILITIES=6.1

export TF_NEED_CUDA=1
export TF_NEED_JEMALLOC=1
export TF_ENABLE_XLA=0
export TF_NEED_OPENCL=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_TENSORRT=0
export TF_NEED_NGRAPH=0

export TF_NEED_GCP=0
export TF_NEED_AWS=0

export TF_NEED_KAFKA=0
export TF_NEED_HDFS=0
export TF_NEED_GDR=0
export TF_NEED_VERBS=0
export TF_NEED_MPI=0

export TF_SET_ANDROID_WORKSPACE=0
export TMP=/tmp
export TF_SYSTEM_LIBS=zlib_archive
export TF_OVERRIDE_EIGEN_STRONG_INLINE=1

bazel build \
  --config=opt \
  --config=cuda \
  --config=monolithic \
  --define=no_tensorflow_py_deps=true \
  --copt=/I/c/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include \
  --cxxopt=/I/c/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include \
   //tensorflow:libtensorflow_cc.so \
  //tensorflow/tools/pip_package:build_pip_package
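
As an aside (not part of the original report): the /I flags above pass MSYS-style paths (/c/Users/...), which cl.exe does not translate, so the compiler most likely never sees a usable include directory. One possible variant of the same command using a Windows-native path for the same illustrative zlib location:

bazel build \
  --config=opt \
  --config=cuda \
  --config=monolithic \
  --define=no_tensorflow_py_deps=true \
  --copt=/IC:/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include \
  --cxxopt=/IC:/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include \
  //tensorflow:libtensorflow_cc.so \
  //tensorflow/tools/pip_package:build_pip_package

Depending on how Git Bash rewrites arguments, the flag may also need quoting or exclusion from MSYS path conversion.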

Any other info / logs

build log (trimmed; the omitted portion is mostly repeated Bazel warnings about absl include paths, grpc nanopb sources, and deprecated tf.contrib targets):

Starting local Bazel server and connecting to it...
WARNING: The following configs were expanded more than once: [cuda, monolithic]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
DEBUG: C:/users/nstier/_bazel_nstier/t2qf7f76/external/bazel_tools/tools/cpp/lib_cc_configure.bzl:115:5: Auto-Configuration Warning: 'BAZEL_VC' is not set, start looking for the latest Visual C++ installed.
DEBUG: C:/users/nstier/_bazel_nstier/t2qf7f76/external/bazel_tools/tools/cpp/lib_cc_configure.bzl:115:5: Auto-Configuration Warning: Visual C++ build tools found at C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC
INFO: Analysed 2 targets (297 packages loaded).
INFO: Found 2 targets...
ERROR: C:/users/nstier/_bazel_nstier/t2qf7f76/external/grpc/BUILD:1443:1: C++ compilation of rule '@grpc//:grpc_transport_chttp2' failed (Exit 2): msvc_wrapper_for_nvcc.bat failed: error executing command
  SET INCLUDE=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE;...
  SET TF_SYSTEM_LIBS=zlib_archive
  (remaining environment dump omitted)
  external/local_config_cuda/crosstool/windows/msvc_wrapper_for_nvcc.bat /nologo ... /Iexternal/zlib_archive /Ibazel-out/x64_windows-opt/genfiles/external/zlib_archive ... /arch:AVX /I/c/Users/nstier/tensorflow/ExternalPackages/zlib-1.2.11-t1-VC15.8.1/include ... /c external/grpc/src/core/ext/transport/chttp2/transport/hpack_parser.cc
external/grpc\src/core/lib/compression/stream_compression.h(27): fatal error C1083: Cannot open include file: 'zlib.h': No such file or directory
INFO: Elapsed time: 33.850s, Critical Path: 3.71s
INFO: 0 processes.
FAILED: Build did NOT complete successfully

Any ideas?

This line from the logs looks interesting: "SET INCLUDE=...". I'm not sure how to get the directories I want into that list, and even if I manage to, I might end up with the "undeclared dependency" error again.

closed time in a month

noahstier

issue commenttensorflow/tensorflow

Windows Bazel build fails when using TF_SYSTEM_LIBS

I will close the issue. The last two comments are unrelated to the original issue, so please file new issues for them.

There may already be issues for these, but please keep in mind that I have 80+ issues assigned to me, and I am just back from holiday break. So it may take some time for me to get to all the issues.

noahstier

comment created time in a month

issue commenttensorflow/tensorflow

Make Tensorflow 2.1 available through conda

@jjhelmus may be able to help. We can redirect conda availability issues to Jonathan, I think?

cossio

comment created time in 2 months

issue commenttensorflow/tensorflow

can't install with python3.8

As others said, in the meantime it would go a long way if you could fix the docs: right now it can be hard for newcomers to figure out why pip install tensorflow fails, since the docs state that the Python requirement is Python 3.4 or later.

Great suggestion, thank you for the feedback. @lamberta Can we update our docs to say that Python 3.8 is not supported yet?

amitport

comment created time in 2 months

issue commenttensorflow/tensorflow

tensorflow 2.0.0 crashes with protobuf 3.10.1 on macOS

Are you installing TF through MacPorts or pip? Unless MacPorts just repackages the binaries we build, it may be built with different flags, causing all kinds of ABI issues. The same goes for protobuf. If these are packages not built by us, I do not think we can do much to help.
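
As a hedged illustration (not from this thread), one quick way to see where each package actually came from is to compare what Python imports against what pip and MacPorts report:

python -c "import tensorflow as tf, google.protobuf; print(tf.__version__, google.protobuf.__version__)"
pip show tensorflow protobuf
port installed | grep -i -E 'tensorflow|protobuf'

If the copies Python imports are not the ones pip installed, mixing binaries built with different flags could explain the crash.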

essandess

comment created time in 2 months

issue commenttensorflow/tensorflow

request to tag the latest fully qualified commit on the master branch

It is more complicated than that. Sometimes, even if a commit passes all our internal checks, we see bugs later that cause issues for production TF users, or cases that only show up in very specific scenarios. We have had rollbacks up to 7 days after a commit was merged. So, in my opinion, a commit we can mark as "there will be no rollbacks past this point" is infeasible.

Even if we tried to write a marker that says "this commit has passed every single TF build", only releases have achieved that so far. On the master branch, we run every release test every night; however, there has never been a moment in the history of TF when the master branch passed every single nightly (release) test.

deven-amd

comment created time in 2 months

issue commenttensorflow/tensorflow

can't install with python3.8

As we are a large project, we have to wait until all our dependencies are Python 3.8 compatible. This also prevented us from trying the beta release you mention. The grpcio project only released a compatible package in mid-December, and only then were we able to make sure all our build issues were resolved. So, if you like, you can build TF from sources for Python 3.8 at the moment.

Right now, most of the team is on vacation. As we slowly come back from the holidays, we will set up nightly 3.8 builds sometime in January. The official release with Python 3.8 support is planned for March, with the 2.2 release; 2.1 was cut before all Python 3.8 issues were resolved.
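
For anyone who does not want to wait, here is a rough sketch of the usual from-source flow (the standard pip-package build steps, with an illustrative output directory), run inside a Python 3.8 environment:

pip install -U pip six numpy wheel setuptools
./configure          # point PYTHON_BIN_PATH at the Python 3.8 interpreter
bazel build //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl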

On Mon, Jan 6, 2020, 12:34 PM Alexander Grigoryev notifications@github.com wrote:

Python 3.8.0b1 Release Date: June 4, 2019 I guess Top5 most popular Github project could do better. Please update this issue with the progress on scale from 0 to 100 with step size 10.


amitport

comment created time in 2 months

push eventgunan/bazel_test

Günhan Gülsoy

commit sha 2889835454aee996e22d56d2e451b00ffa72b018

Add additional build files and fix the issue that requires multiple downloads.

view details

Günhan Gülsoy

commit sha ab301d7f38ca908876677f41150fe8b77349e2e3

Add the git repository counterpart for llvm.

view details

push time in 2 months

issue commentbazelbuild/bazel

Workspace dependency downloaded 4 times

I think I got it! In my repository, I added just one "build_file" attribute to my workspace rule. However, if I instead set it using our "additional_build_files" attribute, I get multiple llvm extractions: llvm_vacuum.profile.gz

Here is the implementation of it: https://github.com/tensorflow/tensorflow/blob/master/third_party/repo.bzl#L121

However, I do not get why bazel does this multiple times. Is this an artifact of ctx.symlink? Do you think this is a bug?
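
For illustration only, here is a simplified sketch of the pattern in Starlark (not the actual repo.bzl implementation; the rule and attribute names are made up): a repository rule that downloads an archive and then links in a main BUILD file plus any additional build files from a string dict.

def _example_repo_impl(ctx):
    # Fetch and unpack the archive into the external repository root.
    ctx.download_and_extract(ctx.attr.urls, stripPrefix = ctx.attr.strip_prefix)
    # Link the main BUILD file into the repository.
    ctx.symlink(ctx.path(ctx.attr.build_file), "BUILD.bazel")
    # Link each additional build file; keys are labels in the main
    # workspace, values are destination paths inside the repository.
    for src, dest in ctx.attr.additional_build_files.items():
        ctx.symlink(ctx.path(Label(src)), dest)

example_repo = repository_rule(
    implementation = _example_repo_impl,
    attrs = {
        "urls": attr.string_list(),
        "strip_prefix": attr.string(),
        "build_file": attr.label(),
        "additional_build_files": attr.string_dict(),
    },
)

One known source of repeated work in repository rules is that Bazel restarts the implementation function whenever it resolves a label it has not seen before, so resolving every ctx.path(Label(...)) before calling download_and_extract is the usual way to avoid re-running the fetch.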

gunan

comment created time in 2 months

issue commentbazelbuild/bazel

Workspace dependency downloaded 4 times

I have just tried another thing: created a new repository with only the llvm workspace dependency. Looks like this one only downloads llvm once: https://github.com/gunan/bazel_test

llvm.profile.gz

gunan

comment created time in 2 months

create branchgunan/bazel_test

branch : master

created branch time in 2 months

created repositorygunan/bazel_test

created time in 2 months

issue commentbazelbuild/bazel

Workspace dependency downloaded 4 times

Here are the profiler outputs on Linux and Windows.

linux_simplest_case.profile.gz windows_build_more_targets.profile.gz

gunan

comment created time in 2 months

issue openedbazelbuild/bazel

Workspace dependency downloaded 4 times

#10071

Description of the problem / feature request:

When building tensorflow, looking at the JSON profile, it looks like the LLVM dependency is downloaded and extracted 4 times.

Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.

Run the following commands:

git clone http://github.com/tensorflow/tensorflow

Unfortunately, there may be some dependencies that need to be installed at this point. Then:

bazel build --experimental_generate_json_trace_profile --experimental_profile_cpu_usage --profile=~/llvm.profile.gz --nobuild @llvm-project//llvm:FileCheck

What operating system are you running Bazel on?

Reproduced on Linux; suspected to reproduce on Windows, too.

What's the output of bazel info release?

Downloaded and installed the 1.2.1 release. Output: release 1.2.1

What's the output of git remote get-url origin ; git rev-parse master ; git rev-parse HEAD ?

$ git remote get-url origin ; git rev-parse master ; git rev-parse HEAD
https://github.com/tensorflow/tensorflow
6c902370140cb4b5c9ae69f440e16e5ace55b828
6c902370140cb4b5c9ae69f440e16e5ace55b828

Have you found anything relevant by searching the web?

No, but there is an internal discussion thread. @laszlocsomor recommended that I create a GitHub issue and point him and @aehlig to it.

Any other information, logs, or outputs that you want to share?

I can attach profiler outputs on request.

created time in 2 months

issue commenttensorflow/tensorflow

error C2280: 'tensorflow::FunctionLibraryDefinition &tensorflow::FunctionLibraryDefinition::operator =(const tensorflow::FunctionLibraryDefinition &)': attempting to reference a deleted function

Thank you very much for debugging! On our side, TF recently moved the official builds to MSVC 2019, which may be why we missed the issue.

We try to avoid the STL as much as possible for various reasons: some performance, some known issues with interoperability with Eigen, etc. So using std::optional may not be OK for us. We can look into patching absl, as it looks like MSVC 2017 users are also reporting issues.

mayadav1

comment created time in 2 months

issue commenttensorflow/tensorflow

-D_GLIBCXX_USE_CXX11_ABI=1 increases a lot RAM usage

@r4nt may have some input on this; any ideas? @maingoh Could you confirm whether this is still the same at head, or with a newer release?
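
For reference, an illustrative command (not taken from this thread) showing how the flag is usually injected into a from-source build:

bazel build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=1" //tensorflow/tools/pip_package:build_pip_package

The define only selects between the old and new libstdc++ layouts for types like std::string, and anything compiled against that build (custom ops, client C++ code) has to use the same value, otherwise the std:: symbols will not link.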

maingoh

comment created time in 2 months

pull request commenttensorflow/tensorflow

[ROCm] r1.15 rccl upstream patch

Actually, remote builds do not care about which branch we are running from. All remote build parameters, docker containers and the compiler options are baked into the branch, under third_party/toolchains/preconfig folder.

jeffdaily

comment created time in 2 months
