Build failure: missing dependency declarations (LLVM)

With the current trunk at 8b87c1a09bf156ca9a42d9f72fad07da62100318 (Jun 29), even after a bazel clean, I see:

$ CC=clang CXX=clang++ bazel build --per_file_copt=llvm-project@-UNDEBUG --linkopt="-fuse-ld=lld" //tensorflow/compiler/mlir/xla/tests:all
...
INFO: Found 1 target and 40 test targets...
ERROR: /ws/2cca323f33485b8e970325d856ce9b72/external/llvm-project/llvm/BUILD:665:1: undeclared inclusion(s) in rule '@llvm-project//llvm:count':
this rule is missing dependency declarations for the following files included by 'external/llvm-project/llvm/utils/count/count.c':
  '/usr/lib64/clang/10.0.0/include/stddef.h'
  '/usr/lib64/clang/10.0.0/include/stdarg.h'
INFO: Elapsed time: 0.140s, Critical Path: 0.02s
INFO: 0 processes.
$ clang --version
clang version 10.0.0 (Fedora 10.0.0-2.fc32)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
$ bazel --version
bazel 3.1.0

OS: Fedora 32, x86-64

Answer from jpienaar:

Hey Uday,

I think in this case the workaround is to check the git history and check out a commit immediately before an LLVM integrate. We are working on fixing these transient errors.
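
For example, a minimal sketch of that workaround (the "Integrate LLVM" commit-message wording and the <integrate-commit> placeholder below are assumptions for illustration, not taken from this thread):

$ git log --oneline -n 5 --grep='Integrate LLVM'   # find recent LLVM integrate commits in the TF history
$ git checkout <integrate-commit>^                 # check out the parent, i.e. the commit immediately before the integrate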

Thanks - but unfortunately, the build-time effort (given the amount of rebuilding necessary when switching commits) would make this exercise prohibitive for those without infinite build infrastructure! :-)

I don't see how Mihai's suggestion increases effort: instead of syncing to an arbitrary change, sync to a specific one. That doesn't change how much you build, just when. And the suggestion is to sync at a known, stabler time until we do a bit more refactoring of the export process.

Either one doesn't sync as often and doesn't incur rebuilds, or one does and does. Normally one syncs because there is a reason (e.g., changes in upstream projects), but in those cases one will naturally incur build time (e.g., you are pulling new code). One can also change the workspace to a local_repository for even more flexibility (e.g., not being constrained by which revs were chosen for a given project), and that could enable more reuse even as you sync files: you then have the option of which ones to sync (you can just update ~n files in one directory rather than doing a full rev bump). See the sketch below.
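
For instance, something along these lines (a sketch only: --override_repository is a standard Bazel flag that has the same effect as a local_repository in the WORKSPACE without editing it, but whether a plain local llvm-project checkout carries the BUILD files the TF build expects is an assumption, and the local path is a placeholder):

$ CC=clang CXX=clang++ bazel build \
    --override_repository=llvm-project=/path/to/llvm-project \
    --linkopt="-fuse-ld=lld" \
    //tensorflow/compiler/mlir/xla/tests:all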

Currently it is ~eventual consistency and we are working to make it more atomic.

As to this report: you are using an unsupported flag (with respect to TF builds), and that is what is causing this error. Building as normal

$ git checkout -b includes 8b87c1a09bf156ca9a42d9f72fad07da62100318
$ CC=clang CXX=clang++ bazel build --linkopt="-fuse-ld=lld" //tensorflow/compiler/mlir/xla/tests:all

works for me.
