If you are wondering where the data on this site comes from: GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.
Michael Coleman (michaelkarlcoleman), University of Oregon, Eugene, Oregon, USA. Grumpy Linux and Python hacker.

michaelkarlcoleman/subterfugue 3

subterfugue is/was a tool for doing ptrace-based system call interposition (not currently maintained)

michaelkarlcoleman/ssssh 2

execute shell commands on or copy files to/from a set of Linux hosts

michaelkarlcoleman/sequest-rally 1

parallel scheduler to run SEQUEST (adapted from greylag)

michaelkarlcoleman/BRAKER 0

BRAKER is a pipeline for fully automated prediction of protein coding genes with GeneMark-ES/ET and AUGUSTUS in novel eukaryotic genomes

michaelkarlcoleman/dotfiles 0

my dotfiles (useful to me, but not terribly interesting)

michaelkarlcoleman/greylag 0

Tandem mass spectrum peptide identification and validation software, for single host or large cluster. Hybrid Python/C++.

michaelkarlcoleman/quarterstick 0

This is not the repo you are looking for.

michaelkarlcoleman/racsml 0

Materials for AI Workshops

michaelkarlcoleman/singularity-easybuild 0

test repo to run EasyBuild inside a singularity container

michaelkarlcoleman/sonfilade 0

Automatically exported from

issue comment conda/conda

conda extremely slow

I'm not "real folks" either, but I think the fundamental issue is that this is a constraint satisfaction problem, and in general, those can take an exponential amount of time to run. (And indeed, running time can vary wildly depending on seemingly insignificant changes in the input.) This implies that multi-threading isn't likely to be much help--one tenth of forever is still forever.

I've hit this occasionally, and what has worked is trying some of the advice on the conda pages. In particular, it helps to prune the set of possibilities being considered. Perhaps you can provide some hints on what you already "know" you want. For example, you might "know" that you prefer a setup based on Python 3.6 or later, even if some theoretical combination of packages based on Python 3.1 (or 2.6) might look better to the algorithm. Add some constraints and see if it helps.
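A toy illustration of why pruning helps (this is nothing like conda's real solver, and the package names, versions, and compatibility rule are all made up): a brute-force search examines far fewer candidates once the user's "hint" shrinks one candidate list up front.

```python
from itertools import product

def first_valid(candidates, compatible):
    """Brute force: try combinations in order, counting how many are examined."""
    examined = 0
    for combo in product(*candidates.values()):
        examined += 1
        if compatible(*combo):
            return combo, examined
    return None, examined

# Made-up packages and versions; plain string comparison happens to
# order these particular version strings correctly.
full = {
    "python": ["2.6", "3.1", "3.6", "3.9"],
    "numpy":  ["1.16", "1.19", "1.21"],
    "scipy":  ["1.2", "1.5", "1.7"],
}
compat = lambda py, np, sp: py >= "3.6" and np >= "1.19" and sp >= "1.5"

print(first_valid(full, compat)[1])    # 23 combinations examined

pinned = dict(full, python=["3.6", "3.9"])  # the user's "python>=3.6" hint
print(first_valid(pinned, compat)[1])  # 5 combinations examined
```

In the real solver the search is vastly larger and the constraints interact, which is exactly why small hints can change the running time so dramatically.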


comment created time in 4 days

issue opened spack/spack

Installation issue: amber

Steps to reproduce the issue

spack install amber@20 %intel +cuda cuda_arch=37 +x11 ^intel-mpi ^cuda@10.2.89

Information on your system

  • Spack: 0.16.2-4201-eb99db5
  • Python: 3.6.8
  • Platform: linux-rhel7-broadwell
  • Concretizer: original

Additional information


First, thanks very much for this package file. It doesn't work currently, but looks very promising. The upstream is clearly shaky and buggy, so props for trying to deal with that.

I'm including my best package file, which doesn't work, but might contain some breadcrumbs for moving forward. Notably, it appears that downloading AmberTools.* no longer works. Perhaps the URL has changed. Otherwise, manual download is required.

Also, there's a hard-coded "18" in the package file, though the current version is "20". Perhaps there's some way to pick up the actual version automatically.
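One way to avoid the hard-coded version (sketched here as a standalone helper; the function name is mine, and in an actual Spack package this logic would typically live in `url_for_version(self, version)`) is to derive the filename from the version string, mirroring the `file://` manual-download scheme this package file already uses:

```python
import os

def amber_url(version, basedir=None):
    """Derive the manual-download URL from the version being built.

    Hypothetical helper: in a Spack package this would be
    url_for_version(self, version), with `self` providing context.
    """
    basedir = basedir or os.getcwd()
    return "file://{0}/Amber{1}.tar.bz2".format(basedir, version)

print(amber_url("20", "/tmp"))  # file:///tmp/Amber20.tar.bz2
```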

Here's the diff from today's develop branch that I'm using. Not sure whether zapping older versions is necessary, but I was trying to remove complications.

The update option doesn't work. There seems to be a problem with update.7, possibly due to its content, which strangely identifies it as update.12. Not sure.

The -nomkl flag is due to an odd issue with the Intel (19) compiler alternative not being able to find a shared MKL library. Guessing this is due to a botch in the Amber config scripts.

diff --git a/var/spack/repos/builtin/packages/amber/ b/var/spack/repos/builtin/packages/amber/
index 6bfc4fb..0cb995c 100644
--- a/var/spack/repos/builtin/packages/amber/
+++ b/var/spack/repos/builtin/packages/amber/
@@ -23,30 +23,23 @@ class Amber(Package, CudaPackage):

     homepage = ""
-    url = "file://{0}/Amber18.tar.bz2".format(os.getcwd())
+    url = "file://{0}/Amber20.tar.bz2".format(os.getcwd())
     manual_download = True

     maintainers = ['hseara']

     version(
         '20', sha256='a4c53639441c8cc85adee397933d07856cc4a723c82c6bea585cd76c197ead75')
-    version(
-        '18', sha256='2060897c0b11576082d523fb63a51ba701bc7519ff7be3d299d5ec56e8e6e277')
-    version(
-        '16', sha256='3b7ef281fd3c46282a51b6a6deed9ed174a1f6d468002649d84bfc8a2577ae5d',
-        deprecated=True)

     resources = {
         # [version amber, version ambertools , sha256sum]
-        '20': ('20', 'b1e1f8f277c54e88abc9f590e788bbb2f7a49bcff5e8d8a6eacfaf332a4890f9'),
-        '18': ('19', '0c86937904854b64e4831e047851f504ec45b42e593db4ded92c1bee5973e699'),
-        '16': ('16', '7b876afe566e9dd7eb6a5aa952a955649044360f15c1f5d4d91ba7f41f3105fa'),
+        '20': ('21', 'f55fa930598d5a8e9749e8a22d1f25cab7fcf911d98570e35365dd7f262aaafd'),
     }

     for ver, (ambertools_ver, ambertools_checksum) in resources.items():
-                 url='{0}.tar.bz2'.format(
-                      ambertools_ver),
+                 url = "file://{0}/AmberTools{1}.tar.bz2".format(os.getcwd(),
+                                                                 ambertools_ver),
@@ -62,6 +55,8 @@ class Amber(Package, CudaPackage):
         ('20', '7', '143b6a09f774aeae8b002afffb00839212020139a11873a3a1a34d4a63fa995d'),
         ('20', '8', 'a6fc6d5c8ba0aad3a8afe44d1539cc299ef78ab53721e28244198fd5425d14ad'),
         ('20', '9', '5ce6b534bab869b1e9bfefa353d7f578750e54fa72c8c9d74ddf129d993e78cf'),
+        ('20', '10', '76a683435be7cbb860f5bd26f09a0548c2e77c5a481fc6d64b55a3a443ce481d'),
+        ('20', '11', 'f40b3612bd3e59efa2fa1ec06ed6fd92446ee0f1d5d99d0f7796f66b18e64060'),
         ('18', '1', '3cefac9a24ece99176d5d2d58fea2722de3e235be5138a128428b9260fe922ad'),
         ('18', '2', '3a0707a9a59dcbffa765dcf87b68001450095c51b96ec39d21260ba548a2f66a'),
         ('18', '3', '24c2e06f71ae553a408caa3f722254db2cbf1ca4db274542302184e3d6ca7015'),
@@ -186,6 +181,7 @@ def install(self, spec, prefix):
         conf = Executable('./configure')
         base_args = ['--skip-python',
                      '--with-netcdf', self.spec['netcdf-fortran'].prefix,
+                     '-nomkl',
         if self.spec.satisfies('~x11'):
             base_args += ['-noX11']

This version ultimately fails like so:

==> amber: Executing phase: 'install'
==> Error: ProcessError: Command exited with status 2:
    'make' '-j14' 'install'

15 errors found in build log:
     7275    gti_schedule_functions.h(108): warning #3500: field initializers are a C++11 feature
     7276          FunctionType functionType=linear;
     7277                                   ^
     7279    In file included from gpuContext.h(6),
     7280                     from gpu.cpp(23):
  >> 7281    gti_gpuContext.h(47): error: qualified name is not allowed
     7282        std::unique_ptr< GpuBuffer<bool>>       pbTLNeedNewNeighborList;
     7283        ^
     7285    In file included from gpuContext.h(6),
     7286                     from gpu.cpp(23):
  >> 7287    gti_gpuContext.h(47): error #77: this declaration has no storage class or type specifier
     7288        std::unique_ptr< GpuBuffer<bool>>       pbTLNeedNewNeighborList;
     7289        ^
     7291    In file included from gpuContext.h(6),
     7292                     from gpu.cpp(23):
  >> 7293    gti_gpuContext.h(47): error: expected a ";"
     7294        std::unique_ptr< GpuBuffer<bool>>       pbTLNeedNewNeighborList;
     7295                       ^

and so on.

Seems like a C++ language-version (standard) mismatch with these compiler versions. Not sure whether I'll have time soon to dig deeper.

General information

  • [X] I have run spack debug report and reported the version of Spack/Python/Platform
  • [X] I have run spack maintainers <name-of-the-package> and @mentioned any maintainers
  • [X] I have uploaded the build log and environment files
  • [X] I have searched the issues of this repo and believe this is not a duplicate

created time in 13 days

issue comment jupyterlab/jupyterlab

file deletion fails: invalid cross-device link

@arsenetar I see. Thanks for the detailed explanation.

FWIW, our use case is a multi-user Linux system (OOD on HPC). At least in our shop, the only plausible locations for a .Trash directory would be in $HOME or $PWD. On a single-user system, other things could work, as you say.

One troublesome case would be if the trashed file is huge, especially if a copy attempt is made (though even renames might lead to quota issues). That might be more of a problem for users of "send2trash" to deal with, though.


comment created time in a month

issue comment jupyterlab/jupyterlab

file deletion fails: invalid cross-device link

@arsenetar It might be a bit before I can test this, but eyeballing the commit, it appears reasonable.

I'm wondering, though, why not just invoke shutil.move in all cases? It appears that it already chooses between os.rename and a copy internally, as appropriate.
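A quick check of that claim (a minimal sketch; the filenames are arbitrary): shutil.move first attempts os.rename and falls back to a copy-and-delete when the rename fails, e.g. with EXDEV on a cross-device link, so a caller doesn't need to special-case devices.

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as src_dir, \
     tempfile.TemporaryDirectory() as dst_dir:
    src = os.path.join(src_dir, "doomed.txt")
    with open(src, "w") as f:
        f.write("trash me")
    # shutil.move tries os.rename first, then falls back to copy+unlink.
    dst = shutil.move(src, dst_dir)
    moved_ok = (not os.path.exists(src)) and os.path.exists(dst)

print(moved_ok)  # True
```

(Both temp dirs here are almost certainly on the same device, so this exercises the rename path; the copy fallback is the same call, just a different branch inside shutil.move.)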


comment created time in a month

issue opened geodynamics/sw4

BUG: sw4 fails to flush output on FATAL ERROR (hiding error message)

The cout stream needs to be flushed before calling MPI_Abort, otherwise the error message is not printed, at least in our case.

(Invoking with unbuffer sw4 is a workaround, but not a good one.)

Probably the code should be examined for other such instances.
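The failure mode is easy to reproduce outside sw4. Here's a small Python analogue (sw4 itself is C++, where the fix would be flushing std::cout, or writing the message to std::cerr, before the MPI_Abort call): when stdout is a pipe it is block-buffered, and an abrupt exit discards whatever is still sitting in the buffer unless it is flushed first.

```python
import subprocess
import sys

prog = (
    "import os, sys\n"
    "sys.stdout.write('FATAL ERROR: details here')\n"
    "{flush}"
    "os._exit(1)\n"  # abrupt exit, like MPI_Abort: no buffer flush
)

without = subprocess.run(
    [sys.executable, "-c", prog.format(flush="")],
    capture_output=True, text=True)
with_flush = subprocess.run(
    [sys.executable, "-c", prog.format(flush="sys.stdout.flush()\n")],
    capture_output=True, text=True)

print(repr(without.stdout))     # '' -- the message was lost in the buffer
print(repr(with_flush.stdout))  # 'FATAL ERROR: details here'
```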

created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

I'm giving up on this, unless someone else can spot a way forward. It's just soaking up too many hours.

To aid anyone who might later push on, I'll leave some breadcrumbs that might be useful.

To clarify the above-mentioned mangling of the ld command line, here's one specific example that exhibits a number of problems. The problems are

  1. In some cases, an -rpath flag is followed by multiple directories that are clearly intended to have -rpath in front of each. As soon as ld encounters a directory missing its -rpath flag, it assumes that that is a file to be loaded and fails immediately, since it's a directory.
  2. Some -rpath arguments contain colons. This does not match the ld documentation, but sort of accidentally works, since ld is itself internally accumulating a colon-delimited path. One limitation is that ld uniquifies this list if it's given properly, but this won't work if the colons appear in the arguments. This is probably just a minor performance issue, but might also be a problem for correctness.
  3. Similarly, some arguments end in colons. Not sure how the whole toolchain would react to such an "empty" directory in the path.
  4. In some cases, the -rpath argument is simply wrong. In particular, the final "/lib" or "/lib64" is being clipped off. That won't work, obviously. (This happens with -L as well.) This might sometimes "work" by then finding the library in question in one of the standard locations.
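The malformed patterns above can be spotted mechanically. Here's a rough sketch of a checker for triaging a mangled link.txt-style command line (purely heuristic, with a made-up example line; it only covers problems 1 and 2 from the list):

```python
import shlex

def suspicious_args(link_line):
    """Heuristically flag malformed -rpath arguments in a link command line."""
    problems = []
    for arg in shlex.split(link_line):
        if arg.startswith("-Wl,"):
            val = arg[4:]
            if val.startswith("/"):
                # bare directory: its -rpath flag went missing (problem 1)
                problems.append(("missing -rpath", arg))
            elif val.startswith("-rpath,") and ":" in val:
                # colon-joined paths inside one -rpath argument (problem 2)
                problems.append(("colon in -rpath", arg))
    return problems

# Made-up line exhibiting the patterns described above:
line = ("g++ -Wl,-rpath,/opt/gcc/lib64 "
        "-Wl,-rpath -Wl,/opt/hwloc/lib "       # flag and directory split apart
        "-Wl,/opt/zlib "                       # directory with no -rpath at all
        "-Wl,-rpath,/opt/mpi/lib:/opt/x/lib "  # colon-joined paths
        "-o a.out main.o")

for kind, arg in suspicious_args(line):
    print(kind, arg)
```

Something like this could be dropped into the ld-wrapper script to log offending arguments as they go by.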

Broadly, this appears to be string mangling happening either due to issues in CMake itself, or the CMake config files in this project or its dependencies.

The compilation with works fine for the "build the world" part, but does eventually fail for several packages that (I think) are from LLNL.

Although uberenv builds openmpi, I saw an error indicating that that version of mpicc and friends might not actually be seen by the rest of the build. I'm not sure about this. It certainly would be handy if uberenv would build and then use all of the compilers it needs.

Here's one example of the above. (The ld-hack/ld is just a script to capture the command lines.)


Finally, a few links that might be relevant, or at least hint at similar issues.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

It would be handy to document these version requirements somewhere obvious. Perhaps even better would be to have the build scripts detect this up front with a diagnostic. The ultimate would be to have 'uberenv' simply build the compilers it needs, though I imagine this feature might be less useful in your environment.

It does seem that this is not a Spack issue, since (I think) we see it both with and without. I'm inclined to blame CMake (probably a config file), but that's partly because I loathe CMake.

I'm trying to figure out how I might enable voluminous tracing/debugging output for the cmake runs. If someone is familiar with an easy way to do that, I'd love to hear it.

The proximal CMake bogus output seems to be CMakeFiles/conduit_relay_mpi_io.dir/link.txt, relative to directory GEOSX/uberenv_libs/builds/spack-stage-conduit-0.5.0-eetp764l2x3hhs2ce3g7jf65nxxvymyh/spack-src/spack-build/libs/relay. Is there a straightforward way to see all of the inputs that go into that file?


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

For the above, the CMake version was 3.20.5. (see my first comment)

I didn't take very good notes for my first uberenv try, but it seemed to get a fair way in. Retrying it just now, with a very vanilla environment (no modules loaded, distro is RHEL 7.9), it blows up almost immediately because the system gcc is 4.8.5, and it tries to build suite-sparse first, which requires a later gcc.

[exe: spack/bin/spack dev-build --quiet -d /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/scripts/uberenv/../.. -u hostconfig geosx@develop%gcc]
==> Warning: gcc@4.8.5 cannot build optimized binaries for "broadwell". Using best target possible: "haswell"
==> Error: Conflicts in concretized spec "geosx@develop%gcc@4.8.5+caliper~cuda~essl+hypre~hypre-cuda~mkl+petsc~pygeosx+shared+suite-sparse+trilinos build_type=RelWithDebInfo cuda_arch=none lai=trilinos arch=linux-rhel7-haswell/tihmllq"

List of matching conflicts for spec:

    suite-sparse@5.8.1%gcc@4.8.5+amd~blas-no-underscore~btf+camd+ccolamd+cholmod+colamd~csparse~cuda~cxsparse~klu+openmp+pic~rbio~spqr~tbb+umfpack arch=linux-rhel7-haswell
        ^cmake@3.18.2%gcc@4.8.5~doc+ncurses+openssl+ownlibs~qt arch=linux-rhel7-haswell
            ^ncurses@6.2%gcc@4.8.5~symlinks+termlib arch=linux-rhel7-haswell
                ^pkgconf@1.7.3%gcc@4.8.5 arch=linux-rhel7-haswell
            ^openssl@1.1.1g%gcc@4.8.5+systemcerts arch=linux-rhel7-haswell
                ^perl@5.30.3%gcc@4.8.5+cpanm+shared+threads arch=linux-rhel7-haswell
                    ^berkeley-db@18.1.40%gcc@4.8.5 arch=linux-rhel7-haswell
                    ^gdbm@1.18.1%gcc@4.8.5 arch=linux-rhel7-haswell
                        ^readline@8.0%gcc@4.8.5 arch=linux-rhel7-haswell
                ^zlib@1.2.11%gcc@4.8.5+optimize+pic+shared arch=linux-rhel7-haswell
        ^m4@1.4.18%gcc@4.8.5+sigsegv patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 arch=linux-rhel7-haswell
            ^libsigsegv@2.12%gcc@4.8.5 arch=linux-rhel7-haswell
        ^metis@5.1.0%gcc@4.8.5~gdb+int64~real64+shared build_type=Release patches=4991da938c1d3a1d3dea78e49bbebecba00273f98df2a656e38b83d55b281da1 arch=linux-rhel7-haswell
        ^openblas@0.3.10%gcc@4.8.5~consistent_fpcsr~ilp64+pic+shared threads=none arch=linux-rhel7-haswell

1. "%gcc@:4.8" conflicts with "suite-sparse@5.2.0:" [gcc version must be at least 4.9 for suite-sparse@5.2.0:]

[ERROR: failure of spack install/dev-build]

One way to fix this would be to build the compilers first in Spack, as part of building the world.

As another experiment, I used a GCC 10.2.0. This is actually from within our existing Spack tree, but without initializing Spack itself (to avoid any confusion with the uberenv use of Spack). I think this is valid, though not absolutely certain.

One notable point is that a recent 'git' is required, but it seems it is not built by uberenv and then used. So it has to exist outside of the uberenv tree, just like the compiler(s). This is a bit harder to spot, as an older 'git' does seem to "work" without an error status, but it's not clear whether it's doing the right thing or not; I'm seeing a warning like "Fetching tags only, you probably meant: git fetch --tags".

export CC=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin/gcc
export CXX=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin/g++
export F77=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin/gfortran
export FC=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin/gfortran
export LD_LIBRARY_PATH=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/lib64:/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/lib:$LD_LIBRARY_PATH
export PATH=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin:$PATH

export PATH=/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/git-2.31.1-m7r7sioduzbfg33hc6wlbvyoiktl3x7i/bin:$PATH

Beyond that, I did this

sed -i -e 's|||'  scripts/uberenv/project.json

This gets a long way in, but ultimately fails with a link error similar to the one mentioned above:

==> Installing conduit
==> No binary for conduit found: installing from source
==> Fetching
######################################################################## 100.0%
==> conduit: Executing phase: 'configure'
==> conduit: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
    'make' '-j28'

7 errors found in build log:
     816      352 |     DataType(const DataType& type);
     817          |     ^~~~~~~~
     818    [ 65%] Linking CXX shared library ../../lib/
     819    cd /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_libs/builds/spack-stage-conduit-0.5.0-eetp764l2x3hhs2ce3g7jf65nxxvymyh/spack-src/spack-build/li
            bs/relay && /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_libs/linux-rhel7-broadwell/gcc-10.2.0/cmake-3.18.2-glaktlnxnd6nnqvv7fsfgg7nfrvuqbxq/bi
            n/cmake -E cmake_link_script CMakeFiles/conduit_relay_mpi.dir/link.txt --verbose=1
     820    /gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/bin/g++ -fPIC -O2 -g -DNDEBUG -Wl,-rpath,/gpfs/
            packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/lib/gcc/x86_64-pc-linux-gnu/10.2.0 -Wl,-rpath,/gpfs/p
            ackages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-10.2.0-55xl7vwtoqeyu3gnbkhit5m3qnahf4f4/lib64 -Wl,-rpath -Wl,/gpfs/projects/hpcrcf/mcolema5/ak
            ubo-geosx/try2/GEOSX/uberenv_libs/linux-rhel7-broadwell/gcc-10.2.0/hwloc-1.11.11-ijt37r6wh2swaj4is5zbigao7b6n2yzs/lib -Wl,/gpfs/projects/hpcrcf/mcolema5/akubo-g
            eosx/try2/GEOSX/uberenv_libs/linux-rhel7-broadwell/gcc-10.2.0/zlib-1.2.11-t37xblst5onbiz2tn6d4hkrg2a5wic2o -Wl,/gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/G
            EOSX/uberenv_libs/linux-rhel7-broadwell/gcc-10.2.0/openmpi-3.1.6-e5b3hxgf5p3a2dgu63rwctcphurbcqpv/lib -L/gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/ub
            erenv_libs/linux-rhel7-broadwell/gcc-10.2.0/hwloc-1.11.11-ijt37r6wh2swaj4is5zbigao7b6n2yzs/lib -L/gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_l
            ibs/linux-rhel7-broadwell/gcc-10.2.0/zlib-1.2.11-t37xblst5onbiz2tn6d4hkrg2a5wic2o -pthread -shared -Wl,-soname, -o ../../lib/libconduit_r
   CMakeFiles/conduit_relay_mpi.dir/conduit_relay_mpi.cpp.o  -Wl,-rpath,/gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_libs/builds/spack
            roadwell/gcc-10.2.0/openmpi-3.1.6-e5b3hxgf5p3a2dgu63rwctcphurbcqpv/lib: ../../lib/ /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_li
     821    /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_libs/linux-rhel7-broadwell/gcc-10.2.0/zlib-1.2.11-t37xblst5onbiz2tn6d4hkrg2a5wic2o: file not recog
            nized: Is a directory
  >> 822    collect2: error: ld returned 1 exit status
  >> 823    make[2]: *** [lib/] Error 1
     824    make[2]: Leaving directory `/gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/GEOSX/uberenv_libs/builds/spack-stage-conduit-0.5.0-eetp764l2x3hhs2ce3g7jf65nxxvymyh
  >> 825    make[1]: *** [libs/relay/CMakeFiles/conduit_relay_mpi.dir/all] Error 2

Again, we see mangled arguments as the proximate cause: -Wl,-rpath -Wl,/gpfs/

And again, my best guess would be that this is mangled string manipulation somewhere, probably in a CMake config file of some sort.

If that can't be diagnosed, I suppose another thing to try would be to start with a container having either RHEL 7.9 or the CentOS equivalent and uberenv from there. If it succeeds, perhaps all of that would be close enough (in terms of things like libc) to be copied out and run on our hosts. Maybe. If it only builds on Ubuntu, this could still work, though experience suggests a libc clash is more likely in that case.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

For the record, looking through the TPL cmake files, it appeared that perhaps


would help. It may actually have disabled the doxygen build. Unfortunately, the build still fails later, like so:

-- Installing: /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/thirdPartyLibs/install-talapas-spack-release/axom/examples/axom/using-with-make/Makefile
-- Installing: /gpfs/projects/hpcrcf/mcolema5/akubo-geosx/try2/thirdPartyLibs/install-talapas-spack-release/axom/examples/axom/using-with-make/example.cpp
[ 93%] Completed 'axom'
[ 93%] Built target axom
[ 94%] Building CXX object CMakeFiles/tpl.dir/tpl.cpp.o
[ 95%] Linking CXX executable bin/tpl
[ 95%] Built target tpl
[ 95%] Building CXX object blt/thirdparty_builtin/googletest-master-2020-01-07/googletest/CMakeFiles/gtest.dir/src/
[ 96%] Linking CXX static library ../../../../lib/libgtest.a
[ 96%] Built target gtest
[ 96%] Building CXX object blt/thirdparty_builtin/googletest-master-2020-01-07/googletest/CMakeFiles/gtest_main.dir/src/
[ 97%] Linking CXX static library ../../../../lib/libgtest_main.a
[ 97%] Built target gtest_main
[ 97%] Building CXX object blt/tests/smoke/CMakeFiles/blt_mpi_smoke.dir/blt_mpi_smoke.cpp.o
[ 98%] Linking CXX executable ../../../tests/blt_mpi_smoke
/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/libevent-2.1.12-esh72id7jta52ghvl6egurb4bwgp5gre/lib: file not recognized: Is a directory
collect2: error: ld returned 1 exit status
make[2]: *** [tests/blt_mpi_smoke] Error 1
make[1]: *** [blt/tests/smoke/CMakeFiles/blt_mpi_smoke.dir/all] Error 2
make: *** [all] Error 2

There's no log, but this appears to be due to

# in thirdPartyLibs/build-talapas-spack-release/blt/tests/smoke/CMakeFiles/blt_mpi_smoke.dir
$ cat link.txt
/packages/spack/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/gcc-8.2.0-kot2sql3i2pckkfopvmxdmbdopuwy42t/bin/g++ -Wall -Wextra  -O3 -DNDEBUG -Wl,-rpath,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/gcc-8.2.0-kot2sql3i2pckkfopvmxdmbdopuwy42t/lib/gcc/x86_64-pc-linux-gnu/8.2.0 -Wl,-rpath,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/gcc-8.2.0-kot2sql3i2pckkfopvmxdmbdopuwy42t/lib64 -Wl,-rpath -Wl,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/hwloc-2.4.1-hkbkx5nnzw3ubtzxcpirhv4uzkexso52/lib -Wl,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/libevent-2.1.12-esh72id7jta52ghvl6egurb4bwgp5gre/lib -Wl,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/zlib-1.2.11-r2llgrncecwh3hlaqtn7e6x7nwdzap3m/lib -Wl,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/openmpi-4.0.5-efzld6vninonhliblgn2ci52aqnroile/lib -L/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/hwloc-2.4.1-hkbkx5nnzw3ubtzxcpirhv4uzkexso52/lib -L/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/libevent-2.1.12-esh72id7jta52ghvl6egurb4bwgp5gre/lib -L/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/zlib-1.2.11-r2llgrncecwh3hlaqtn7e6x7nwdzap3m/lib -pthread CMakeFiles/blt_mpi_smoke.dir/blt_mpi_smoke.cpp.o -o ../../../tests/blt_mpi_smoke  -Wl,-rpath,/gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/openmpi-4.0.5-efzld6vninonhliblgn2ci52aqnroile/lib /gpfs/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/openmpi-4.0.5-efzld6vninonhliblgn2ci52aqnroile/lib/

which, as you can clearly see :-) is bogus. The key bit is the -Wl,-rpath -Wl,/gpfs/..., which looks like something somewhere mangled one of these directories into an empty string (or inserted an empty string).

I believe I saw this also during an early attempt to use the uberenv approach. My guess is that this is an issue with a CMake file somewhere, but after running through the strace logs for quite a while during that early attempt, I never was able to spot what was going on.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

Hi @klevzoff , one more thing. Are your build recipes available somewhere? Hopefully in the form of a script or set of scripts that build everything starting from (say) a newly minted, minimal Ubuntu image? Thanks.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

@klevzoff Thanks. Based on this, I switched to GCC 8.2.0 (chosen arbitrarily, since I already had the parts built in Spack). This gets me past the problem above, but unfortunately crashes into trouble when trying to build doxygen. The issue seems to be that it can't find libiconv, even though it's there. Even trying to point cmake directly at it using LIBRARY_PATH or LD_LIBRARY_PATH fails.

As an alternative route, I tried defeating the doxygen build in various ways (since I neither need nor want it), but so far without success. This block in my host config seems not to have any effect:



Similarly, commenting out the lines in tpls.cmake has no apparent effect. Still tries to build doxygen. It's like kudzu.

I'm thinking about giving up on Spack and trying to piece together something with the environment modules we have as a next approach. Not very optimistic, though.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

@corbett5 I did spend quite a bit of time looking at the Docker images, but it doesn't appear that they help in our case. As you say, they seem to contain only the TPLs. Exasperating, as it seems like it would be easy enough to include a basic working version of GEOSX. But still, it's not clear how MPI users could benefit from GEOSX on Docker images. Copying the compiled contents out seems fraught with problems, and trying to use the containers as-is has its own set of difficulties when running on more than one host. My impression is that the Docker images are mostly only useful for developers, and conceivably for someone who wants to use GEOSX at a toy scale.


comment created time in a month

issue comment GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

The GCC is indeed coming from Spack. Here's what's loaded:

$ spack find --loaded
==> 45 installed packages
-- linux-rhel7-broadwell / gcc@8.2.0 ----------------------------
berkeley-db@18.1.40  gettext@0.21          libiconv@1.16        mpfr@4.1.0      readline@8.1
bzip2@1.0.8          git@2.31.1            libidn2@2.3.0        ncurses@6.2     tar@1.34
curl@7.76.1          git-lfs@2.11.0        libmd@1.0.3          openssh@8.5p1   xz@5.2.5
expat@2.3.0          gmp@6.2.1             libunistring@0.9.10  openssl@1.1.1k  zlib@1.2.11
gcc@11.1.0           libbsd@0.11.3         libxml2@2.9.10       pcre2@10.36     zstd@1.5.0
gdbm@1.19            libedit@3.1-20210216  mpc@1.1.0            perl@5.32.1

-- linux-rhel7-broadwell / gcc@11.1.0 ---------------------------
cmake@3.20.5          libevent@2.1.12    ncurses@6.2      openssh@8.5p1
cuda@11.2.2           libiconv@1.16      numactl@2.0.14   openssl@1.1.1k
hwloc@2.5.0           libpciaccess@0.16  openblas@0.3.15  xz@5.2.5
libedit@3.1-20210216  libxml2@2.9.10     openmpi@4.1.1    zlib@1.2.11

and here's the host config I'm using:

# detect host and name the configuration file
set(CONFIG_NAME "your-platform" CACHE PATH "")
message( "CONFIG_NAME = ${CONFIG_NAME}" )

# set paths to C, C++, and Fortran compilers. Note that while GEOSX does not contain any Fortran code,
# some of the third-party libraries do contain Fortran code. Thus a Fortran compiler must be specified.
set(CMAKE_C_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-11.1.0-b5ctlwh3lnwoystx6r6yet3ue4nikxcg/bin/gcc" CACHE PATH "")
set(CMAKE_CXX_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-11.1.0-b5ctlwh3lnwoystx6r6yet3ue4nikxcg/bin/g++" CACHE PATH "")
set(CMAKE_Fortran_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-8.2.0/gcc-11.1.0-b5ctlwh3lnwoystx6r6yet3ue4nikxcg/bin/gfortran" CACHE PATH "")

# enable MPI and set paths to compilers and executable.
# Note that the MPI compilers are wrappers around standard serial compilers.
# Therefore, the MPI compilers must wrap the appropriate serial compilers specified
set(MPI_C_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-11.1.0/openmpi-4.1.1-dvuecndxwkrbbi5p2rtkhfnakf7ydgnm/bin/mpicc" CACHE PATH "")
set(MPI_CXX_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-11.1.0/openmpi-4.1.1-dvuecndxwkrbbi5p2rtkhfnakf7ydgnm/bin/mpic++" CACHE PATH "")
set(MPI_Fortran_COMPILER "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-11.1.0/openmpi-4.1.1-dvuecndxwkrbbi5p2rtkhfnakf7ydgnm/bin/mpifort" CACHE PATH "")
set(MPIEXEC "/packages/spack/spack/opt/spack/linux-rhel7-broadwell/gcc-11.1.0/openmpi-4.1.1-dvuecndxwkrbbi5p2rtkhfnakf7ydgnm/bin/mpirun" CACHE PATH "")

# disable CUDA and OpenMP

# enable PAMELA and PVTPackage

# enable tests


It's not entirely obvious that this approach will work, but it seems possible. At the moment, I'm just hunting for "proof of life"--any recipe whatsoever to get a working compile from source. That would be a base from which to build, for options, optimizations, etc.

The code apparently compiles at one or more of the national labs, but I have no idea what those environments look like, and I suspect they are rather exotic.

I can patch the bug reported here locally, but the fact that I'm hitting it at all suggests that I'm "doing it wrong" (compiling in a way that no one has ever tried). I'm looking for a recipe that someone has seen work, and that I can reasonably reproduce.


comment created time in a month

issue opened GEOSX/GEOSX

[Bug] TPL: invalid C++ in chai or umpire (std::numeric_limits)

Describe the bug While compiling TPL according to the Quick Start guide, it errors on source file thirdPartyLibs/build-try2-release/chai/src/chai/src/tpl/umpire/src/umpire/util/allocation_statistics.cpp.

Looking within, the code is trying to use std::numeric_limits without including the <limits> header. I'm using g++ version 11.1.0, but this code is simply invalid with respect to the C++ standard.

More info here:

More generally, I've been struggling for days to find a combination of versions and configs that might compile GEOSX (outside of the national labs). If anyone has ever seen this compile on a vanilla Linux distro, it would be very useful to provide a list of dependencies and versions thought to work. Even better, a scripted build from start to tested, perhaps in the form of a Docker or Singularity recipe.

My final goal is to get it compiled with MPI, CUDA, and perhaps OpenMP. So far, I'm still batting zero. (The uberenv route looked promising, but unfortunately fails with a link error.)

created time in a month

issue comment ipfs/ipfs-docs

What keeps the last copy of a doc from disappearing?

Got it. Thanks--I thought I was missing something.


comment created time in 2 months

issue opened ipfs/ipfs-docs

What keeps the last copy of a doc from disappearing?

Would love to see everyone's first question answered at the top. Namely: what keeps the last copy of any given document from disappearing?

Apologies if this is in the docs somewhere...

created time in 2 months

issue comment SathyaBhat/spotify-dl

Add mention of 100 track limit issue to docs?

Yes, spotify-dl works correctly. In my case, I created a playlist of all of my tracks on Spotify (via either the website or the Windows 10 client). Although I did nothing special, the playlist was clipped to 100 tracks. Presumably an upstream bug of some sort.

I foolishly started with the idea that there was a paging issue, before finally discovering that the playlist itself had only 100 tracks. (Blah.)

Anyway, I'm only suggesting a "check the upstream playlist's track count first" mention.


comment created time in 2 months

issue comment SathyaBhat/spotify-dl

the --version flag should work alone

Ah, I see where you're going. It's likely too picayune to say, but arguably --output could just default to the current working directory and --url could be the (sole) positional argument.
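A hedged sketch of what that interface might look like with argparse. The option names mirror spotify_dl's flags, but this parser is purely illustrative, not the project's actual code:

```python
import argparse

parser = argparse.ArgumentParser(prog="spotify_dl")
# The URL becomes the sole positional argument...
parser.add_argument("url", help="Spotify playlist/album/track URL")
# ...and --output defaults to the current working directory.
parser.add_argument("-o", "--output", default=".",
                    help="download directory (default: current directory)")

args = parser.parse_args(["https://open.spotify.com/playlist/..."])
print(args.output)  # "." unless overridden
```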

In any case, it's working well for me, and this is my nomination for "Project of the Year"!


comment created time in 2 months

issue opened SathyaBhat/spotify-dl

the --version flag should work alone

The --version flag should work even if no other flags are given.

$ spotify_dl --version
usage: spotify_dl [-h] -l URL -o OUTPUT [-d] [-f FORMAT_STR] [-k] [-m] [-s SCRAPE] [-V] [-v]
spotify_dl: error: the following arguments are required: -l/--url, -o/--output
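For what it's worth, argparse's built-in version action already behaves the desired way: it fires the moment the flag is parsed, before the required-argument check runs. A hedged sketch (not spotify_dl's actual code) showing --version working even though --url is required:

```python
import argparse

parser = argparse.ArgumentParser(prog="spotify_dl")
parser.add_argument("-l", "--url", required=True)
# action="version" short-circuits parsing: it prints the version string and
# exits immediately, so the required -l/--url is never checked.
parser.add_argument("-V", "--version", action="version",
                    version="spotify_dl x.y.z (version string illustrative)")

try:
    parser.parse_args(["--version"])
except SystemExit as exc:
    exit_code = exc.code
print("exit code:", exit_code)  # 0, after the version is printed
```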

created time in 3 months

issue opened SathyaBhat/spotify-dl

Add mention of 100 track limit issue to docs?

Summary: Might be worth mentioning in the docs that if your playlist download unexpectedly clips at 100 tracks, it might be because the playlist only has 100 tracks.

After an hour of debugging, I realized that my playlist itself had only 100 tracks (when I was expecting 1500). Why? Who knows. I created it through the Windows Spotify client a few months ago. Maybe there was a bug in their client at that time. Anyway, I should have checked.

(This is somewhat confusing because the Spotify API has a 100-track limit that has to be paged through, so it originally looked like a spotify-dl bug. Blah.)
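To illustrate the paging, here's a sketch. `fetch_page` stands in for the real Spotify endpoint (e.g. a playlist-items call); none of this is spotify-dl's actual implementation:

```python
def all_tracks(fetch_page, page_size=100):
    """Collect every item from an endpoint that caps responses at page_size."""
    tracks, offset = [], 0
    while True:
        items = fetch_page(offset=offset, limit=page_size)["items"]
        tracks.extend(items)
        if len(items) < page_size:  # a short page means we've hit the end
            return tracks
        offset += page_size

# Fake endpoint holding 250 tracks, served at most 100 per request.
def fake_fetch(offset, limit):
    catalog = [f"track-{i}" for i in range(250)]
    return {"items": catalog[offset:offset + limit]}

print(len(all_tracks(fake_fetch)))  # 250 -- paging recovers the full list
```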

Again, thanks very much for this software!!

created time in 3 months

issue opened SathyaBhat/spotify-dl

/usr/bin/env line cleanups (no python, no carriage return)

The file spotify_dl/ is in "DOS" mode, having carriage returns at the end of each line. This doesn't matter to Python, but it does matter to the kernel for the "#!/usr/bin/env" line. Though this gets fixed (apparently) when the file is installed, it would be nice for debugging purposes if it were correct from the start. The solution is to remove all carriage returns from the file, or at least the one at the end of the first line.

Also, the 'python' command no longer exists in recent Linux distributions. (It's ambiguous whether it refers to Python 2 or Python 3.) The fix is to use #!/usr/bin/env python3 instead.
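Both fixes can be scripted with GNU sed; a sketch, using a made-up demo filename since I've truncated the real path above:

```shell
# Demo file standing in for the real script, with DOS (CRLF) line endings.
printf '#!/usr/bin/env python\r\nprint("hi")\r\n' > spotify_dl_demo.py

# Strip every trailing carriage return (what dos2unix would do)...
sed -i 's/\r$//' spotify_dl_demo.py

# ...and point the shebang at python3 explicitly.
sed -i '1s|.*|#!/usr/bin/env python3|' spotify_dl_demo.py

head -1 spotify_dl_demo.py   # now: #!/usr/bin/env python3
```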

Thank you very much for creating this package!!

created time in 3 months

issue comment HuangCongQing/Python

SpecNotFound: Environment with requirements.txt file needs a name

Ugh. I just spent 20 minutes on this, trying to discern error messages that lead off in many other directions. The 'conda' command should not be paying any attention to the file suffix at all.
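For anyone else landing here, one workaround is to wrap the pip-style list in an environment.yml that carries the name conda wants. A sketch (environment name and Python version are placeholders):

```yaml
# Minimal environment.yml wrapping an existing requirements.txt
name: myenv
dependencies:
  - python=3.9
  - pip
  - pip:
      - -r requirements.txt
```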


comment created time in 3 months