If you are wondering where this site's data comes from, please visit https://api.github.com/users/hartzell/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

hartzell/alien-samtools 1

Alien::SamTools, a Perl Alien package that fetches/builds/installs the SamTools headers and libraries.

hartzell/caddy-custom 1

Caddy web server with my custom plugin list

hartzell/.emacs.d 0

:page_facing_up: My Emacs config.

hartzell/adafruit-feather-m0-express 0

Software for the adafruit-m0-express

hartzell/Adafruit_CircuitPython_MCP9808 0

CircuitPython drivers for the MCP9808 i2c high accuracy temperature sensor.

hartzell/alembic-issue-25 0

Demo of my problem, related to Alembic issue #25.

hartzell/ansible-demo 0

A provisioning demo using Ansible

hartzell/ansible-disk 0

Format extra disks and attach them to different mount points.

hartzell/ansible-freebsd-modules 0

Ansible Modules for FreeBSD

pull request comment spack/spack

zig: add new package at v0.7.1

@adamjstewart It is for now, as far as I know. But on the other hand it's a language like golang or D.

alalazo

comment created time in 30 minutes

pull request comment spack/spack

WIP: oneapi standalones

I removed the dependence on intel-oneapi-compiler in mkl, dal, dnn. I understand now that if you are using gcc, you want the gcc openmp runtime. If you are using oneapi compilers, you want the oneapi openmp runtime. The runtimes come with the compiler and will be available without adding a dependence.

I have run into another problem and need some advice. If I have a package that depends on intel-oneapi-tbb, and the package is built with gcc, then it will install intel-oneapi-tbb%gcc. If I build the package with icc, then it will ALSO install intel-oneapi-tbb%oneapi. You don't need 2 separate installs of tbb. If the only issue is wasted disk space, then it doesn't seem like a big issue. Is it possible that an application could be using 2 different TBBs because it mixed compilers?
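
For illustration, a minimal sketch of the situation described above, with a made-up package name and URL; the install commands in the trailing comments restate the concern, not a settled Spack behavior:

from spack import *


class MyTbbApp(Package):
    """Hypothetical application that links against oneTBB."""

    homepage = "https://example.com/my-tbb-app"        # invented
    url = "https://example.com/my-tbb-app-1.0.tar.gz"  # invented

    version('1.0', sha256='...')  # placeholder

    # The dependency at the heart of the question above:
    depends_on('intel-oneapi-tbb')

# Concretizing the same package with two compilers installs the
# dependency twice:
#   spack install my-tbb-app %gcc     ->  intel-oneapi-tbb%gcc
#   spack install my-tbb-app %oneapi  ->  intel-oneapi-tbb%oneapi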

rscohn2

comment created time in 36 minutes

pull request comment spack/spack

zig: add new package at v0.7.1

Is this the only package that provides ziglang? I'm not sure if a virtual provider is needed.
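
For context, a sketch of what a virtual provider would look like if one were wanted; treating ziglang as a virtual spec here is purely hypothetical:

from spack import *


class Zig(Package):
    """Sketch only, not the actual zig recipe."""

    homepage = "https://ziglang.org/"

    # Hypothetical: declare the package as a provider of a
    # 'ziglang' virtual spec ...
    provides('ziglang')

# ... so that dependents could write depends_on('ziglang') and be
# satisfied by any package declaring provides('ziglang').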

alalazo

comment created time in 37 minutes

issue comment hashicorp/terraform

Module source lines should allow variable interpolation on RHS values.

@till - Thanks. I think we solved this issue back then in a completely different way. It probably involved a python-based wrapper I wrote around terraform. Or we stopped using the https method and switched to the ssh method instead. I forget.

The wrapper allows us to define a config file which can take all sorts of URIs for where the terraform exists. If they're git-based URIs, it'll use git to download to a known location, then invoke terraform with a local path instead. Same for https URIs.

It would still be cool to be able to do this though.
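
A minimal sketch of the kind of wrapper described above; the function names, config handling, and URI checks are invented for illustration:

import os
import subprocess
import tempfile

def resolve_source(uri, workdir):
    """Clone git-based URIs to a known location; pass local paths through."""
    if uri.startswith('git@') or uri.endswith('.git'):
        dest = os.path.join(workdir, 'module')
        subprocess.run(['git', 'clone', uri, dest], check=True)
        return dest
    return uri

def run_terraform(source_uri, *args):
    """Resolve the source, then invoke terraform with a local path."""
    with tempfile.TemporaryDirectory() as tmp:
        path = resolve_source(source_uri, tmp)
        subprocess.run(['terraform', *args], cwd=path, check=True)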

pll

comment created time in 43 minutes

push event spack/spack

rexcsn

commit sha 16c07e6abdec3e3c500934d1b93841519226a752

aws-parallelcluster v2.10.2 (#22047) Signed-off-by: Rex <shuningc@amazon.com>

view details

push time in an hour

PR merged spack/spack

aws-parallelcluster v2.10.2 new-version

Signed-off-by: Rex shuningc@amazon.com

+4 -4

1 comment

1 changed file

rexcsn

pr closed time in an hour

Pull request review comment spack/spack

Improve R package creation

 If you only specify the URL for the latest release, your package will no longer be able to fetch that version as soon as a new release comes out. To get around this, add the archive directory as a ``list_url``.
+
+^^^^^^^^^^^^^^^^^^^^^
+Bioconductor packages
+^^^^^^^^^^^^^^^^^^^^^
+
+Bioconductor packages are setup in a similar way to CRAN packages, but there
+are some very important distinctions. Bioconductor packages can be found at:
+https://bioconductor.org/. Bioconductor packages are R packages and so follow
+the same packaging scheme as CRAN packages. What is different is that
+Bioconductor itself is versioned and released. This scheme, using the
+Bioconductor package installer, allows further specification of the minimum
+version of R as well as further restrictions on the dependencies between
+packages than what is possible with the native R packaging system. Spack can
+not replicate these extra features and thus Bioconductor packages in Spack need
+to be managed as a group during updates in order to maintain package
+consistency with Bioconductor itself.
+
+Another key difference is that, while previous versions of packages are
+available, they are not available from a site that can be programmatically set,
+thus a ``list_url`` attribute can not be used. However, each package is also
+available in a git repository, with branches corresponding to each Bioconductor
+release. Thus, it is always possible to retrieve the version of any package
+corresponding to a Bioconductor release simply by fetching the branch that
+corresponds to the Bioconductor release of the package repository. For this
+reason, spack Bioconductor R packages use the git repository, with the commit
+of the respective branch used in the ``version()`` attribute of the package.
+
+^^^^^^^^^^^^^^^^^^^^^^^^
+cran and bioc attributes
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Much like the ``pypi`` attribute for python packages, due to the fact that R
+packages are obtained from specific repositories, it is possible to setup shortcut
+attributes that can be used to set ``homepage``, ``url``, ``list_url``, and
+``git``. For example, the following ``cran`` attribute:
+
+.. code-block:: python
+
+   cran = 'caret/caret_6.0-86.tar.gz'

Yes, but if a tarball is always available, you could simply set cran = 'caret' and set the url to a fake version like {cran}_1.2.3.tar.gz.
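
To make the two forms concrete, a sketch of how they would look in a package file; the short form and the fake-version url template are the proposal in this comment, not settled behavior:

class RCaret(RPackage):
    # Full form from the docs: pins a specific tarball path
    cran = 'caret/caret_6.0-86.tar.gz'

    # Proposed short form: the name only, with the url derived from a
    # template such as '{cran}_1.2.3.tar.gz' behind the scenes
    # cran = 'caret'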

glennpj

comment created time in an hour

Pull request review comment spack/spack

Improve R package creation

(quotes the same Bioconductor documentation hunk shown in full above)

I could be missing something but I think the form with the tarfile is needed for setting the url attribute. The bioc attribute only needs to set the git attribute so it uses the simpler form.
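
And a sketch of the bioc counterpart described here, which only needs to derive the git attribute; the package name and commit are placeholders:

class RBiocgenerics(RPackage):
    bioc = 'BiocGenerics'  # derives the git repository URL

    # Versions pin a commit on the matching Bioconductor release branch
    version('0.36.0', commit='<commit on the RELEASE_3_12 branch>')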

glennpj

comment created time in 2 hours

Pull request review comment spack/spack

Improve R package creation

(quotes the opening of the same Bioconductor documentation hunk shown above, through "consistency with Bioconductor itself.")

I have thought of this, and think I suggested it in the past. I am on the fence about it though. First off, a bundle package would not address the fact that bioconductor packages would need to be updated as a group. It would not help with package creation, PR review, or updates. In fact, it would add one more level as new and updated packages would need to be added to the bundle. All of the bioconductor packages would need to be added with version specs to avoid the possibility of some packages picking up a newer version of a dependency if an older bioconductor bundle is installed. The package file would be quite large and ever growing. Finally, I am not sure that people really want to install all of the spack bioconductor packages, although that may be viewed as reasonable if the bundle package existed.
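
A sketch of the kind of bundle package being weighed in this comment, to make the maintenance burden concrete; all names and versions are illustrative:

class BioconductorBundle(BundlePackage):
    """Hypothetical bundle pinning every Bioconductor package."""

    version('3.12')

    # Every member needs an exact pin per Bioconductor release to keep
    # the group consistent, so the file grows with each package and release.
    depends_on('r-biocgenerics@0.36.0', when='@3.12')
    depends_on('r-biobase@2.50.0', when='@3.12')
    # ... one entry per package, per release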

glennpj

comment created time in 2 hours

issue closed hashicorp/terraform

Error creating route

#Public
resource "aws_route_table" "public_r" {
  vpc_id = aws_vpc.main_test.id

  route {
    cidr_block = aws_subnet.main_subnet1.cidr_block
  }

  tags = {
    Name = "main_public"
  }

  depends_on = [aws_vpc.main_test, aws_subnet.main_subnet1]
}

resource "aws_route_table_association" "public_assoc" {
  route_table_id = aws_route_table.public_r.id
  subnet_id      = aws_subnet.main_subnet1.id

  depends_on = [aws_route_table.public_r, aws_subnet.main_subnet1]
}

#Private
resource "aws_route_table" "private_r" {
  vpc_id = aws_vpc.main_test.id

  route {
    cidr_block = aws_subnet.main_subnet2.cidr_block
  }

  tags = {
    Name = "main_private"
  }

  depends_on = [aws_vpc.main_test, aws_subnet.main_subnet2]
}

ERROR

Error: error creating route: one of egress_only_gateway_id, gateway_id, instance_id, nat_gateway_id, local_gateway_id, transit_gateway_id, vpc_endpoint_id, vpc_peering_connection_id, network_interface_id must be specified

closed time in 2 hours

umarkhan207322405

issue comment hashicorp/terraform

Error creating route

Hello!

We use GitHub issues for tracking bugs and enhancements, rather than for questions. While we can sometimes help with certain simple problems here, it's better to use the community forum where there are more people ready to help. The GitHub issues here are monitored only by our few core maintainers.

Based on the information you've provided, it looks like this doesn't represent a specific bug or feature request, so I'm going to close it. Please do feel free to ask your question in the community forum. Thanks!

umarkhan207322405

comment created time in 2 hours

issue comment hashicorp/terraform

Module source lines should allow variable interpolation on RHS values.

I'd really like this feature as well.

@pll It's been almost four years. I think in your case you might be able to get away with a .netrc file, though:

machine gitlab.com
login gitlab-ci-token
password your-token-here

pll

comment created time in 2 hours

pull request comment spack/spack

aws-parallelcluster v2.10.2

Hi, thank you for the suggestions. I've addressed the comments and the PR is updated.

rexcsn

comment created time in 2 hours

issue opened hashicorp/terraform

Error creating route

(issue body identical to the configuration and error quoted in the "issue closed hashicorp/terraform" entry above)

created time in 2 hours

issue comment spack/spack

Installation issue: font-util

FYI, I just now also ran into this bug and fixed it with the diff above.

mamelara

comment created time in 3 hours

push event spack/spack

Desmond Orton

commit sha 8e1b62ee68df19e1f16f22cc30138ec47ce2ea2b

New package at r-spades at 2.0.6 (#21784)

view details

push time in 3 hours

PR merged spack/spack

New package at r-spades at 2.0.6 R new-package
+29 -0

3 comments

1 changed file

dorton21

pr closed time in 3 hours

issue comment hashicorp/terraform

Unable to dereference set values

That sounds correct to me! I tried to find information about this in the AWS provider docs before replying to this issue to no avail, so adding docs explaining what to expect from this set and how to use it in practice would be very valuable.

ablackrw

comment created time in 3 hours

issue comment hashicorp/terraform

Unable to dereference set values

This discussion may point to a documentation or design bug in the hashicorp/aws provider.

The block_device_mappings set is expected to contain at least one element. If it did not have any elements, it would be defining a virtual machine image that lacks an associated file system. As such, things would likely fall apart quickly. In this particular case, the entity described by the data.aws_ami object is expected to contain only a single definition. If more than one element was expected in the set, you would need to identify which one of the elements should be associated with the root_block_device structure in the aws_instance definition. This would possibly be done via pattern matching on the device_name value in the set element. The remainder of the set elements would likely be referenced in other structures (likely ebs_block_device).

ablackrw

comment created time in 3 hours

issue closed hashicorp/terraform

variables in module source

Current Terraform Version

❯ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/terraform-provider-openstack/openstack v1.35.0
+ provider registry.terraform.io/terraform-providers/ignition v1.2.1

Your version of Terraform is out of date! The latest version
is 0.14.7. You can update by downloading from https://www.terraform.io/downloads.html

Use-cases

We are setting up different environments with Terraform and while we would like to continue iterating on our Terraform related setup, we can't roll out everything to already-set-up environments to update them constantly. This is why we decided to version (git tag) our terraform modules and currently install them like so:

module "foobar" {
  source = "git@github.com:org/terraform-repo//foobar?ref=a.b.c"`
}

Attempted Solutions

I can provide multiple entry points (as in .tf files) depending on when infrastructure was created. This works, but I can't really keep things DRY, and I end up repeating boilerplate. It's doable of course, but being allowed to use a variable in source would make this part a lot shorter and more maintainable.

My current solution revolves around using subdirectories where I duplicate the boilerplate and symlink parts of it "in" to keep it lean. I've been also considering another Makefile to chain it all together. But that ends up being complicated as well at some point.

Proposal

I'd like to be able to inject the source via a variable, or a variable into the source.

# I can provide a default, but override via .tfvars 
variable "module_version" {
  default = "1.0.0"
}

module "foobar" {
  source = "git@github.com:organisation/repository//foobar-module?ref=${var.module_version}"`
}

# as an alternative, the complete URI to the module
module "foobar" {
  source = var.module_source
}

References

  • #1145
  • #1439

Wanted to add how much I already love what modules have become. It's come a long way from when I first started toying with this, and it supports us on an almost daily basis in our work.

closed time in 3 hours

till

issue comment hashicorp/terraform

variables in module source

Duplicate of #14745

till

comment created time in 3 hours

issue opened hashicorp/terraform

variables in module source

(issue body identical to the one quoted in the "issue closed hashicorp/terraform" entry above)

created time in 3 hours

pull request comment spack/spack

openfoam: disable FPE handling for Fujitsu compiler

@alalazo or @adamjstewart up for a merge?

olesenm

comment created time in 3 hours

Pull request review comment spack/spack

py-chainer: Add test method for ChainerMN (continued)

 class PyChainer(PythonPackage):
     depends_on('py-filelock', type=('build', 'run'))
     depends_on('py-protobuf@3:', type=('build', 'run'))
     depends_on('py-typing@:3.6.6', when='@:6', type=('build', 'run'))
+
+    # Dependencies only required for test of ChainerMN
+    depends_on('py-matplotlib', type=('build', 'run'), when='+mn')
+    depends_on('py-mpi4py', type=('build', 'run'), when='+mn')
+    depends_on("mpi", type=("build", "run"), when='+mn')
+
+    @run_after('install')
+    def cache_test_sources(self):
+        if '+mn' in self.spec:
+            self.cache_extra_test_sources("examples")
+
+    def test(self):
+        if "+mn" in self.spec:
+            # Run test of ChainerMN
+            test_dir = self.test_suite.current_test_data_dir
+
+            mnist_dir = join_path(
+                self.install_test_root, "examples", "chainermn", "mnist"
+            )
+            mnist_file = join_path(mnist_dir, "train_mnist.py")
+            mpi_name = self.spec["mpi"].prefix.bin.mpirun
+            python_exe = self.spec["python"].command.path
+            opts = [
+                "-n",
+                "4",
+                python_exe,
+                mnist_file,
+                "-o",
+                test_dir,
+            ]
+            env["OMP_NUM_THREADS"] = "4"
+
+            # set LD_PRELOAD for Fugaku
+            if (self.spec.target == 'a64fx' and
+                self.spec['mpi'].name == 'fujitsu-mpi'):
+                lib_path = join_path(
+                    "/usr", "lib", "FJSVtcs", "ple", "lib64", "libpmix.so"
+                )
+                if os.path.exists(lib_path):
+                    env["LD_PRELOAD"] = lib_path

I'm still a bit concerned about this section of code because it relies on a specific file on Fugaku and won't work on other machines. Is libpmix something we could add a dependency on?
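
If a dependency is acceptable, one possible shape for it, sketched under the assumption that Spack's pmix package supplies the library in question:

# Hypothetical alternative to the hard-coded Fugaku path:
depends_on('pmix', when='+mn')

# ... and in test(), look the library up through the spec instead:
#     env['LD_PRELOAD'] = self.spec['pmix'].libs.joined()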

ketsubouchi

comment created time in 3 hours

pull request comment spack/spack

py-anuga: add new package

For exporting environment variables, you can do something like:

def setup_build_environment(self, env):
    if self.spec['mpi'].name == 'mpich':
        env.set('ANUGA_PARALLEL', 'mpich2')
    elif self.spec['mpi'].name == 'openmpi':
        env.set('ANUGA_PARALLEL', 'openmpi')

For adding a new version for the Python 3 branch, see https://spack.readthedocs.io/en/latest/packaging_guide.html#fetching-from-code-repositories
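
For the Python 3 branch, the linked docs boil down to something like the following; the repository URL and branch name here are assumptions, not values from this PR:

git = 'https://github.com/GeoscienceAustralia/anuga_core.git'

version('develop', branch='develop')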

adamjstewart

comment created time in 4 hours

PR opened spack/spack

py-pandas: add v1.2.3 new-version

https://pandas.pydata.org/pandas-docs/version/1.2.3/whatsnew/v1.2.3.html

+1 -0

0 comments

1 changed file

pr created time in 4 hours

pull request comment spack/spack

add `spack test list --all`

It's a bit slow - I'm not sure there is a way to speed it up by caching some kind of information? It looks like it's looking at every package class, so the slowness makes sense (that's a lot of looking).

for this one, we're planning to cache package metadata (maybe in json). Then we could load fast (and not have to parse as much python)
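
A rough illustration of the caching idea (not Spack's actual design): walk the package classes once, persist what spack test list needs as JSON, and reload that on later runs instead of re-importing every class:

import json

def build_test_cache(package_classes, path='test-cache.json'):
    # Record, per package, whether it defines its own test() method.
    meta = {pkg.name: 'test' in vars(pkg) for pkg in package_classes}
    with open(path, 'w') as f:
        json.dump(meta, f)

def load_test_cache(path='test-cache.json'):
    with open(path) as f:
        return json.load(f)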

tgamblin

comment created time in 4 hours

Pull request review comment spack/spack

geant4(-data): Add version 10.7.1

 class Geant4Data(BundlePackage):
     depends_on("g4emlow@7.13", when='@10.7.0:10.7.9999')
     depends_on("g4photonevaporation@5.7", when='@10.7.0:10.7.9999')
     depends_on("g4radioactivedecay@5.6", when='@10.7.0:10.7.9999')
-    depends_on("g4particlexs@3.1", when='@10.7.0:10.7.9999')
+    depends_on("g4particlexs@3.1.1", when='@10.7.1:10.7.9999')
+    depends_on("g4particlexs@3.1", when='@10.7.0:10.7.0')

Yep, that's the case! The canonical list is always in the G4DatasetDefinitions.cmake file, and it's a bug if the website download page conflicts with that.

ChristianTackeGSI

comment created time in 4 hours

issue comment hashicorp/terraform

Terraform apply not upgrading remote state from 0.12 to 0.13

Hi @Kardi5. I'm unable to reproduce this issue, and the additional details you describe around using Ansible make it difficult to understand what the problem could be here.

Here's what I did:

  1. Create a simple Terraform config:

    terraform {
      backend "consul" {
        path = "27952"
      }
    }
    
    resource "null_resource" "none" {
    }
    
  2. Terraform 0.12.30: run terraform init and terraform apply -auto-approve

  3. Verify the state exists using the Consul UI

  4. Terraform 0.13.6: run terraform init and terraform apply, see that there are no changes

  5. Check the state using the Consul UI and see that it has been upgraded

(There should be no difference here between any of the remote state backends, and I don't have any easy way to set up an Azure Storage Container backend.)

Are you able to adjust these simple reproduction steps to show the issue you're seeing? Removing as many of the complex details as possible would help us find the root problem here, so please try using Terraform directly instead of via Ansible.

Kardi5

comment created time in 4 hours