micw/camunda-schemaupdate-liquibase 2

Liquibase-based schema updates for Camunda

micw/charts 2

Curated applications for Kubernetes

micw/crowd-ldap-server 2

Implementation of an LDAP server that delegates authentication to an Atlassian Crowd installation using the Crowd REST API.

evermind/docker-froxlor 1

Froxlor based webhosting on docker

micw/ArduinoProjekte 1

My private Arduino projects

micw/davis-to-gsm 1

Project for transmitting data from a Davis weather station via GSM

evermind/docker-alpine-openjdk8-service 0

Base image for openjre8 based service

push event micw/jenkins-k8s-pipeline

micw

commit sha 9cf4c25ffca03fbb95052bd4b4cbed3478f6d402

Correct docker port

push time in 3 days

push event micw/jenkins-k8s-pipeline

micw

commit sha da6868fb433630d30f1c1a59238a29607873756b

Correct DOCKER_HOST format

push time in 3 days

issue opened arakelian/docker-junit-rule

Waiting for the same log line twice does not work anymore

I'm starting a mariadb image which starts mariadb twice (once for setup, once to serve requests). Unfortunately, there's no way to distinguish between those. In 2.x, I could simply use container.waitForLog("mysqld: ready for connections.","mysqld: ready for connections."); to wait for both log lines. In 4.x this does not work anymore; it waits only for the 1st occurrence. Example Rule:

	@ClassRule
	public static DockerRule mysqlRule = new DockerRule(ImmutableDockerConfig.builder()
			.image("mariadb:10.4")
			.addCreateContainerConfigurer(create -> {
				create.withName("baseapp-mariadb-" + UUID.randomUUID().toString());
				create.withExposedPorts(mysqlPort);
				create.withEnv("MYSQL_ROOT_PASSWORD=r", "MYSQL_DATABASE=db_meta");
			})
			.addHostConfigConfigurer(hostConfig -> {
				hostConfig.withAutoRemove(true);
				hostConfig.withPortBindings(new PortBinding(Binding.empty(), mysqlPort));
			})
			.addStartedListener(container -> {
				// MySQL is started twice within the container - we need to wait for the startup message twice
				container.waitForLog("mysqld: ready for connections.", "mysqld: ready for connections.");
				container.waitForPort(mysqlPort);
			})
			.build());

created time in 3 days

issue opened arakelian/docker-junit-rule

README does not reflect new API

Hi, with 4.x the API has changed, but the README still documents the old API of the rule.

created time in 3 days

issue comment AntennaPod/AntennaPod

java.io.IOException: No such file or directory after importing Database

I installed AntennaPod on one of my own old phones and imported the database (but did not copy the old file structure). Downloads fail for those podcasts too. There are several podcasts added but only some of them fail. I can send you the database (privately) if that helps.

micw

comment created time in 11 days

issue comment AntennaPod/AntennaPod

java.io.IOException: No such file or directory after importing Database

I updated the version. I had restored the old data dir, still the same issue. When I delete/re-add the podcast, it works.

Could it be that the download path is stored along with the podcast (and restored on database import) and has changed on the new ROM, so that the podcast tries to download to a non-existent location?

micw

comment created time in 11 days

issue comment AntennaPod/AntennaPod

java.io.IOException: No such file or directory after importing Database

What I do not understand is that the target directory actually exists (despite what the exception says).

Since it's not my own device, I copied the database backup so that I can potentially reproduce it on my own device.

micw

comment created time in 11 days

issue opened AntennaPod/AntennaPod

java.io.IOException: No such file or directory after importing Database

App version: latest (Google Play)

Android version: 7.1.2

Device model: Samsung S3 Mini

Current behaviour:

06-27 20:16:25.342 4055 8140 W System.err: java.io.IOException: No such file or directory
06-27 20:16:25.349 4055 8140 W System.err: at java.io.UnixFileSystem.createFileExclusively0(Native Method)
06-27 20:16:25.349 4055 8140 W System.err: at java.io.UnixFileSystem.createFileExclusively(UnixFileSystem.java:280)
06-27 20:16:25.349 4055 8140 W System.err: at java.io.File.createNewFile(File.java:948)
06-27 20:16:25.349 4055 8140 W System.err: at de.danoeh.antennapod.core.service.download.HttpDownloader.download(SourceFile:186)
06-27 20:16:25.349 4055 8140 W System.err: at de.danoeh.antennapod.core.service.download.Downloader.call(SourceFile:46)
06-27 20:16:25.349 4055 8140 W System.err: at de.danoeh.antennapod.core.service.download.Downloader.call(SourceFile:15)
06-27 20:16:25.349 4055 8140 W System.err: at java.util.concurrent.FutureTask.run(FutureTask.java:237)
06-27 20:16:25.349 4055 8140 W System.err: at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:428)
06-27 20:16:25.349 4055 8140 W System.err: at java.util.concurrent.FutureTask.run(FutureTask.java:237)
06-27 20:16:25.349 4055 8140 W System.err: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
06-27 20:16:25.349 4055 8140 W System.err: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
06-27 20:16:25.349 4055 8140 W System.err: at java.lang.Thread.run(Thread.java:761)
06-27 20:16:25.349 4055 8140 D HttpDownloader: onFail() called with: reason = [ERROR_IO_ERROR], reasonDetailed = [No such file or directory]

Steps to reproduce:

I installed a new ROM on my device. So I first backed up the database, wiped and reinstalled the device, installed AntennaPod and re-imported the database.

Now all my podcasts are there, but I cannot download anymore (see the exception above).

When I delete and re-add the Podcasts, downloading works again.

created time in 11 days

push event micw/jenkins-k8s-pipeline

micw

commit sha 9771657e299845924d563b34c134c6b97f432487

Update JenkinsPipelineModel.groovy

push time in 13 days

push event micw/jenkins-k8s-pipeline

micw

commit sha bc78b6af050b1ce30d34cd2add23d10722313c6b

Update README.md

push time in 13 days

issue comment ghedo/pflask

Build errors

Got it to build by declaring that variable "extern" (as in https://stackoverflow.com/questions/11072244/c-multiple-definitions-of-a-variable) - no idea what side effects that has ^^

micw

comment created time in 13 days

issue comment ghedo/pflask

Build errors

I installed waf and python-sphinx from the system packages (Arch Linux) and used the recent waf version (2.0.20). The original error seems to be an issue with Python 3.

Now I get:

[17/19] Linking build/pflask
/usr/bin/ld: src/cgroup.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/dev.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/machine.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/mount.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/netif.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/nl.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/path.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/pflask.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/printf.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/pty.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/sync.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/user.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
/usr/bin/ld: src/util.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: multiple definition of `use_syslog'; src/capabilities.c.1.o:/home/mwyraz/git/pflask/build/../src/printf.h:42: first defined here
collect2: error: ld returned 1 exit status
micw

comment created time in 13 days

issue opened ghedo/pflask

Build errors

Hi, I wanted to give it a try but I get:

[mwyraz@mw-t470s ✓ ~/git/pflask $ ./bootstrap.py 
Downloading http://waf.io/pub/release/waf-1.8.6...
Checksum verified.
[mwyraz@mw-t470s ✓ ~/git/pflask $ ./waf configure
Traceback (most recent call last):
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Node.py", line 293, in ant_iter
    raise StopIteration
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Scripting.py", line 103, in waf_entry_point
    run_commands()
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Scripting.py", line 160, in run_commands
    parse_options()
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Scripting.py", line 133, in parse_options
    Context.create_context('options').execute()
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Options.py", line 141, in execute
    super(OptionsContext,self).execute()
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Context.py", line 92, in execute
    self.recurse([os.path.dirname(g_module.root_path)])
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Context.py", line 133, in recurse
    user_function(self)
  File "/home/mwyraz/git/pflask/wscript", line 19, in options
    opt.load('compiler_c')
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Context.py", line 89, in load
    fun(self)
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Tools/compiler_c.py", line 36, in options
    opt.load_special_tools('c_*.py',ban=['c_dumbpreproc.py'])
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Context.py", line 296, in load_special_tools
    lst=self.root.find_node(waf_dir).find_node('waflib/extras').ant_glob(var)
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Node.py", line 342, in ant_glob
    ret=[x for x in self.ant_iter(accept=accept,pats=[to_pat(incl),to_pat(excl)],maxdepth=kw.get('maxdepth',25),dir=dir,src=src,remove=kw.get('remove',True))]
  File "/home/mwyraz/git/pflask/.waf3-1.8.6-815b991931c187cd7c4e2bcfeb5f4a5d/waflib/Node.py", line 342, in <listcomp>
    ret=[x for x in self.ant_iter(accept=accept,pats=[to_pat(incl),to_pat(excl)],maxdepth=kw.get('maxdepth',25),dir=dir,src=src,remove=kw.get('remove',True))]
RuntimeError: generator raised StopIteration

created time in 13 days

created tag micw/docker-mpd

tag v1.0

Dockerized MPD

created time in 14 days

created branch micw/docker-mpd

branch : master

created branch time in 14 days

created repository micw/docker-mpd

Dockerized MPD

created time in 14 days

push event evermind/docker-froxlor

Travis CI User

commit sha c5ca0c6d08fa4251bb3ef00ca2cbfffc39e7f196

Travis build

push time in 16 days

push event evermind/docker-froxlor

Michael Wyraz

commit sha ce6439a04028c9f3846158e51cb10833b9740919

Correct paths for php in readme

push time in 17 days

push event evermind/docker-geoserver

micw

commit sha 8989fc0e454148ed628e38b4e96523c336a76ba0

Update Dockerfile

push time in 17 days

push event evermind/docker-geoserver

micw

commit sha 5066a3df192018c10cee863cb2ac0ff73b1c7dfc

Unzip to directory (required since 2.16.3)

push time in 17 days

issue comment evermind/docker-froxlor

Password for database

FYI, I added a "NOTES.txt" to the chart that prints the required steps after deployment

leobr2014

comment created time in 21 days

push event evermind/docker-froxlor

Michael Wyraz

commit sha d7640269414573dfa6d7ace8196e79004cbc2724

Add setup notes to helm chart

push time in 21 days

issue comment evermind/docker-froxlor

Password for database

Please elaborate in detail. You need to create your own values file with the settings you want to override (including the MySQL password and rootPassword). Then, later in the Froxlor setup, you can use these passwords. Ensure that you use the correct host name (which is the service name of the MySQL service in k8s). A rough sketch is below.
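
For illustration only, the flow could look roughly like this - the value keys and chart path are placeholders, check the chart's values.yaml for the real names:

# sketch: put the passwords you want to use into your own values file
cat > my-values.yaml <<'EOF'
mysql:
  rootPassword: choose-a-root-password
  password: choose-a-froxlor-db-password
EOF
# install/upgrade with the overrides, then enter the same passwords in the Froxlor setup,
# using the k8s service name of the mysql service as the database host
helm upgrade --install froxlor ./froxlor-chart -f my-values.yaml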

leobr2014

comment created time in 21 days

push event evermind/docker-froxlor

Michael Wyraz

commit sha ab2ff648a20d1d1e2b3f5b4ca6ab223beb36935a

Fixes, phpmyadmin support

push time in 22 days

created tag evermind/docker-froxlor

tag v0.4.0

Froxlor based webhosting on docker

created time in 22 days

created tag evermind/docker-froxlor

tag v0.3.0

Froxlor based webhosting on docker

created time in 22 days

push event evermind/docker-froxlor

Michael Wyraz

commit sha 837401af20e7e4fca3a0b92f00397e7051e13bb3

Run initial tasks twice (because first run might fail if users are not created yet)

push time in 22 days

push event evermind/docker-froxlor

Michael Wyraz

commit sha 371557c395123067383c579c181f96056d2854ec

Update readme

Michael Wyraz

commit sha ff17a42cdc069173538c32805c719761a31a9f8e

Add update script, allow to set FTP masquerade address

push time in 23 days

push event evermind/docker-froxlor

Travis CI User

commit sha cd3245571299ba68a03494f7b46ae5c7fa8e17b7

Travis build

push time in a month

issue comment bigbluebutton/bigbluebutton

The webcam image flipped horizontally

I'd say a "UX flaw that bugs so many users" counts as a critical bug...

DronKram

comment created time in a month

issue comment matomo-org/matomo

hosting matomo in subdirectory

Can't remember (it was 1.5 years ago) and I don't use it anymore.

courtens

comment created time in a month

push event evermind/docker-froxlor

Travis CI User

commit sha 3d6afbee78cf0b5674a5bf8c1be1efcf6a952431

Travis build

push time in a month

push event evermind/docker-froxlor

Michael Wyraz

commit sha 93cc26d4abd411004558c74628fac759624208b9

Typo in values.yaml

push time in a month

created tag evermind/docker-elasticsearch-modules-repo-s3-ingest-attachment

tag 7.7.1

ElasticSearch With modules for Snapshot into S3 and ingest for attachments

created time in a month

push event evermind/docker-elasticsearch-modules-repo-s3-ingest-attachment

micw

commit sha 4ef488452a32e7bcc25629c61282afec28f42d15

bump version to 7.7.1

push time in a month

pull request comment psi-4ward/docker-powerdns

feat: Handle auto DB upgrade

The schema version should be stored along with the database, so the best way to do it is to store it in a custom table. This table can also be used to lock the database during the schema update. There are several tools that do it exactly this way (Liquibase, Flyway, sqlalchemy-migrate, ...).
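
A rough sketch of that approach with plain MySQL (database, table and lock names are made up for illustration):

# sketch of the version-table approach using MySQL named locks
mysql pdns <<'EOF'
CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);
-- serialize upgrades so that only one container migrates at a time
SELECT GET_LOCK('pdns_schema_upgrade', 60);
-- ...apply all migration scripts newer than the stored version here...
UPDATE schema_version SET version = 5;
SELECT RELEASE_LOCK('pdns_schema_upgrade');
EOF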

madmath03

comment created time in a month

issue comment ngoduykhanh/PowerDNS-Admin

sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file

Hi, I ran into the same problem. Only doing both, using 4 slashes and chmodding the data dir, helped. At least the latter should be done by the Docker image itself.

Fossil01

comment created time in a month

issue comment kubernetes-client/python-base

kube_config.py fails to load config if no context is set

/remove-lifecycle stale

micw

comment created time in a month

issue comment Mailu/helm-charts

Allow choosing between separate env vars or a single ConfigMap

Please add a list of inconsistencies so that I can check it.

frgomes

comment created time in a month

push event Mailu/helm-charts

Travis CI User

commit sha a3ed9a73ec2565add5fe896fe519eae03b89fcc6

Travis build

push time in a month

push event Mailu/helm-charts

Michael Wyraz

commit sha d66154a6e40a0684e677a99dab97503a169f1a29

Fix #55

Michael Wyraz

commit sha 48a4561a225283887a32236b259bd0936a28b4d2

Add warning that this won't work on most cloud providers

push time in a month

push event Mailu/helm-charts

Michael Wyraz

commit sha 40e23c1f05ee4b1fd2a5a79ca356d5f53909384f

Add warning that this won't work on most cloud providers

push time in a month

push event evermind/docker-froxlor

Travis CI User

commit sha b4de360429cdf2587b42227237931e0706d0caae

Travis build

push time in a month

push event evermind/docker-froxlor

Michael Wyraz

commit sha ac01289593559d32082e8f90265d3aa0f7971019

Pipework in chart, dockerfile updates

push time in a month

created tag evermind/docker-froxlor

tag v0.2.0

Froxlor based webhosting on docker

created time in a month

push event evermind/docker-froxlor

Michael Wyraz

commit sha d99b3162a08c4681c910bef7ef045f4970a6d957

Chart version 0.2.0

push time in a month

issue opened kvaps/kube-pipework

Correct setting of host routes for single IPs (/32 netmask)

Hello, first I'd like to say thank you. I stumbled across this today and instantly switched from the env-based config to your new annotation-based config.

Unfortunately there's an issue with setting host routes for single IPs (i.e. the netmask is /32). This issue is also present on dreamcat4/pipework. The problem is that the script tries to set up a new network including routing, which fails for /32. In https://github.com/kvaps/kube-pipework/blob/master/entrypoint.sh#L226 sipcalc returns nothing, and the following ping command either fails or simply hangs.

For a single IP, it's sufficient to add a route to the host device to make routing work (e.g. route add -host 1.2.3.4 dev eth0), so none of the IP calculation is required. This could even work with a subnet (I can't see why it should not), so the whole macvlan creation and related setup may not be necessary at all.
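
As a sketch, the per-IP case could then boil down to a single command on the host (interface name and IP are examples):

# iproute2 form of `route add -host 1.2.3.4 dev eth0`:
# route the container's single public IP to the host-side interface, no subnet math needed
ip route add 1.2.3.4/32 dev eth0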

created time in a month

issue comment dreamcat4/docker-images

Pipework: pipework commands not run when container is restarted

Got the same issue - how did you fix this?

schmurfy

comment created time in a month

issue closed Mailu/helm-charts

[BUG] mysql database schema and user for Mailu are not created at startup

Description: At first startup, no user or database for Mailu is created. So startup without an existing database does not work.

Environment

  • Kubernetes 1.16.3 with Rancher

values.yaml like this:

database:
  type: mysql
  roundcubeType: mysql
  mysql:
    rootPassword: my-secret-secret
    database: mailu
    user: mailu
    password: my-secret-secret
    roundcubeDatabase: roundcube
    roundcubeUser: roundcube
    roundcubePassword: my-secret-secret

Additional context: For Roundcube I can see the database user and schema.

closed time in a month

tgruenert

issue comment Mailu/helm-charts

[BUG] mysql database schema and user for Mailu are not created at startup

Thank you, it's fixed now. Be aware that the database init scripts are only executed if the data dir is empty (i.e. on the first start of the container) - so after that, the database must be created manually.
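
For reference, creating the database and user by hand (matching the values.yaml from this issue) would look roughly like this; the exec target is only an example, use whatever gives you a MySQL root shell:

# sketch: create the Mailu database and user manually when the MySQL data dir already exists
# (the deployment name below is an assumption)
kubectl exec -i deploy/mailu-mysql -- mysql -uroot -p"$MYSQL_ROOT_PASSWORD" <<'EOF'
CREATE DATABASE IF NOT EXISTS mailu;
CREATE USER IF NOT EXISTS 'mailu'@'%' IDENTIFIED BY 'my-secret-secret';
GRANT ALL PRIVILEGES ON mailu.* TO 'mailu'@'%';
FLUSH PRIVILEGES;
EOF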

tgruenert

comment created time in a month

push event Mailu/helm-charts

Michael Wyraz

commit sha 2c996a2792ec8723b30cf2f39d7d92e8a5cf75e9

Fix #55

push time in a month

issue closed Mailu/helm-charts

Unable to mount volumes for pod

When deploying this to GKE with the standard storage class, users might face an issue with volumes like the following:

Unable to mount volumes for pod "mailu-postfix-7779bbcb88-4rvzm_mailu(b7dab7b3-42b1-4fb0-8e87-539ca7b04539)"

RollingUpdate will also fail because of that.

This is probably worth mentioning in the readme as a warning, or a solution could be provided.

closed time in a month

andrewnazarov

issue closed Mailu/helm-charts

mailu postfix cannot start and when it does, cannot send or receive email

Hello, I used this chart to install the Mailu server and the installation seemed to go well, as every pod started and proceeded with its setup. Now I'm facing several issues:

  • mailu-postfix cannot start, with the error: Mar 24 10:06:29 mail postfix/postfix-script[55]: fatal: the Postfix mail system is already running
  • when I log in to Roundcube, it cannot send mail (error 220 authentication) and cannot retrieve mail; for a user I get these kinds of errors

`imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: chdir(/mail/admin@xxxxx.com) failed: Stale file handle

Mar 24 10:05:00 imap(admin@xxxxx.com)<41268><wox74pah3KgKKgtQ>: Info: Logged out in=102 out=888 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0

Mar 24 10:05:00 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: stat(/mail/admin@xxxxx.com/tmp) failed: Invalid argument

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: stat(/mail/admin@xxxxx.com/tmp) failed: Invalid argument

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: stat(/mail/admin@sxxxxx.com) failed: Stale file handle

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: stat(/mail) failed: Stale file handle

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: opendir(/mail/admin@xxxxx.com) failed: Stale file handle

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Error: Failed to get quota resource STORAGE: quota-count: Listing namespace '' failed: opendir(/mail/admin@xxxxx.com) failed: Stale file handle

Mar 24 10:05:01 imap(admin@xxxxx.com)<41270><1v184pah3qgKKgtQ>: Info: Logged out in=114 out=820 deleted=0 expunged=0 trashed=0 hdr_count=0 hdr_bytes=0 body_count=0 body_bytes=0

Mar 24 10:05:06 auth: Debug: auth client connected (pid=41279)`

Is it normal that the volume mounts are so different from the Mailu setup in the Kubernetes documentation here?

For example, in the helm chart the data volume is mounted at the subpath "dovecotdata", while in the Kubernetes docs the volume is mounted at the same subpath as the other pods such as mail-admin etc. ... The volumes for this chart are very different from the documentation, while it seems to be the same images.

I think, for example, that these IMAP errors are caused by this mismatch in the dovecot volumes: in the chart the subpath for mail is "dovecotmail", while in the Mailu Kubernetes doc it is "mailstate". Does someone have the Mailu server working correctly with this chart? How should it be configured? Thanks

closed time in a month

mamiapatrick

issue comment Mailu/helm-charts

mailu postfix cannot start and when it does, cannot send or receive email

Closing due to lack of response.

mamiapatrick

comment created time in a month

issue closed Mailu/helm-charts

[BUG]

Describe the bug: Readiness & liveness probes failed. Connection refused in admin. Services unavailable.

Environment

  • Kubernetes Platform

values.yaml file content:

  domain: xxx.com
  hostnames:
    - mail.xxx.com
  clusterDomain: xxx.com
  initialAccount:
    domain: mail.xxx.com
    password: xxx
    username: xxx
  logLevel: INFO
  mail:
    authRatelimit: 100/minute;3600/hour
    messageSizeLimitInMegabytes: 200
  secretKey: xxx
  ingress:
    tlsFlavor: letsencrypt
  persistence:
    existingClaim: mailu-volume-claim
    storageClass: standart
  certmanager:
    issuerType: ClusterIssuer
    issuerName: letsencrypt-prod
  subnet: 10.1.34.0/16

Additional context: The mail system is not reachable at the hostname, nor are admin, postfix, ... Tried with different subnets but with no results.

admin pod warnings: Readiness probe failed: Get http://10.1.34.224:80/ui/login: dial tcp 10.1.34.224:80: connect: connection refused Liveness probe failed: Get http://10.1.34.235:80/ui/login: dial tcp 10.1.34.235:80: connect: connection refused

clamav pod warnings: Readiness probe failed: ping failed Liveness probe failed: ping failed

postfix warnings without messages

closed time in a month

zbagdzevicius

issue comment Mailu/helm-charts

[BUG]

I see. Nevertheless, I'll close this issue since it's neither a report of a particular bug nor a feature request. The issue tracker is not the right place to debug your installation. Please use the chat. You will find me there, as well as others who deploy on various k8s flavors.

zbagdzevicius

comment created time in a month

issue comment Mailu/helm-charts

[BUG] Documentation about sample load balancer

A proper solution would be to use the proxy protocol (https://www.digitalocean.com/docs/kubernetes/how-to/configure-load-balancers/#proxy-protocol) - but unfortunately nginx's mail proxy module does not support it.

justinasjaronis

comment created time in a month

issue comment Mailu/helm-charts

[BUG] Documentation about sample load balancer

It breaks rate limits as well as spam detection and the opportunity to detect where a mail originated.

justinasjaronis

comment created time in a month

issue comment Mailu/Mailu

Collect issues and solutions for proper preservation of client IPs

One possible solution for some cloud providers (Digital Ocean, maybe others?) is described in https://github.com/Mailu/Mailu/issues/1370#issuecomment-612439037

micw

comment created time in a month

issue comment Mailu/helm-charts

[BUG] Documentation about sample load balancer

One possible solution for some cloud providers (Digital Ocean?) is described in https://github.com/Mailu/Mailu/issues/1370#issuecomment-612439037

justinasjaronis

comment created time in a month

issue opened Mailu/Mailu

Collect issues and solutions for proper preservation of client IPs

Mailu works only if the original client IP is known to the services. That means it must be known by mailu-front (behind it, this behaviour is ensured by Mailu itself). For HTTP/S this can easily be achieved by trusting proxy headers. For mail protocols it's way more difficult.

The default case is: IPv4, exposing the mail ports of the "front" container directly (e.g. by using a "port" mapping on Docker or by using "hostPort" on Kubernetes).

Prone to issues are: IPv6, various cloud providers, reverse proxies. As a result, deployments often end up as open relays.

This issue is to collect different situations and solutions for that problem so that we can see how we can address it.

created time in a month

created tag evermind/docker-elasticsearch-modules-repo-s3-ingest-attachment

tag 7.7.0

ElasticSearch With modules for Snapshot into S3 and ingest for attachments

created time in a month

push event evermind/docker-elasticsearch-modules-repo-s3-ingest-attachment

Michael Wyraz

commit sha fdb9337098d8d69f3ab6198d37f4fe54ccceb3ab

bump version to 7.7.0

view details

push time in a month

issue comment Mailu/helm-charts

Feature request: allow individual PVCs for each pod

@omitrowski a PR that solves this (by using multiple PVs) is very welcome. Please think about the different options the current chart allows for PVs - the change should be as compatible as possible.

unixfox

comment created time in 2 months

issue comment Mailu/helm-charts

Feature request: allow individual PVCs for each pod

Furthermore, I would highly recommend to keep such files like in /queue/pid/* away from persistent volumes as mentioned below in #54

I disagree; the PID is used for locking, which is required for data consistency. No two instances of postfix may access the queue at the same time. Nevertheless, a proper cleanup must be done on startup.

unixfox

comment created time in 2 months

issue comment Mailu/helm-charts

postfix crashes

Your questions about the volume are already covered in https://github.com/Mailu/helm-charts/issues/39 . The NFS stuff for dovecot is also documented in the Mailu docs and there's a pending issue here: https://github.com/Mailu/helm-charts/issues/22 .

justinasjaronis

comment created time in 2 months

issue comment Mailu/helm-charts

Feature request: allow individual PVCs for each pod

I don't think so. Postfix uses the dovecot socket to store mail, so the mail folder must only be accessible by dovecot. From what I know, there's no need to share any filesystem between pods.

unixfox

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG] no mechanism to automatically clean up master.pid after postfix pod crash

The solution is easy. If postfix is running, the PID file is locked. So on startup we can simply test whether that pidfile exists. If so, we do:

flock -n master.pid rm master.pid

This command removes the pid file if it's not locked (meaning no other postfix instance is running on it); otherwise it fails.
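
A minimal sketch of that startup cleanup, assuming the pid file lives at /queue/pid/master.pid as described in this issue:

# run in the container entrypoint before starting postfix:
# remove a stale master.pid left over from a crashed pod, but leave it alone
# (flock fails) if a running postfix instance still holds the lock
PIDFILE=/queue/pid/master.pid
if [ -f "$PIDFILE" ]; then
    flock -n "$PIDFILE" rm -f "$PIDFILE"
fi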

justinasjaronis

comment created time in 2 months

issue closed Mailu/helm-charts

[BUG] no mechanism to automatically clean up master.pid after postfix pod crash

Describe the bug: If the pod crashes abruptly (i.e. some internal postfix error) or postfix is killed without a clean shutdown, /queue/pid/master.pid blocks future startups of postfix in the newly created pod. Shouldn't the persistence of the /queue/pid folder match the container's lifetime?

Environment: Digital Ocean + PVC on NFS

closed time in 2 months

justinasjaronis

issue comment Mailu/helm-charts

[BUG] no mechanism to automatically clean up master.pid after postfix pod crash

(closing it here, we will track this in https://github.com/Mailu/Mailu/issues/1483)

justinasjaronis

comment created time in 2 months

issue comment Mailu/Mailu

postfix throttling afer a hard shutdown

I'm reopening this due to https://github.com/Mailu/helm-charts/issues/54. It seems to affect others as well.

ofthesun9

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG] no mechanism to automatically clean up master.pid after postfix pod crash

It should. There was a discussion about it in mailu chat a few weeks ago and an (already closed) issue at https://github.com/Mailu/Mailu/issues/1483. Please post to that issue if it's reproducible for you.

justinasjaronis

comment created time in 2 months

issue comment Mailu/helm-charts

postfix crashes

Hello, please use this tracker only for helm chart related issues. If there are any issues with mailu images, you can post to https://github.com/Mailu/Mailu

justinasjaronis

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG]

BTW, which cloud provider are you using?

zbagdzevicius

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG]

One thing I just saw is "tlsFlavor: letsencrypt" - that won't work with certmanager. The option was added by a PR recently. Unfortunately the doc was a bit misleading - I updated it a few days ago. You need to use the default "cert" here. Probably not the only issue in your setup, but at least one.
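
For reference, with certmanager issuing the certificates the relevant settings would look roughly like this (issuer names taken from your values, everything else unchanged):

# sketch: keep the default tlsFlavor and let certmanager handle the certificate
helm upgrade --install mailu mailu/mailu \
  --set ingress.tlsFlavor=cert \
  --set certmanager.issuerType=ClusterIssuer \
  --set certmanager.issuerName=letsencrypt-prod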

zbagdzevicius

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG]

Any results?

zbagdzevicius

comment created time in 2 months

issue closed Mailu/helm-charts

The list of supported Mailu versions is missing in the README.md

Is this helm chart only compatible with the master branch or all versions? If it is only compatible with master or specific versions it should be mentioned in the Prerequisites, thanks!

closed time in 2 months

berni2288

push event Mailu/helm-charts

micw

commit sha 74b7e403abbba3f75665bf0deb79e7cac01a3ee2

Update README.md

push time in 2 months

push event Mailu/helm-charts

Travis CI User

commit sha 31457d278b72618f4b6c1f1a75afa20043515f5c

Travis build

push time in 2 months

issue comment Mailu/helm-charts

The list of supported Mailu versions is missing in the README.md

Currently only "master" is supported since it contains a lot of k8s-specific changes. 1.8 will be the first release with full k8s support. I'll add it to the README.

berni2288

comment created time in 2 months

issue comment Mailu/helm-charts

Allow option to specify relay method via SMTPD/SMTPS port instead of straight SMTP

What is the use case for a mail server that cannot do outbound connections to port 25? Most mail servers out there won't accept unauthenticated traffic on the "alternative" ports like 587 or 465. Also see https://stackoverflow.com/questions/41179258/how-to-enable-smtp-port-25-in-google-cloud-redhat-7-instance for an in-depth discussion.

Besides that, support in mailu will be needed before such a feature could be added to the chart.

huang-jy

comment created time in 2 months

push event Mailu/helm-charts

Travis CI User

commit sha 14cd7d76c94739b74e7f988cbd4c2de27e935839

Travis build

push time in 2 months

push event Mailu/helm-charts

micw

commit sha 23a4da4914c218f1c93d5f875b09d28676de983e

Update README.md

push time in 2 months

issue comment arakelian/docker-junit-rule

3.5.0: NoClassDefFoundError repackaged/com/arakelian/docker/junit/org/apache/commons/codec/binary/Base64

Update: I tried every single version, starting from 3.5.0 and going backward; the issue occurs in all versions starting with 2.2.0 (including 2.3.0).

micw

comment created time in 2 months

issue opened arakelian/docker-junit-rule

3.5.0: NoClassDefFoundError repackaged/com/arakelian/docker/junit/org/apache/commons/codec/binary/Base64

Hello, this seems to be a regression of #4, same issue:

java.lang.NoClassDefFoundError: repackaged/com/arakelian/docker/junit/org/apache/commons/codec/binary/Base64
	at com.spotify.docker.client.DefaultDockerClient.authHeader(DefaultDockerClient.java:2869)
	at com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1344)
	at com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1318)
	at com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1312)
	at com.arakelian.docker.junit.Container.pullImage(Container.java:527)
	at com.arakelian.docker.junit.Container.start(Container.java:287)
	at com.arakelian.docker.junit.DockerRule$StatementWithDockerRule.evaluate(DockerRule.java:75)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:84)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:251)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)
	at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
	at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:40)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
	at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:71)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:229)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:197)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:211)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:191)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:137)
	at org.eclipse.jdt.internal.junit5.runner.JUnit5TestReference.run(JUnit5TestReference.java:98)
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:41)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:542)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:770)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:464)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210)
Caused by: java.lang.ClassNotFoundException: repackaged.com.arakelian.docker.junit.org.apache.commons.codec.binary.Base64
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 47 more


created time in 2 months

issue comment Mailu/helm-charts

[BUG] Multiattach Error

"front" is nginx. That's the one that needs to be exposed. See the "hostPort" directives in the front pod which ports are exposed. If you cannot use "hostPort" (I guess you cannot on GKE), you need to exposie it in a different way, usually through a load balance. I'm not familar with GKE, so I cannot tell how exactly it must be done. It probably needs change to the chart, I'm open for suggestions (there was already a feature request which was closed due to lack of feedback).

Important! If you expose the ports with other than "hostPort", you need to ensure that the originating ip address is passed down to nginx. Otherwise you will create an open relay that will be blacklisted within hours.
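
One possible direction is a LoadBalancer service with externalTrafficPolicy: Local, which keeps the client source IP. This is only a sketch; the selector label and the port list are assumptions and have to be checked against the chart:

# sketch: expose "front" through a LoadBalancer instead of hostPort
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mailu-front-lb
  namespace: mailu
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserves the originating client IP
  selector:
    app: mailu-front             # assumed label, check the front pod's actual labels
  ports:
    - name: smtp
      port: 25
      targetPort: 25
    - name: imaps
      port: 993
      targetPort: 993
EOF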

huang-jy

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG] Multiattach Error

Yes, nginx proxies everything and also does auth for mail protocols. Other services must not be exposed (since they are not secured)

huang-jy

comment created time in 2 months

push event micw/davis-to-gsm

Michael Wyraz

commit sha 54d2a43a776005c2c3e7fd09c80aab0fedb3bc07

Lots of changes for current version

push time in 2 months

push event micw/davis-to-gsm

Michael Wyraz

commit sha d0c573267259bc8a4379d2980661e974aa5b1e96

Show voltages on webui

push time in 2 months

issue comment Mailu/helm-charts

[BUG] Multiattach Error

According to https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes, there is only a limitation on how many nodes a volume can be mounted on. So it should be allowed to attach a volume to more than one pod (even with ReadWriteOnce, if the pods run on one node). See also https://stackoverflow.com/questions/56592929/how-pods-are-able-to-mount-the-same-pvc-with-readwriteonce-access-mode-when-stor and https://github.com/kubernetes/kubernetes/issues/60903 for discussion around this. But that's not an issue of this particular helm chart.

For this chart there are 2 solutions: use a storage provider that supports mounting into many pods (or ReadWriteMany if you want to run on multiple nodes), or wait for #39 to be implemented (PRs are very welcome).

You also can ask for help in the "mailu" matrix channel, I'm pretty sure there are some users running on GKE.

huang-jy

comment created time in 2 months

issue closed Mailu/helm-charts

[BUG] Multiattach Error

Describe the bug: Pods will not become ready due to multi-attach errors.

Environment: GKE

Additional context: Attempting to set this up on a dev cluster using this:

helm upgrade \
  --install --wait --atomic \
  --namespace mailu \
  mailu mailu/mailu \
   --set hostnames={mail.domain.net} \
   --set domain=domain.net \
   --set ingress.tlsFlavor=letsencrypt \
   --set initialAccount.username=admin \
   --set initialAccount.domain=domain.net \
   --set initialAccount.password=XXXXX \
   --set secretKey=XXXXX \
   --set subnet=172.16.0.0/12 \
   --set certmanager.issuerType=ClusterIssuer \
   --set certmanager.issuerName=letsencrypt-production

The chart installs, but times out due to this:

Multi-Attach error for volume "pvc-01188ee8-96be-11ea-897a-42010a9a00e6" Volume is already used by pod(s) mailu-admin-76f5b54765-cs8kt, mailu-postfix-546cdf5ff7-5t89f

Those pods never become ready; helm times out on the wait and purges the chart as it did not install successfully.

closed time in 2 months

huang-jy

issue comment Mailu/helm-charts

[BUG] Multiattach Error

Hello, you need to set persistence.accessMode to ReadWriteMany (see https://github.com/Mailu/helm-charts/blob/master/mailu/README.md#pvc-with-automatic-provisioning) if you run on multiple nodes (and probably also for some PV implementations on one node).

There's currently only one PV that's shared across pods (see also #39).
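
With the install command from the issue, that is roughly (only the additional option shown, keep your other --set flags):

# sketch: request a volume that several pods (and nodes) may attach
helm upgrade --install --namespace mailu mailu mailu/mailu \
  --set persistence.accessMode=ReadWriteMany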

huang-jy

comment created time in 2 months

issue comment Mailu/helm-charts

[BUG]

Probably not a subnet issue. Check the logs of the pods. If there are no errors, get a shell to the pods and check if the services respond at the given ports.

zbagdzevicius

comment created time in 2 months

issue opened Mailu/Mailu

Proposal: Test for correct IP configuration (avoid open relay)

It seems that users accidentally create open relays with Mailu, especially in setups where load balancers or proxies are involved. The reason in most cases is that requests seem to come from $SUBNET because the originating IP gets lost at the load balancer level. What do you think about an automatic check that prevents this and enables email only after the check has passed? The check could be repeated at an interval, blocking any email activity if it fails.

Such a check would require external "help", e.g. a kind of service that is called by Mailu and connects from a known IP to one of the email services (like SMTP). On nginx we would then see whether the originating IP or some other IP shows up. Instead of SMTP, we could also use a dedicated port.
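
A hand-rolled sketch of such a probe (not something Mailu ships today): from an external host, try to relay for a foreign domain and check the answer to RCPT TO:

# rough manual relay check from a known external IP; a correct setup answers
# "Relay access denied" to the RCPT TO, an open relay answers 250
openssl s_client -quiet -connect mail.example.com:25 -starttls smtp <<'EOF'
EHLO relay-probe.example
MAIL FROM:<probe@relay-probe.example>
RCPT TO:<someone@unrelated-domain.example>
QUIT
EOF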

The check could of course be configurable, enabled by default but easy to switch off via config variable.

A similar check could also be created to verify that the IP address of outgoing mails matches the configured IP for a domain.

created time in 2 months

issue comment Mailu/helm-charts

mailu postfix cannot start and when it does, cannot send or receive email

Please provide further information; without it I'll close the issue soon.

mamiapatrick

comment created time in 2 months

issue comment Mailu/helm-charts

Unable to mount volumes for pod

Docs are updated. If there are no further concerns, I'll close the issue.

andrewnazarov

comment created time in 2 months
