If you are wondering where the data on this site comes from, please visit https://api.github.com/users/Slach/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

Altinity/clickhouse-operator 683

The ClickHouse Operator creates, configures and manages ClickHouse clusters running on Kubernetes

AlexAkulov/clickhouse-backup 503

Tool for easy ClickHouse backup and restore with cloud storages support

Slach/clickhouse-flamegraph 54

CLI utility for building flamegraphs based on system.trace_log

Altinity/clickhouse-zabbix-template 38

Zabbix template for ClickHouse

Slach/clickhouse-zetcd 18

Test stand for creating a ClickHouse test cluster via Docker and zetcd

Slach/clickhouse-metrics-grafana 8

ClickHouse Grafana dashboard with Graphite storage

Altinity/clickhouse-backup 6

Tool for easy ClickHouse backup and restore with cloud storages support

Altinity/dbench 3

Benchmark Kubernetes persistent disk volumes with fio: Read/write IOPS, bandwidth MB/s and latency

Altinity/clickhouse-grafana 1

ClickHouse datasource for Grafana

Altinity/grafana-plugin-repository 1

The plugin repository for plugins that are published on grafana.com.

issue comment AlexAkulov/clickhouse-backup

`upload` causes OOM kill - the whole zipped backup is less than 1GB

@fzyzcjy sorry for the late reply, is this bug still relevant for you?

Could you try the latest https://github.com/Altinity/clickhouse-backup/releases/tag/1.1.1? Which remote storage type do you use?

Could you share your config.yml with the private credentials masked?

fzyzcjy

comment created time in a minute

issue comment AlexAkulov/clickhouse-backup

Folder download from a GCS bucket fails with 'is not found on remote storage'

@sushraju sorry for the late reply,

could you try the latest 1.1.0+ clickhouse-backup? Is the issue still relevant with the latest version?

Could you share your current config.yml without private credentials?

sushraju

comment created time in 4 minutes

issue closed AlexAkulov/clickhouse-backup

remote_storage: none leads to config validation failure

With remote_storage set to 'none', all functions exit with: "error 'none' is unsupported compression format"

I tracked it down to line 201 of config/config.go, where the check on remote_storage was removed in commit ab563711c89a781d58fabf8839b934ca46e37caf
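For reference, a minimal sketch in Go of the kind of remote_storage validation the removed check could perform, so that 'none' is accepted while unknown values fail fast with a relevant message (the function name and backend list here are illustrative assumptions, not the project's actual code):

package config

import "fmt"

// validRemoteStorages is an assumed backend list for this sketch;
// "none" must stay in it so local-only setups pass validation.
var validRemoteStorages = map[string]bool{
    "none": true, "s3": true, "gcs": true, "azblob": true, "sftp": true, "ftp": true,
}

// validateRemoteStorage fails fast on unknown values instead of letting them
// fall through to an unrelated "unsupported compression format" error.
func validateRemoteStorage(remoteStorage string) error {
    if !validRemoteStorages[remoteStorage] {
        return fmt.Errorf("unknown remote_storage: %q", remoteStorage)
    }
    return nil
}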

closed time in 7 minutes

widders

issue comment AlexAkulov/clickhouse-backup

remote_storage: none leads to config validation failure

@widders did you try the latest 1.1.0+ clickhouse-backup release?

I tried to reproduce your issue with the following config:

general:
  remote_storage: none

clickhouse-backup list and clickhouse-backup list local work OK; all commands related to remote_storage abort with a proper error message.

I'm closing this issue; feel free to comment or reopen if necessary

widders

comment created time in 7 minutes

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

OK, could you share the result of clickhouse-client -q "SELECT count() FROM system.parts"? The count is 172716.

OK, could you precisely measure the execution time of the following query?

Let's run:

kubectl exec -n ch-prod chi-repl-prod-prdc01-0-0-0 -c clickhouse-pod -- time clickhouse-client --echo --progress -mn -q "
SELECT 'metric.DiskDataBytes' AS metric, toString(sum(bytes_on_disk)) AS value, 'Total data size for all ClickHouse tables' AS description, 'gauge' AS type FROM system.parts;

SELECT 'metric.MemoryPrimaryKeyBytesAllocated' AS metric, toString(sum(primary_key_bytes_in_memory_allocated)) AS value, 'Memory size allocated for primary keys' AS description, 'gauge' AS type FROM system.parts;

SELECT database, table, toString(active) AS active, toString(uniq(partition)) AS partitions, toString(count()) AS parts, toString(sum(bytes)) AS bytes, toString(sum(data_uncompressed_bytes)) AS uncompressed_bytes, toString(sum(rows)) AS rows FROM system.parts GROUP BY active, database, table;
"

and let's analyze the partition row distribution; could you share:

SELECT database, table, count(), sum(rows), quantiles(0.5,0.9)(rows) FROM system.parts GROUP BY database, table;

sagarshrestha24

comment created time in 44 minutes

issue closed AlexAkulov/clickhouse-backup

I have a ClickHouse cluster with 2 shards, 1 replica; when I run clickhouse-backup I get an error

./clickhouse-backup --config config.yml create tables 2021/08/31 03:28:42 error can't get tables from clickhouse: code: 999, message: Session expired (Session expired): While executing Tables

The ClickHouse cluster has 3 ClickHouse instances deployed via Docker (shard1/shard2/replica1), and some of the tables use the Distributed engine. The relevant part of the cluster's config.xml was attached as a screenshot:

[screenshot of config.xml omitted]

closed time in an hour

cntsp

issue comment AlexAkulov/clickhouse-backup

I have a ClickHouse cluster with 2 shards, 1 replica; when I run clickhouse-backup I get an error

@cntsp is the issue still relevant for you? Did you try the latest 1.1.0+ releases?

Closing this issue due to inactivity; feel free to reopen if necessary

cntsp

comment created time in an hour

issue closed AlexAkulov/clickhouse-backup

I can't restore my ClickHouse backup file using the clickhouse-backup tool

Hi, I have used the clickhouse-backup tool to take a backup of my ClickHouse database. It works fine. But when I tried to restore my backup DB on another server, I faced the following issue: "error code: 336, message: Unknown database engine: ENGINE = Memory()"

closed time in an hour

sivasankarananth

issue comment AlexAkulov/clickhouse-backup

I can't restore my ClickHouse backup file using the clickhouse-backup tool

@sivasankarananth is the issue still relevant for you? Did you try the latest 1.1.0+ releases? Closing this issue due to inactivity; feel free to reopen if necessary

sivasankarananth

comment created time in an hour

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

OK, could you share the result of clickhouse-client -q "SELECT count() FROM system.parts"?

How do you ingest data into your ClickHouse? How many rows does each INSERT statement contain? Let's check:

SELECT ProfileEvent_InsertedRows, ProfileEvent_InsertQuery, event_time 
FROM system.metric_log
WHERE event_date = toDate(now()) ORDER BY event_time DESC LIMIT 60
sagarshrestha24

comment created time in 2 hours

push event Altinity/clickhouse-operator

Slach

commit sha a42d3ca922227cf2a1c84c7707b543b379695dd7

update test image altinit/clickhouse-backup:1.1.0, some code cleanup in `clickhouse_fetcher.go`

push time in 2 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

@sagarshrestha24 regarding "Expected behavior: stuck in one query", please share which query exactly gets stuck

sagarshrestha24

comment created time in 2 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

OK, great, we found the root cause: one of the SQL monitoring queries takes too much time. Let's check what exactly is wrong with your metrics data.

Could you check the following SQL queries in each clickhouse-pod:

chi-repl-prod-prdc01-0-0-0
chi-repl-prod-prdc01-0-1-0
chi-repl-prod-prdc01-1-0-0
chi-repl-prod-prdc01-1-1-0

for example:

kubectl exec -n ch-prod chi-repl-prod-prdc01-0-0-0 -c clickhouse-pod -- clickhouse-client --echo --progress -mn -q "
SELECT
    database,
    table,
    toString(is_session_expired) AS is_session_expired
FROM system.replicas;

SELECT
    concat('metric.', metric) AS metric,
    toString(value)           AS value,
    ''                        AS description,
    'gauge'                   AS type
FROM system.asynchronous_metrics;

SELECT
    concat('metric.', metric) AS metric,
    toString(value)           AS value,
    ''                        AS description,
    'gauge'                   AS type
FROM system.metrics;

SELECT
    concat('event.', event)   AS metric,
    toString(value)           AS value,
    ''                        AS description,
    'counter'                 AS type
FROM system.events;

SELECT
    'metric.DiskDataBytes'                      AS metric,
    toString(sum(bytes_on_disk))                AS value,
    'Total data size for all ClickHouse tables' AS description,
    'gauge'                                     AS type
FROM system.parts;

SELECT
    'metric.MemoryPrimaryKeyBytesAllocated'              AS metric,
    toString(sum(primary_key_bytes_in_memory_allocated)) AS value,
    'Memory size allocated for primary keys'             AS description,
    'gauge'                                               AS type
FROM system.parts;

SELECT
    'metric.MemoryDictionaryBytesAllocated'  AS metric,
    toString(sum(bytes_allocated))           AS value,
    'Memory size allocated for dictionaries' AS description,
    'gauge'                                  AS type
FROM system.dictionaries;

SELECT
    'metric.LongestRunningQuery' AS metric,
    toString(max(elapsed))       AS value,
    'Longest running query time' AS description,
    'gauge'                      AS type
FROM system.processes;

SELECT
    'metric.ChangedSettingsHash'                  AS metric,
    toString(groupBitXor(cityHash64(name,value))) AS value,
    'Control sum for changed settings'            AS description,
    'gauge'                                       AS type
FROM system.settings WHERE changed;

SELECT
    database,
    table,
    toString(active)                       AS active,
    toString(uniq(partition))              AS partitions,
    toString(count())                      AS parts,
    toString(sum(bytes))                   AS bytes,
    toString(sum(data_uncompressed_bytes)) AS uncompressed_bytes,
    toString(sum(rows))                    AS rows
FROM system.parts
GROUP BY active, database, table;

SELECT
    database,
    table,
    count()          AS mutations,
    sum(parts_to_do) AS parts_to_do
FROM system.mutations
WHERE is_done = 0
GROUP BY database, table;

SELECT
    name,
    toString(free_space)  AS free_space,
    toString(total_space) AS total_space
FROM system.disks;

SELECT
    count() AS detached_parts,
    database,
    table,
    disk,
    if(coalesce(reason,'unknown')='','detached_by_user',coalesce(reason,'unknown')) AS detach_reason
FROM system.detached_parts
GROUP BY
    database,
    table,
    disk,
    reason
"
sagarshrestha24

comment created time in 3 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

My mistake: the URL should use ?query=SELECT instead of just ?SELECT.

If the last query returns a 404 status, it means the clickhouse-operator metrics-exporter container should show metrics.

Let's do a last check:

kubectl exec -n kube-system <clickhouse-operator-pod-name> -c metrics-exporter -- wget -qO- "http://clickhouse_operator:clickhouse_operator_password@chi-repl-prod-prdc01-0-0.ch-prod.svc.cluster.local:8123/?query=SELECT+count()+FROM+system.metrics"

kubectl exec -n kube-system <clickhouse-operator-pod-name> -c metrics-exporter -- wget -qO- http://127.0.0.1:8888/chi

kubectl exec -n kube-system <clickhouse-operator-pod-name> -c metrics-exporter -- wget -qO- http://127.0.0.1:8888/metrics
sagarshrestha24

comment created time in 4 hours

started alcideio/rbac-tool

started time in 4 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

@surajmachamasi check the SQL query with authorization:

kubectl exec -n kube-system <clickhouse-operator-pod-name> -c metrics-exporter -- wget -qO- "http://clickhouse_operator:clickhouse_operator_password@chi-repl-prod-prdc01-0-0.ch-prod.svc.cluster.local:8123/?SELECT+count()+FROM+system.metrics"

sagarshrestha24

comment created time in 4 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

Check the connection between metrics-exporter and clickhouse-server:

kubectl exec -n kube-system <clickhouse-operator-pod-name> -c metrics-exporter -- wget -qO- http://chi-repl-prod-prdc01-0-0.ch-prod.svc.cluster.local:8123/ping

Check the status of the pods:

kubectl get pods --all-namespaces | grep repl-prod

sagarshrestha24

comment created time in 5 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

@sagarshrestha24 could you share the result of the following command?

kubectl get pod -n ch-prod repl-prod-prdc01 -o yaml

sagarshrestha24

comment created time in 7 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

@sagarshrestha24 did you install clickhouse-operator into the kube-system namespace or into the ch-prod namespace?

sagarshrestha24

comment created time in 7 hours

issue comment Altinity/clickhouse-operator

clickhouse metrics is not showing

@sagarshrestha24 could you share the output of the following?

kubectl logs -n kube-system -l app=clickhouse-operator -c metrics-exporter --since=24h | grep -i fail

sagarshrestha24

comment created time in 8 hours

issue closed AlexAkulov/clickhouse-backup

A way to persist metrics after restart

It would be nice if clickhouse-backup could somehow preserve metrics between restarts. This would help with alerting.

Imagine alerting on the last backup date:

  - alert: ClickHouseBackupLastCreate
    expr: time() - clickhouse_backup_last_create_start > 172800
    labels:
      severity: warning
    annotations:
      summary: "Last backup is more than 2 days old at `{{ $labels.instance }}`"
      description: "Last backup created {{ $value | humanizeDuration }} ago at `{{ $labels.instance }}`"

Currently it will trigger after a clickhouse-backup restart, since all metrics will be reinitialized. I can change the alert expression to something like expr: time() - clickhouse_backup_last_create_start > 172800 < 31557600 as a workaround, but still... Having backup statistics in a Grafana dashboard is also nice.

Maybe the statistics could be saved to a file and loaded at start if the file is present? And if the file is not present, it would be created and initialized with zeroes.
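A minimal sketch of that idea in Go, assuming a flat set of gauge values keyed by metric name (the file path and helper names are hypothetical, not clickhouse-backup's actual internals; only the metric name comes from the alert above):

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

// metricsFile is a hypothetical path for this sketch.
const metricsFile = "/var/lib/clickhouse-backup/metrics.json"

// loadMetrics returns the persisted gauge values, or zero-initialized
// values when the file does not exist yet (first start).
func loadMetrics(names []string) (map[string]float64, error) {
    m := make(map[string]float64, len(names))
    for _, n := range names {
        m[n] = 0 // "initialized with zeroes" when there is no file yet
    }
    data, err := os.ReadFile(metricsFile)
    if os.IsNotExist(err) {
        return m, nil
    }
    if err != nil {
        return nil, err
    }
    if err := json.Unmarshal(data, &m); err != nil {
        return nil, err
    }
    return m, nil
}

// saveMetrics writes the current gauge values so a restart can restore them.
func saveMetrics(m map[string]float64) error {
    data, err := json.Marshal(m)
    if err != nil {
        return err
    }
    return os.WriteFile(metricsFile, data, 0o644)
}

func main() {
    m, err := loadMetrics([]string{"clickhouse_backup_last_create_start"})
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(m)
    _ = saveMetrics(m)
}

The restored (or zeroed) values would then seed the Prometheus gauges at server start, so an alert like the one above would survive restarts.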

closed time in 9 hours

nahsi

issue comment Altinity/clickhouse-operator

Persistent storage is not working on my Mac

@everydots

everydots

comment created time in a day

issue comment Altinity/clickhouse-operator

Persistent storage is not working on my Mac

kubectl get chi --all-namespaces

everydots

comment created time in a day

issue comment Altinity/clickhouse-operator

Persistent storage is not working on my Mac

Could you share:

kubectl get pvc --all-namespaces
kubectl get pv --all-namespaces
everydots

comment created time in a day

issue comment Altinity/clickhouse-operator

Persistent storage is not working on my Mac

Which storage provisioner do you use in your Kubernetes installation? Do you use minikube or something else?

Could you share:

kubectl get storageclasses --all-namespaces
kubectl get csidriver --all-namespaces
everydots

comment created time in a day

issue comment AlexAkulov/clickhouse-backup

Dedicated clickhouse-backup server restores only the schema, not the data

When you start the clickhouse-backup server, it loads /etc/clickhouse-backup/config.yml, and if matching environment variables exist, they override the config values.

Please use:

general:
  remote_storage: sftp

instead of

general:
  remote: sftp

Also, try the altinity/clickhouse-backup:1.1.1 image.
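To illustrate the load order described above, a minimal sketch in Go of the file-then-environment override pattern (the REMOTE_STORAGE variable name is an assumption for illustration; check the project's README for the exact names, and this is not the tool's actual code):

package main

import (
    "fmt"
    "log"
    "os"

    "gopkg.in/yaml.v2"
)

// Config mirrors the general: section shown above, trimmed to one field.
type Config struct {
    General struct {
        RemoteStorage string `yaml:"remote_storage"`
    } `yaml:"general"`
}

func main() {
    // 1. Load the config file, as the server does with /etc/clickhouse-backup/config.yml.
    data, err := os.ReadFile("/etc/clickhouse-backup/config.yml")
    if err != nil {
        log.Fatal(err)
    }
    var cfg Config
    if err := yaml.Unmarshal(data, &cfg); err != nil {
        log.Fatal(err)
    }

    // 2. Matching environment variables win over file values.
    // REMOTE_STORAGE is assumed here for illustration only.
    if v, ok := os.LookupEnv("REMOTE_STORAGE"); ok {
        cfg.General.RemoteStorage = v
    }
    fmt.Println("effective remote_storage:", cfg.General.RemoteStorage)
}

This is why a misspelled key like remote: in the file is silently ignored while an environment variable can still flip the effective value.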

LVH-27

comment created time in a day

issue comment Vertamedia/clickhouse-grafana

Alerts use wrong timestamps for queries

Which Grafana version and plugin version do you use?

ertong

comment created time in a day

issue comment AlexAkulov/clickhouse-backup

A way to persist metrics after restart

A file is the worst choice for persistent metrics storage in containerized environments, and any external storage adds dependencies and support complexity.

I prefer the combination of the following backup alerts:

        - alert: ClickhouseBackupDoesntRunTooLong
          expr: |-
            (clickhouse_backup_last_backup_end > 0 and time() - clickhouse_backup_last_backup_end > 129600)
            or (clickhouse_backup_last_create_finish > 0 and time() - clickhouse_backup_last_create_finish > 129600)
            or (clickhouse_backup_last_upload_finish > 0 and time() - clickhouse_backup_last_upload_finish > 129600)
            
        - alert: ClickHouseRemoteBackupSizeZero
          for: "36h"
          expr: clickhouse_backup_last_backup_size_remote == 0

See https://github.com/Altinity/clickhouse-operator/blob/master/deploy/prometheus/prometheus-alert-rules-backup.yaml for details.

nahsi

comment created time in a day

started arttor/helmify

started time in a day

started ceache/zab_dissector

started time in 2 days