gunzip, k8up, ‘unexpected end of file’

Problem

When using the k8up annotations for a PostgreSQL backup with:

k8up.io/backupcommand: /bin/bash -c 'pg_dumpall --clean | gzip --stdout'

the backup does not finish correctly, and as a result gunzip complains about "unexpected end of file" when decompressing it.

Solution

There is a workaround: first save the compressed dump to a file, then send it to the standard output with

k8up.io/backupcommand: /bin/bash -c 'pg_dumpall --clean | gzip -c > /tmp/backup.gz && cat /tmp/backup.gz && rm /tmp/backup.gz'
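
For context, the annotation goes on the database pod itself, usually via the pod template of its Deployment. Below is a minimal sketch; the Deployment name, labels and image are illustrative, not from the original post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
      annotations:
        # k8up runs this command inside the pod and backs up its stdout
        k8up.io/backupcommand: >-
          /bin/bash -c 'pg_dumpall --clean | gzip -c > /tmp/backup.gz
          && cat /tmp/backup.gz && rm /tmp/backup.gz'
        # file extension for the resulting backup object
        k8up.io/file-extension: .sql.gz
    spec:
      containers:
        - name: postgres
          image: postgres:15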

Thanks to Simon Beck for the suggestion: https://community.appuio.ch/channel/k8up/thread/j8AcG6ZjgGbQAzth5?msg=xgTanRqqJbBkNj9sd

Post Process Forwarder – KafkaError "Offset Out of Range" (Kubernetes – Sentry – Helm)

Problem

After upgrading a self-hosted instance of Sentry in Kubernetes with a Helm chart, one or more pods keep failing with the following error:

Post Process Forwarder - KafkaError "Offset Out of Range"

Solution

There is a section in Sentry's documentation that describes this issue and links to a comment with the steps to fix it.

The comment and its steps use the --bootstrap-server 127.0.0.1:9092 flag, which is the one that works.

It is also important to run the command against the consumer group you have the issue with (i.e. snuba-events-subscriptions-consumers) to fix this.

First find the kafka-0 pod in your Kubernetes cluster and log in to it:

kubectl -n sentry exec -it sentry-kafka-0 -- /bin/bash

Get a list of the consumer groups:

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --list
snuba-consumers
ingest-consumer
transactions_group
snuba-post-processor
snuba-events-subscriptions-consumers
subscriptions-commit-log-1de9aaa...
snuba-post-processor:sync:880fbbb...
subscriptions-commit-log-b755cccc...
snuba-replacers
query-subscription-consumer

Run the reset command for the group you have the issue with (snuba-events-subscriptions-consumers):

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --topic events --reset-offsets --to-latest --execute

GROUP                                 TOPIC   PARTITION  NEW-OFFSET
snuba-events-subscriptions-consumers  events  0          4834425
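
Before resetting, you can also inspect the group's current offsets and lag with the --describe flag; this is a quick sanity check run inside the same pod, not part of the original steps:

# Show current offset, log-end offset and lag per partition for the group
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 \
  --describe --group snuba-events-subscriptions-consumers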

Grafana blank graphs in dashboards after update

Upgrading Grafana (kube-prometheus-stack) from version 23.1 to the latest one (currently 44.2.1) causes many Grafana graphs to disappear from the existing dashboards.

Since editing the actual graphs and using 'Run queries' does not seem to work on its own, a workaround is the following:

  • Edit the blank graph.
  • Add a new query (B); even an empty one is fine.
  • Use 'Run queries' (it does not matter if you use the button on the original query or on the new query B).
  • Delete the new query B.
  • The graph should appear again, so use the 'Apply' button at the top right.
  • Repeat the process for any additional graphs in the dashboard.
  • When you finish, 'Save' the dashboard.
  • The dashboard should be working again.
PostgreSQL connection string for Percona PostgreSQL K8S operator

Since the documentation does not describe how to connect an existing application to the newly created Percona PGO cluster, you can use something like the following PostgreSQL connection string in your pod:

postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production

where cluster1.pgo-perc-production.svc.cluster.local points to your newly created cluster and /production is the database to connect to.
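
In practice you would inject this into your application, for example as an environment variable. A minimal sketch follows; the variable name DATABASE_URL and the inline credentials are illustrative (ideally they would come from the Secret the operator creates):

containers:
  - name: app
    image: myapp:latest
    env:
      # Hypothetical variable name; use whatever your framework expects
      - name: DATABASE_URL
        value: postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production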

Kubectl using -l a=something -l b=other or -l a=something,b=other

When you have two pods with different labels, say the first with two labels (a=something and b=other) and the second with only b=other, the way you pass the -l selector to kubectl makes a difference.

Using kubectl -n namespace get pods -l a=something -l b=other gives you back both pods: the repeated -l flag does not combine the selectors (the last one overrides the earlier ones), so this is effectively the same as -l b=other.

If you wanted to get only the first one that has both labels, but not the second, you would need to use it as in kubectl -n namespace get pods -l a=something,b=other.

In other words, the comma separator acts as a logical AND operator when selecting labels.
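
A quick way to see this behaviour for yourself (a throwaway sketch in a test namespace; pod names and image are illustrative):

# Two pods: the first with both labels, the second with only b=other
kubectl -n test run pod-one --image=nginx --labels="a=something,b=other"
kubectl -n test run pod-two --image=nginx --labels="b=other"

# Last -l wins, so this matches b=other and returns both pods
kubectl -n test get pods -l a=something -l b=other

# Comma is a logical AND, so this returns only pod-one
kubectl -n test get pods -l a=something,b=other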

sudo kubeadm init fails with: [ERROR Swap]: running with swap on is not supported. Please disable swap

Problem

You are trying to set up Kubernetes on your local machine, but initializing it with kubeadm init returns the following error:

[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution

Switch swap off (Ubuntu command below) and try again.

sudo swapoff -a
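
Note that swapoff -a only disables swap until the next reboot. To make the change permanent you also need to comment out the swap entry in /etc/fstab, for example with the sketch below (it assumes a GNU sed as found on Ubuntu; back up the file and review it before editing in place):

# Keep a copy of fstab, then comment out the line(s) mounting swap
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab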

WARN[0000] Your Skaffold version might be too old. Download the latest version (1.0.1) at https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64

Problem

You would like to follow the getting-started example for Skaffold by using the skaffold dev command, but you are getting the following warning:

kosmas:getting-started (master)$ skaffold dev
WARN[0000] Your Skaffold version might be too old. Download the latest version (1.0.1) at https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 

Solution

Download the latest release from https://github.com/GoogleContainerTools/skaffold/releases and follow the instructions for installing it on your system (e.g. on Linux):

 curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/v1.0.1/skaffold-linux-amd64 && chmod +x skaffold && sudo mv skaffold /usr/local/bin
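
You can then confirm that the new binary is the one on your PATH:

# Should now report the freshly installed version
skaffold version
which skaffold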

kubectl error: “The connection to the server localhost:8080 was refused – did you specify the right host or port?”

Problem

You want to use kubectl to get your cluster information, but you are getting the following error message:

$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution

That could happen if you have already set up kubectl before or have used a different environment.

First, unset the KUBECONFIG environment variable:

$ unset KUBECONFIG
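
As a quick sanity check (not part of the original steps), you can see which contexts kubectl now picks up from the default kubeconfig:

# List the contexts in the active kubeconfig; the current one is starred
kubectl config get-contexts
kubectl config current-context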

Then use the gcloud client to set up your environment again:

$ gcloud container clusters get-credentials your_cluster_name --zone europe-west2-a --project your_project_name
kubeconfig entry generated for your_cluster_name.

After this you should be able to use kubectl to get the cluster information:

$ kubectl cluster-info
Kubernetes master is running at https://xxx.xxx.xxx.xxx
GLBCDefaultBackend is running at https://xxx.xxx.xxx.xxx/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://xxx.xxx.xxx.xxx/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://xxx.xxx.xxx.xxx/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://xxx.xxx.xxx.xxx/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy