OpenTelemetry docker demo error for otelcol container

Problem

You are trying to set up the OpenTelemetry docker demo application but the otelcol container fails with errors similar to the following:

Error: failed to resolve config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otelcol-config.yml: open /etc/otelcol-config.yml: permission denied

Error: failed to resolve config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otelcol-config-extras.yml: open /etc/otelcol-config-extras.yml: permission denied

Solution

Change the permissions of the two configuration files in src/otelcollector to 644 and then stop and start the container.
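
For example, assuming you are at the root of the demo repository (the file names are the ones from the error messages, and otel-col is the container name used in the log command below; adjust if your setup differs):

chmod 644 src/otelcollector/otelcol-config.yml src/otelcollector/otelcol-config-extras.yml
docker stop otel-col && docker start otel-col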

Following the log file (docker logs -f otel-col) should then show something like the following instead of the previous errors:

2024-01-10T14:44:29.062Z	info	service@v0.91.0/telemetry.go:86	Setting up own telemetry...
2024-01-10T14:44:29.063Z	info	service@v0.91.0/telemetry.go:203	Serving Prometheus metrics	{"address": ":8888", "level": "Basic"}
2024-01-10T14:44:29.064Z	info	exporter@v0.91.0/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "logs", "name": "debug"}
2024-01-10T14:44:29.064Z	info	exporter@v0.91.0/exporter.go:275	Development component. May change in the future.	{"kind": "exporter", "data_type": "traces", "name": "debug"}

Create a password for Prometheus authentication

To create the password hash that can be used in basic authentication with Prometheus you can use the following:

htpasswd -nBC 10 "" | tr -d ':\n'


10 is the bcrypt cost. With current computing power, the cost used should usually be between 10 and 12. Increasing this number makes the hash more expensive to brute-force, at the expense of compute resources whenever the password is checked.

This can then be used in the configuration file like:

tls_server_config:
  cert_file: prometheus.crt
  key_file: prometheus.key
basic_auth_users:
  user: hhshdhdhdhhdhashedpassword
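
Assuming the snippet above is saved as a web configuration file (web.yml is just a hypothetical name here), Prometheus can be pointed at it with the --web.config.file flag:

prometheus --web.config.file=web.yml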

Note: Taken from the book ‘Prometheus: Up & Running’, 2nd edition, by Julien Pivotto and Brian Brazil (O’Reilly)

gunzip, k8up, ‘unexpected end of file’

Problem

When using the k8up backup command annotation for a PostgreSQL backup, such as:

k8up.io/backupcommand: /bin/bash -c 'pg_dumpall --clean | gzip --stdout'

the backup does not finish correctly, and as a result gunzip complains about "unexpected end of file" when decompressing it.

Solution

A workaround is to first save the file and then send it to standard output with:

k8up.io/backupcommand: /bin/bash -c 'pg_dumpall --clean | gzip -c > /tmp/backup.gz && cat /tmp/backup.gz && rm /tmp/backup.gz'
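
As a quick sanity check, a gzip archive can be tested for completeness without extracting it; a minimal sketch, where backup.gz stands for the restored dump:

gunzip -t backup.gz && echo "archive is complete"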

Thanks to Simon Beck for the suggestion: https://community.appuio.ch/channel/k8up/thread/j8AcG6ZjgGbQAzth5?msg=xgTanRqqJbBkNj9sd

Validating yaml to json conversion

When YAML is converted to JSON, as happens when Terraform’s helm provider consumes a Helm chart’s values.yaml file, you do not get any useful validation errors, even when your terraform plan runs.

To check for errors before running your plan, you can use yq on the values.yaml file as below:

yq -p yaml -o json values.yaml
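
A minimal sketch of turning this into a pre-plan check: yq exits with a non-zero status when the YAML does not parse, so the conversion doubles as a validation step (the echo is just illustrative):

yq -p yaml -o json values.yaml > /dev/null && echo "values.yaml parses OK"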

Post Process Forwarder – KafkaError “Offset Out of Range” (Kubernetes – Sentry – Helm)

Problem

After an upgrade of a self-hosted instance of Sentry in Kubernetes with a Helm chart, one or more pods keep failing with the following error:

Post Process Forwarder - KafkaError "Offset Out of Range"

Solution

There is a section in Sentry’s documentation here that describes the issue and leads to the comment here.

The comment and its steps use the --bootstrap-server 127.0.0.1:9092 flag, which is the one that works.

It is also important to run the command for the group that you have the issue with (i.e. snuba-events-subscriptions-consumers) to fix this.

So find the kafka-0 pod in your Kubernetes cluster and log in to it:

kubectl -n sentry exec -it sentry-kafka-0 -- /bin/bash

Get a list of the consumer groups:

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --list
snuba-consumers
ingest-consumer
transactions_group
snuba-post-processor
snuba-events-subscriptions-consumers
subscriptions-commit-log-1de9aaa...
snuba-post-processor:sync:880fbbb...
subscriptions-commit-log-b755cccc...
snuba-replacers
query-subscription-consumer
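
Optionally, before resetting anything, you can inspect the current offsets and lag of the problematic group; a sketch using the standard --describe option, run from the same pod:

kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --describe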

Run the reset command for the group you have the issue with (snuba-events-subscriptions-consumers):

I have no name!@sentry-kafka-0:/$ kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --topic events --reset-offsets --to-latest --execute

GROUP                                 TOPIC                          PARTITION  NEW-OFFSET
snuba-events-subscriptions-consumers  events                         0          4834425
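If you prefer to preview the change first, the same command with --dry-run instead of --execute only prints the new offsets without applying them:

kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group snuba-events-subscriptions-consumers --topic events --reset-offsets --to-latest --dry-run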

Grafana blank graphs in dashboards after update

Upgrading Grafana (kube-prometheus-stack) from version 23.1 to the latest one (currently 44.2.1) results in many Grafana graphs disappearing from the existing dashboards.

Since editing the actual graphs and trying to use ‘Run queries’ does not seem to work, a workaround is the following:

  • Edit the blank graph.
  • Add a new query (B); even an empty one is fine.
  • Use ‘Run queries’ (it does not matter whether you use the button on the original query or on the new query B).
  • Delete the new query B.
  • The graph should appear again, so use the ‘Apply’ button on the top right.
  • Repeat the process for any additional graphs in the dashboard.
  • When you finish, ‘Save’ the dashboard.
  • The dashboard should be working again.