Restic index invalid data returned error

Problem

After a broken network connection it is possible to get the restic error about invalid data returned for the index (for example when running restic check).

Solution

You need to rebuild the index, using the --read-all-packs flag as described [here](https://forum.restic.net/t/fatal-load-index-xxxxxxxxx-invalid-data-returned/3596/27), which does the rebuild from scratch:

restic rebuild-index -r $REPO --read-all-packs
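
Once the rebuild has finished, it is worth re-running the check that surfaced the error, to confirm the index is healthy again:

restic check -r $REPO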

PostgreSQL connection string for Percona PostgreSQL K8S operator

Since the documentation does not contain any information about how to connect an existing application to a newly created Percona PGO cluster, you can use something like the following PostgreSQL connection string in your pod:

postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production

where cluster1.pgo-perc-production.svc.cluster.local points to your newly created cluster and /production is the database to connect to.
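
To verify the string before wiring it into your application, you can run a throwaway psql pod against it (the pod name, image, and credentials below are placeholders):

kubectl run psql-test --rm -it --restart=Never --image=postgres:15 -- \
  psql postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production -c 'SELECT version();'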

failed to create fsnotify watcher: too many open files

This is quite possibly caused by one of the inotify limits being set too low. It is common when using promtail (with Loki, for example) to tail log files.

One way to get past this is to increase the value (in this example max_user_instances), either for the current session only or permanently by adding the setting to a file (/etc/sysctl.conf).

For testing, and to apply it only for the current session, log in to the affected server and do the following:

ubuntu@server:~$ cat /proc/sys/fs/inotify/max_user_instances 
128
ubuntu@server:~$ sudo sysctl fs.inotify.max_user_instances=8192
fs.inotify.max_user_instances = 8192
ubuntu@server:~$ cat /proc/sys/fs/inotify/max_user_instances 
8192
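
To make the change permanent, append the setting to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) and reload, for example:

ubuntu@server:~$ echo 'fs.inotify.max_user_instances = 8192' | sudo tee -a /etc/sysctl.conf
ubuntu@server:~$ sudo sysctl -p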

Request Entity Too Large when trying to import a GitLab project

Problem

You would like to import an existing GitLab project, through an export file, into a new self-hosted GitLab instance using the web UI, but even after changing the max-body-size in the ingress deployment you end up with the error message

Request Entity Too Large

Solution

There is another way to import the exported file, but it is not documented anywhere as it is classed as EXPERIMENTAL by GitLab.

You can copy the exported file to the gitlab-toolbox pod:

kubectl --kubeconfig ~/.kube/gitlab_config -n gitlab-system cp local_export.tar.gz gitlab-toolbox-xxx-xxx:/tmp/
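
To confirm the copy succeeded, you can list the file inside the pod (pod name and namespace as in the commands above):

kubectl --kubeconfig ~/.kube/gitlab_config -n gitlab-system exec gitlab-toolbox-xxx-xxx -- ls -lh /tmp/local_export.tar.gz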

You can then log in to the gitlab-toolbox pod:

kubectl --kubeconfig ~/.kube/gitlab_config -n gitlab-system exec -it gitlab-toolbox-xxx-xxx -- bash

get to the directory with the application

cd /srv/gitlab

and finally use the rake task gitlab:import_export:import to import your project:

git@gitlab-toolbox-xxx-xxx:/srv/gitlab$ bundle exec rake gitlab:import_export:import[your_new_gitlab_username,namespace_path,project_path,/tmp/2022-06-14_14-53-007_export.tar.gz]
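
Note that if you happen to run this from a zsh shell rather than bash, the square brackets would need quoting, e.g.:

bundle exec rake "gitlab:import_export:import[your_new_gitlab_username,namespace_path,project_path,/tmp/2022-06-14_14-53-007_export.tar.gz]"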

Changing gitlab-runner configuration options

It is possible to change the configuration of a running gitlab-runner by editing the file ~/.gitlab-runner/config.toml.

For example, to switch the log_level from 'info' to 'debug' and back again, log in to the gitlab-runner host and edit the file.

The file is reloaded automatically, without the need to restart the gitlab-runner.
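
As a quick check (the hostname here is illustrative), the setting lives at the top level of config.toml:

ubuntu@runner:~$ grep log_level ~/.gitlab-runner/config.toml
log_level = "debug"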

Google Autopilot and Gitlab failed builds

Problem

You want to use Google's Autopilot for your GitLab runners, but your jobs/builds fail because of low resources (ephemeral storage, for example).

Solution

You can use a LimitRange to increase the default limits for ephemeral storage and/or memory, which Google's Autopilot will then pick up and scale appropriately.

Create a limit range file like:

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-ephemeral-storage
spec:
  limits:
  - default:
      ephemeral-storage: "10Gi"
      memory: "16Gi"
    defaultRequest:
      ephemeral-storage: "10Gi"
      memory: "16Gi"
    type: Container

And then apply it to your cluster:

kubectl -n namespace apply -f limit_range.yaml
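
You can then verify that the defaults are in place with:

kubectl -n namespace describe limitrange limit-ephemeral-storage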