In order to capture the error output from a command in a bash shell and use it further, you can use the following:
COMMAND_ERROR=$( { command_with_error_output; } 2>&1 )
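As a minimal illustration, the captured text can then be tested or logged; the failing `ls` path below is just a stand-in for your own command:

```shell
# Capture stderr of a failing command; /nonexistent-demo-path is a placeholder
ERR=$( { ls /nonexistent-demo-path; } 2>&1 )
if [ -n "$ERR" ]; then
  echo "command failed with: $ERR"
fi
```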
To get the notes supplied with a specific helm release (which might be useful if you have multiple repositories but are not sure which one is used, as with keycloak), use the following:
helm -n namespace get notes helm_release
It is possible, after a broken network connection, to get a restic error about invalid data returned for the index (for example when running check).
You would need to rebuild the index, but with the --read-all-packs flag as described [here](https://forum.restic.net/t/fatal-load-index-xxxxxxxxx-invalid-data-returned/3596/27), which does the rebuild from scratch:
restic rebuild-index -r $REPO --read-all-packs
Since the documentation does not contain any information about how to connect an existing application to the newly created Percona PGO cluster, you can use something like the following as your pod's PostgreSQL connection string:
postgresql://username:password@cluster1.pgo-perc-production.svc.cluster.local/production
where cluster1.pgo-perc-production.svc.cluster.local points to your newly created cluster and /production is the database to connect to.
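A quick way to sanity-check the string is to assemble it from its parts; the username, password, and database below are placeholders taken from the example above:

```shell
# All values are placeholders; substitute your own credentials
PG_USER=username
PG_PASS=password
PG_HOST=cluster1.pgo-perc-production.svc.cluster.local
PG_DB=production
DATABASE_URL="postgresql://${PG_USER}:${PG_PASS}@${PG_HOST}/${PG_DB}"
echo "$DATABASE_URL"
# You could then pass DATABASE_URL to your application, or test it with:
# psql "$DATABASE_URL" -c 'SELECT 1;'
```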
This is quite possibly caused by one of the limits being set too low. It is common when using promtail (with Loki, for example) to tail log files.
One way to work around this is to increase the value (in this example max_user_instances), either for the current session or permanently by adding it to a file (/etc/sysctl.conf).
For testing, and doing it for the session only, log in to the affected server and do the following:
ubuntu@server:~$ cat /proc/sys/fs/inotify/max_user_instances
128
ubuntu@server:~$ sudo sysctl fs.inotify.max_user_instances=8192
fs.inotify.max_user_instances = 8192
ubuntu@server:~$ cat /proc/sys/fs/inotify/max_user_instances
8192
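To make the change permanent, a line like the following (using the value from the session above) can be added to /etc/sysctl.conf and then reloaded with sudo sysctl -p:

```
fs.inotify.max_user_instances = 8192
```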
To get the total number of open file descriptors use the following
awk '{print $1}' /proc/sys/fs/file-nr

You would like to import an existing GitLab project, through an export file, to a new self-hosted instance of GitLab. Using the Web UI, even after changing the max-body-size in the ingress deployment, you end up with the error message
Request Entity Too Large
There is another way to import the exported file, but it is not documented anywhere as it is classed as EXPERIMENTAL by GitLab.
You can copy the exported file to the gitlab-toolbox pod
kubectl --kubeconfig ~/.kube/gitlab_config cp local_export.tar.gz gitlab-toolbox-xxx-xxx:/tmp/
You can then log in to the gitlab-toolbox pod
kubectl --kubeconfig ~/.kube/gitlab_config -n gitlab-system exec -it gitlab-toolbox-xxx-xxx -- bash
get to the directory with the application
cd /srv/gitlab
and finally use the rake task gitlab:import_export:import to import your project
git@gitlab-toolbox-xxx-xxx:/srv/gitlab$ bundle exec rake gitlab:import_export:import[your_new_gitlab_username,namespace_path,project_path,/tmp/2022-06-14_14-53-007_export.tar.gz]
You are using helm list and at some point you notice that you don't get the full list back.
By default helm only displays 256 items, so if you are over this default limit you will need to add the --max 0 flag to the command, like:
helm list --max 0
It is possible to change the configuration of a running gitlab-runner by editing the file ~/.gitlab-runner/config.toml.
For example, to switch the log_level from 'info' to 'debug' and back again, you can log in to the gitlab-runner host and edit the file.
The file is reloaded without the need to restart the gitlab-runner.
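As a sketch, the relevant setting lives in the global section of ~/.gitlab-runner/config.toml; the values shown here are examples, not your actual configuration:

```toml
# Global section of ~/.gitlab-runner/config.toml (example values)
concurrent = 4
log_level = "debug"   # switch back to "info" when done
```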
You want to use Google's Autopilot for your gitlab runners, but your jobs/builds fail because of low resources (e.g. ephemeral storage).
You can use a LimitRange to increase the limits for ephemeral storage and/or memory; Google's Autopilot will pick them up and scale appropriately.
Create a limit range file like:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-ephemeral-storage
spec:
  limits:
  - default:
      ephemeral-storage: "10Gi"
      memory: "16Gi"
    defaultRequest:
      ephemeral-storage: "10Gi"
      memory: "16Gi"
    type: Container
And then apply it to your cluster
kubectl -n namespace apply -f limit_range.yaml