Wednesday, March 2, 2022

Sync secret management systems into Kubernetes Secrets

Tuesday, January 18, 2022

GCP GKE VPN to on-premises

Once we needed to query an on-prem MS SQL Server from our PHP Lumen microservices.

I followed the steps described in this article:

https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent

Then set it to false so that the traffic is not masqueraded and the source addresses can be declared on the on-prem firewall:

  masqLinkLocal: false
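
For reference, a minimal sketch of the whole ip-masq-agent config, roughly following the linked documentation; the nonMasqueradeCIDRs values are placeholders for your own pod, service, and on-prem ranges.

  # content of a local file named "config"
  nonMasqueradeCIDRs:
    - 10.0.0.0/8
    - 172.16.0.0/12
  masqLinkLocal: false
  resyncInterval: 60s

Then load it into the cluster:

kubectl create configmap ip-masq-agent --namespace=kube-system --from-file=config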


Wednesday, October 27, 2021

Use GCP and gsutil to back up your archive files

I had the case of an old server, still running but already past end of life. I needed to archive its vzdump backups and virtual machine images, just in case.

The OS is too old to install GCP tools like gcloud and gsutil, but curl is available, so I can still archive my files.

From my laptop, I can get an access token once the login is confirmed:

gcloud auth application-default login

gcloud auth application-default print-access-token

You receive the access token in response; use it with curl to upload the file to mybucket in Google Cloud Storage:

curl -v --upload-file vzdump-qemu-2210-2020_09_27-14_31_04.vma.gz -H "Authorization: Bearer access-token" https://storage.googleapis.com/mybucket/vzdump-qemu-2210-2020_09_27-14_31_04.vma.gz

Replace access-token and mybucket with your values. 
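
If there are several archives to push, a small loop on top of the same curl call does the job. A sketch: the TOKEN value, the /var/lib/vz/dump path, and mybucket are assumptions to adapt to your setup.

# paste the token printed on the laptop
TOKEN="access-token"

# upload every vzdump archive found in the usual Proxmox dump directory
for f in /var/lib/vz/dump/*.vma.gz; do
  curl --fail --upload-file "$f" \
    -H "Authorization: Bearer ${TOKEN}" \
    "https://storage.googleapis.com/mybucket/$(basename "$f")"
done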

This way, I can externalize my backups and even try to provision some of those old machines as VMs. Stay tuned.

Wednesday, July 21, 2021

Kubectl plugins the easy way

You know kubectl is today's cloud console tool, the new SSH and bash, and much more.

I use it every day and, as always, I rely only on my bash history and CTRL+R to find the last correct invocation.

For a long time, I had read Ahmet Alp Balkan's tweets about kubectl plugins, but it sounded difficult.

Kubernetes the hard way, right?😁

(follow Ahmet on Twitter if you don't already)

So I was happy with my kubectl history until I attended Paris Containers Day, and especially Gaëlle Acas & Aurélie Vache's presentation. Check out their video "Kubectl plugin in a minute!" (in French).

Incredible: it's super easy to create your own kubectl plugin, and they demonstrate it in 4 steps:

  1. Create a file named kubectl-myfantasticplugin
  2. Make it executable, chmod +x kubectl-myfantasticplugin
  3. Move it into your PATH, mv kubectl-myfantasticplugin /usr/local/bin
  4. Run "kubectl myfantasticplugin"
You can create your plugin in any language: Go, Python, Rust, Quarkus, or simply Bash.
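
For example, a minimal Bash sketch of what kubectl-myfantasticplugin could contain; the name and behavior are placeholders, only the 4 steps above matter.

#!/usr/bin/env bash
# kubectl-myfantasticplugin: print a banner and pass the arguments to "kubectl get pods"
set -euo pipefail

echo "Hello from my fantastic plugin!"
kubectl get pods "$@"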

Test it yourself:
  1. Copy the kubectl-halloween bash script from their GitHub
  2. chmod +x kubectl-halloween
  3. sudo mv kubectl-halloween /usr/local/bin/kubectl-halloween
  4. kubectl halloween pod
  5. Get your emoji

NAME                                READY   STATUS    RESTARTS   AGE
🎃 my-nginx-5d59d67564-5q4mk           1/1     Running   0          4h52m
🎃 my-nginx-5d59d67564-5shv7           1/1     Running   0          4h52m
🎃 my-nginx-5d59d67564-nwrwk           1/1     Running   0          4h52m
🎃 nginx-deployment-66b6c48dd5-m9rdt   1/1     Running   0          11d
🎃 nginx-deployment-66b6c48dd5-vzw48   1/1     Running   0          11d

It's awesome.
The best is yet to come. Now you know how to create kubectl plugins.
You can create your own, use existing ones, and share yours using Krew.

You can also create your own Krew index to share your kubectl plugins within your team, project, or company.
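
As an illustration, a few Krew commands; view-secret is just one plugin from the public index, and the custom index name and repository below are placeholders.

# search and install a plugin from the default Krew index
kubectl krew update
kubectl krew search secret
kubectl krew install view-secret
kubectl view-secret my-secret

# add a custom index (placeholder name and repository) and install a plugin from it
kubectl krew index add myteam https://github.com/myteam/krew-index.git
kubectl krew install myteam/myfantasticplugin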

And what can you do with kubectl plugins?
Everything! View secrets, view certificates, check resource utilization, tail logs, and more.
Plugins are a great way to increase your agility and productivity.

Remember that old bash script of yours? Now it's a kubectl plugin, and you can share it.

Find Gaëlle and Aurélie's slide deck; they explain everything better.
Thank you for the inspiration.

Official documentation

And check out Ahmet's famous plugins.

Friday, June 12, 2020

Google Cloud: Tweak Ingress and Health Checks

Now you have everything on the GKE cluster: different namespaces, deployments, products, and devs.

As always, it works locally on your machine, but once deployed you get the terrible HTTP 502, the new Blue Screen of Death.

Why?
Then you troubleshoot:
    You got the 502 after a 30s timeout?
    You check the logs: /index.html gets an HTTP 404!

What's wrong? You look at the Ingress configuration, at the Nginx container, ... then you realize each product has its own specificities: some have no /index.html and only respond on /, others need a longer timeout to upload or process stuff, and so on.

The cloud brings another layer of complexity; for this reason you sometimes need to tweak backend services and health checks.

By default, backend services (the load balancer) have a 30s timeout.
You can list them and find your backend service rules:
gcloud compute backend-services list

Sometimes it's easier to find the load balancer, and then the backend service you need, from the console.
Then you can check it with describe:
gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 --global

Then you can update the timeout or any other setting:
gcloud compute backend-services update k8s-be-30080--9747qarggg396bf0 --global --timeout=600
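
To confirm the new value is in place, the same describe command can be filtered on the timeoutSec field (same example backend service name):

gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 --global --format="value(timeoutSec)"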

Grab a coffee to give it time to apply and, bingo, your HTTP 502 disappears.
Well, this one does.

You can also tweak the health checks.
From the console, find the health check you need.
You can also list them:
gcloud compute health-checks list --global

Then describe it to check:
gcloud compute health-checks describe k8s-be-30569--9747df6bftswwq5c396bf0

Update the health check to your needs:
gcloud compute health-checks update http k8s-be-30569--9747df6bftswwq5c396bf0 --request-path=/
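
If the request path is not the only issue, the same update command accepts other knobs; a sketch with the same example health check name and placeholder values:

gcloud compute health-checks update http k8s-be-30569--9747df6bftswwq5c396bf0 \
  --request-path=/healthz \
  --check-interval=10s \
  --timeout=5s \
  --unhealthy-threshold=3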

Now you have fixed a second HTTP 502 error.
Congratulations!
What's next?

Wednesday, June 10, 2020

Automate Google Cloud SQL backups

Google Cloud offers automatic backups, but these backups are bound to the instance.
You can only retain 7 of them and they cannot be exported. Also, if you need to restore just one database or table, you will have to restore all the data of the instance. Finally, and more importantly, your business needs may require more frequent backups, that is, a smaller RPO.

The solution is to automate exports of your instances. This way you can choose the tables or databases to export and the export frequency.

To do it, I chose to use Cloud Scheduler, Pub/Sub, Cloud Functions, and Cloud Storage.
Based on the following blog post, I made several attempts.

But some things were missing:
1 - IAM: the permission to export is in the Cloud SQL Viewer role and not in the Cloud SQL Client role.
You may create a custom role or grant the Cloud SQL Viewer role.

2 - The ExportContext.databases field of the API behaves differently for MySQL and PostgreSQL instances. From the API documentation:
Databases to be exported.
MySQL instances: If fileType is SQL and no database is specified, all databases are exported, except for the mysql system database. If fileType is CSV, you can specify one database, either by using this property or by using the csvExportOptions.selectQuery property, which takes precedence over this property.
PostgreSQL instances: You must specify one database to be exported. If fileType is CSV, this database must match the one specified in the csvExportOptions.selectQuery property.
So if you use MySQL, you may omit the database name to export all of them.
But if you use PostgreSQL, you have to specify a database name.
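
To make the difference concrete, here is a sketch of the export call the function ends up sending to the Cloud SQL Admin API; the project, instance, database, and bucket names are placeholders.

# PostgreSQL: the databases field is mandatory and contains exactly one database
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"exportContext": {"fileType": "SQL", "uri": "gs://mybucket/pg-dump.gz", "databases": ["mydb"]}}' \
  "https://sqladmin.googleapis.com/v1/projects/my-project/instances/my-pg-instance/export"

# MySQL: omit databases to export everything except the mysql system database
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"exportContext": {"fileType": "SQL", "uri": "gs://mybucket/mysql-dump.gz"}}' \
  "https://sqladmin.googleapis.com/v1/projects/my-project/instances/my-mysql-instance/export"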


This way, I use two different schedulers, one per instance with a different payload, the same Pub/Sub topic, and the same function, as sketched below.
Finally, in the API the field is databases (plural), not database 😀 it cost me some time to figure out my mistake.
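
A sketch of the two scheduler jobs; the job names, topic, schedules, location, and the shape of the message body are assumptions that depend on how the function parses the Pub/Sub message.

# MySQL instance: no database in the payload, the function exports everything
gcloud scheduler jobs create pubsub export-mysql \
  --schedule="0 2 * * *" --location=europe-west1 \
  --topic=sql-export \
  --message-body='{"instance": "my-mysql-instance", "databases": []}'

# PostgreSQL instance: one database per payload
gcloud scheduler jobs create pubsub export-pgsql \
  --schedule="0 3 * * *" --location=europe-west1 \
  --topic=sql-export \
  --message-body='{"instance": "my-pg-instance", "databases": ["mydb"]}'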

Now backups are automated and exported to a bucket with a lifecycle policy.
Production is safe and every dev can download the dev database anytime.

Now I need to update the function to list the databases of each instance, and to try a restore 😄

Thank you