Wednesday, July 21, 2021

Kubectl plugins the easy way

You know kubectl is today's cloud console tool: the new SSH, the new bash, and much more.

I use it every day and, as always, I rely only on my bash history and CTRL+R to find the last correct invocation.

For a long time I read Ahmet Alp Balkan's tweets about kubectl plugins, but it sounded difficult.

Kubernetes the hard way, right?😁

(Follow Ahmet on Twitter if you don't already.)

So I was happy with my kubectl history until I attended Paris Containers Day, especially Gaëlle Acas and Aurélie Vache's presentation. Check their video, Kubectl plugin in a minute! (in French)

Incredible: it's super easy to create your own kubectl plugin, and they demonstrate it in 4 steps:

  1. Create a file named kubectl-myfantasticplugin
  2. Make it executable, chmod +x kubectl-myfantasticplugin
  3. Move it into your PATH, mv kubectl-myfantasticplugin /usr/local/bin
  4. Run "kubectl myfantasticplugin"
You can create your plugin in any language: Go, Python, Rust, Quarkus, or even Bash.
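The 4 steps above can be sketched as a minimal Bash plugin. The name kubectl-myfantasticplugin comes from the example above; the body is a hypothetical placeholder, since any executable works:

```shell
#!/usr/bin/env bash
# kubectl-myfantasticplugin: a hypothetical example plugin.
# kubectl discovers it by its kubectl- prefix on the PATH, so
# "kubectl myfantasticplugin" runs this script with the remaining arguments.
set -euo pipefail

echo "Hello from myfantasticplugin, called with: $*"
# A real plugin would typically wrap kubectl itself, e.g.:
# kubectl get pods "$@"
```

Save it as kubectl-myfantasticplugin, make it executable, move it into your PATH, and kubectl picks it up automatically.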

Try it yourself:
  1. Copy the kubectl-halloween bash script from their GitHub
  2. chmod +x kubectl-halloween
  3. sudo mv kubectl-halloween /usr/local/bin/kubectl-halloween
  4. kubectl halloween pod
  5. Get your emoji

NAME                                READY   STATUS    RESTARTS   AGE
🎃 my-nginx-5d59d67564-5q4mk           1/1     Running   0          4h52m
🎃 my-nginx-5d59d67564-5shv7           1/1     Running   0          4h52m
🎃 my-nginx-5d59d67564-nwrwk           1/1     Running   0          4h52m
🎃 nginx-deployment-66b6c48dd5-m9rdt   1/1     Running   0          11d
🎃 nginx-deployment-66b6c48dd5-vzw48   1/1     Running   0          11d

It's awesome.
And the best is yet to come. Now that you know how to create kubectl plugins,
you can create your own, use existing ones, and share yours using Krew.

You can also create your own Krew index to share your kubectl plugin in your team, project or company.
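Once Krew itself is installed (see its official install instructions), discovering and installing plugins looks like this. view-secret is just one example plugin from the public index:

```shell
# Search the default Krew index for plugins
kubectl krew search secret

# Install a plugin from the index (view-secret is one example)
kubectl krew install view-secret

# Then use it like any other kubectl subcommand
kubectl view-secret my-secret

# Keep your local copy of the index up to date
kubectl krew update
```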

And what can you do with kubectl plugins?
Everything! View secrets, view certificates, resource utilization, logs, and more.
Plugins are a great way to increase your agility and productivity.

Remember your old bash script? Now it's a kubectl plugin, and you can share it.

Find Gaëlle and Aurélie's slide deck; they explain everything better.
Thank you for the inspiration.

Official documentation

And check Ahmet's famous plugins

Friday, June 12, 2020

Google Cloud Tweak Ingress and Healthcheck

Now you have everything on the GKE cluster: different namespaces, deployments, products, and devs.

As always it works on the local machine, but once deployed you get the dreaded HTTP 502, the new Blue Screen of Death.

Why?
So you troubleshoot:
    You get a 502 after a 30s timeout?
    You check the logs for /index.html and get an HTTP 404!

What's wrong? You look at the ingress configuration, the Nginx container... then you realize each product has its own specificities: some have no /index.html, just a response to /, others need a longer timeout to upload or process stuff, and so on.

The cloud brings another layer of complexity; for this reason you sometimes need to tweak backend services and health checks.

By default, backend services (the load balancer) have a 30s timeout.
You can list them and find your backend-services rules:
gcloud compute backend-services list

Sometimes it's easier to find the load balancer, then the backend service you need, from the console.
Then you can inspect it with describe:
gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 --global

Then you can update the timeout or any other setting:
gcloud compute backend-services update k8s-be-30080--9747qarggg396bf0 --global --timeout=600
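To confirm the change afterwards, you can query just the timeout field with --format (the backend-service name here is the one from the example above):

```shell
# Print only the timeout of one backend service
gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 \
  --global --format="value(timeoutSec)"

# Or list all backend services with their timeouts in one table
gcloud compute backend-services list --format="table(name,timeoutSec)"
```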

Take a coffee break to give it time to apply, and bingo, your HTTP 502 disappears.
Well, this one does.

You can also tweak health checks.
From the console, find the health check you need.
You can also list them:
gcloud compute health-checks list --global

Then describe it to check:
gcloud compute health-checks describe k8s-be-30569--9747df6bftswwq5c396bf0

Update the health check to your needs:
gcloud compute health-checks update http k8s-be-30569--9747df6bftswwq5c396bf0 --request-path=/
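As with the backend service, you can verify just the field you changed; the request path of an HTTP health check lives under httpHealthCheck in the resource:

```shell
# Print only the request path of the HTTP health check
gcloud compute health-checks describe k8s-be-30569--9747df6bftswwq5c396bf0 \
  --format="value(httpHealthCheck.requestPath)"
```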

Now you have managed a second HTTP 502 error.
Congratulations!
What's next?

Wednesday, June 10, 2020

Automate Google Cloud SQL backups

Google Cloud offers automatic backups, but these backups are bound to the instance.
You can only retain 7 of them, and you cannot export them. Also, if you need to restore just one database or table, you will have to restore all the data of the instance. Finally, and more importantly, your business needs may require more frequent backups, a smaller RPO.

The solution is to automate exports of your instances. This way you can choose the tables or databases to export, and their frequency.

To do it, I chose to use Cloud Scheduler, Pub/Sub, Cloud Functions, and Cloud Storage.
Based on the following blog, I made several attempts.

But some things were missing:
1 - IAM: the permission to export is in the Cloud SQL Viewer role, not in the Cloud SQL Client role.
You may create a custom role or grant Cloud SQL Viewer.

2 - The ExportContext.databases API differs between MySQL and PostgreSQL instances. From the documentation:
Databases to be exported.
MySQL instances: If fileType is SQL and no database is specified, all databases are exported, except for the mysql system database. If fileType is CSV, you can specify one database, either by using this property or by using the csvExportOptions.selectQuery property, which takes precedence over this property.
PostgreSQL instances: You must specify one database to be exported. If fileType is CSV, this database must match the one specified in the csvExportOptions.selectQuery property.
So if you use MySQL, you may omit the database name to export all of them.
But if you use PostgreSQL, you have to specify a database name.


That's why I use two different schedulers, one per instance, with different payloads, the same Pub/Sub topic, and the same Function.
Finally, in the API it's databaseS, not database 😀; that cost me some time to figure out my mistake.
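The two-scheduler setup could look like this. All names here (topic, job names, project, instances, bucket) are hypothetical placeholders, and the payload fields depend on what your Cloud Function parses:

```shell
# One Cloud Scheduler job per instance, different payloads, same Pub/Sub topic.

# MySQL instance: no "databases" field, so all databases are exported
gcloud scheduler jobs create pubsub export-mysql \
  --schedule="0 */6 * * *" \
  --topic=cloudsql-export \
  --message-body='{"project":"my-project","instance":"my-mysql-instance","bucket":"gs://my-backup-bucket"}'

# PostgreSQL instance: the payload must name the database(s) to export
gcloud scheduler jobs create pubsub export-pgsql \
  --schedule="0 */6 * * *" \
  --topic=cloudsql-export \
  --message-body='{"project":"my-project","instance":"my-pgsql-instance","bucket":"gs://my-backup-bucket","databases":["mydb"]}'
```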

Now backups are automated and exported to a bucket with a lifecycle policy.
Production is safe, and each dev can download the dev DB anytime.

Next, I need to update the function to iterate over the databases of each instance, and to try a restoration 😄

Thank you

Friday, September 13, 2019

Current status

Now that 2019 is almost behind us, it's time to do a retrospective and make the most of the last few minutes. A tope (flat out), as we say in Colombia.

For this year I had 3 objectives:
  1. Get certified in Cloud
  2. Do a bikepacking trip
  3. Code my product
Not all the objectives were completed, but I did achieve some.

1. I passed the Google Cloud Architect exam and I'm now certified.
2. I did not travel by bike, but I rode Paris - St-Aignan, ~230 km, and I made a short trip with the full pack.

3. The code is still waiting. I started designing a product I wanted to create, but I found a similar product in the Google Store and lost confidence. Maybe I should continue.

But as a number 4, I found a new job and will begin a new adventure soon.

So finally, things even out.

Tuesday, July 23, 2019

Kubernetes, secret and ingress

Hello,

These days I spent a lot of time setting up ingress and TLS. To keep a record of the commands needed:
To create the TLS secret used by the ingress:
kubectl create secret tls secret-tls-name --key private.key --cert bundle.pem

To create the bundle:
cat cer_file chain_file root_file > bundle.pem
If the certificates are CER, convert them to PEM first:
openssl x509 -inform der -in certificate.cer -out certificate.pem

Then, in the ingress configuration, specify in spec the host, the backend, and the TLS secret for the host:
spec:
  rules:
  - host: myapp.myhost.net
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - myapp.myhost.net
    secretName: secret-tls-name
You can have multiple hosts, and a TLS cert per host.
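Once the ingress is up, one way to check which certificate chain it actually serves (myapp.myhost.net is the host from the manifest above):

```shell
# Fetch the served certificate and print its subject, issuer and validity dates
openssl s_client -connect myapp.myhost.net:443 \
  -servername myapp.myhost.net -showcerts </dev/null \
  | openssl x509 -noout -subject -issuer -dates
```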

Saturday, May 18, 2019

Schedule stop and start VM in Google Cloud

Hello,

The first month, I checked the invoice and... what!!!
Oops, why run the staging and support environments during non-working hours?
How to fix it?
Schedule a cron from Cloud Shell, maybe?
A more elegant way is to use Cloud Scheduler to create a cron job that triggers Pub/Sub to execute a Cloud Function that stops/starts the VMs.
I need to check the cost 😀 but at least I learned something new.

The tutorial is straightforward:
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule

I simply changed the function's memory to 128 MB.
Maybe there is a way to pass a list of VMs instead of creating one cron job per VM, and the tutorial uses Node.js 6.x, already backlevel in April 2020, so a code change will be needed; but nonetheless it works like a charm.

Monday, March 11, 2019

Migrate DNS to Google Cloud Platform

Just did it: it's super easy to migrate your DNS service to GCP.

From your account, I use a Cloud Shell session directly.

Replace silverston.fr and silverston with your own domain.

Create your new zone:
gcloud dns managed-zones create --dns-name="silverston.fr." --description="My awesome domain" "silverston" 

Import your zone:
gcloud dns record-sets import -z=silverston --zone-file-format silverston.fr.txt --delete-all-existing 

--delete-all-existing is necessary to delete the existing NS records and use Google's instead.
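The zone file itself (silverston.fr.txt above) has to come from your current provider, either exported from its UI or, if it allows zone transfers, via dig. The name server here is a placeholder:

```shell
# Works only if your current DNS provider allows AXFR zone transfers;
# ns.old-provider.example stands in for one of its name servers.
dig axfr silverston.fr @ns.old-provider.example > silverston.fr.txt
```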

Get your GCP NS servers:
gcloud dns managed-zones describe silverston


You will get your NS servers, for example:
nameServers: 
- ns-cloud-a1.googledomains.com.
- ns-cloud-a2.googledomains.com.
- ns-cloud-a3.googledomains.com.
- ns-cloud-a4.googledomains.com.


Update the NS records at your current registrar to use googledomains (use the servers you got in the previous step).

And you're done.
Check DNS propagation using:

watch dig +short NS silverston.fr

source: https://cloud.google.com/dns/docs/migrating