
Tuesday, January 18, 2022

GCP GKE VPN to on-premises

Once we needed to query an on-prem MS SQL Server from our PHP Lumen microservices.

I followed the steps described in the article below.

https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent

Then change masqLinkLocal to false so that this traffic is not masqueraded and the source addresses can be allowed on the on-prem firewall.

  masqLinkLocal: false
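
For reference, a minimal ip-masq-agent config based on the linked article might look like the sketch below; 10.0.0.0/8 is only a placeholder for the destination ranges (for example the on-prem networks reached through the VPN) that must not be masqueraded. The file has to be named config and is loaded as a ConfigMap in kube-system.

nonMasqueradeCIDRs:
  - 10.0.0.0/8
masqLinkLocal: false
resyncInterval: 60s

kubectl create configmap ip-masq-agent --namespace=kube-system --from-file=config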


Wednesday, October 27, 2021

Use GCP and gsutil to back up your archive files

I have the case of an old server, still running but already past its end of life. I needed to archive its vzdump backups and virtual machine images, just in case.

The OS is too old to install GCP tools like gcloud and gsutil, but there is curl, so I can still archive my files.

From my laptop, I can get my access token once the login is confirmed.

gcloud auth application-default login

gcloud auth application-default print-access-token

You get the access token in the response; use it with curl to upload the file to mybucket in Google Cloud Storage.

curl -v --upload-file vzdump-qemu-2210-2020_09_27-14_31_04.vma.gz -H "Authorization: Bearer access-token" https://storage.googleapis.com/mybucket/vzdump-qemu-2210-2020_09_27-14_31_04.vma.gz

Replace access-token and mybucket with your values. 
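
To push a whole directory of dumps, a small loop on the old server can reuse the same call; the /var/lib/vz/dump path and the TOKEN variable are just assumptions to adapt to your setup, and the token expires after a while, so re-run print-access-token if needed.

TOKEN="paste-the-access-token-here"
for f in /var/lib/vz/dump/*.vma.gz; do
  curl --upload-file "$f" -H "Authorization: Bearer ${TOKEN}" "https://storage.googleapis.com/mybucket/$(basename "$f")"
done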

This way, I can externalize my backups and even try to provision some of these old machines as VMs later. Stay tuned.

Friday, June 12, 2020

Google Cloud Tweak Ingress and Healthcheck

Now everything is on the GKE cluster: different namespaces, deployments, products and devs.

As always, it works on the local machine, but once deployed you get the terrible HTTP/502, the new Blue Screen of Death.

Why?
So you troubleshoot:
    You get a 502 after a 30s timeout?
    You look at the logs: /index.html returns an HTTP/404!

What's wrong? You look at the Ingress configuration, the Nginx container, ... then you realize each product has its own specifics: some have no /index.html, just a response on /, others need a longer timeout to upload or process stuff, and so on.

The cloud brings another layer of complexity; for this reason you sometimes need to tweak backend services and health checks.

By default, backend services (the load balancer) have a 30s timeout.
You can list them to find your backend service rules:
gcloud compute backend-services list

Sometimes it's easier to go through the console to find the load balancer and then the backend service you need.
Then you can check it with describe:
gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 --global

Then you can update the timeout or any other setting:
gcloud compute backend-services update k8s-be-30080--9747qarggg396bf0 --global --timeout=600
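
To check that the new value was applied, describe can filter the output; something like this, with the backend name from the previous step:

gcloud compute backend-services describe k8s-be-30080--9747qarggg396bf0 --global --format="value(timeoutSec)"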

Grab a coffee to give it time to apply, and bingo, your HTTP/502 disappears.
Well, this one does.

You can also tweak health checks.
From the console, find the health check you need.
You can also list them:
gcloud compute health-checks list --global

Then describe it to check:
gcloud compute health-checks describe k8s-be-30569--9747df6bftswwq5c396bf0

Update the health check to your needs:
gcloud compute health-checks update http k8s-be-30569--9747df6bftswwq5c396bf0 --request-path=/
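
If the backend really is slow to answer, the same update command can also adjust the probe timing; the values below are only an example to adapt:

gcloud compute health-checks update http k8s-be-30569--9747df6bftswwq5c396bf0 --check-interval=10s --timeout=5s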

Now you have fixed a second HTTP/502 error.
Congratulations!
What's next?

Wednesday, June 10, 2020

Automate backup of Google Cloud SQL

Google Cloud offers automatic backups, but these backups are bound to the instance.
You can only retain 7 of them, and they cannot be exported. Also, if you need to restore just one database or table, you will have to restore all the data of the instance. Finally, and more importantly, your business needs may require more frequent backups, i.e. a smaller RPO.

The solution is to automate exports of your instances. This way you can choose the tables or databases to export and the export frequency.

To do it, I chose to use Cloud Scheduler, Pub/Sub, Cloud Functions and Cloud Storage.
Based on a blog post describing this setup, I made several attempts.

But some things were missing:
1 - IAM: the permission to export is part of the Cloud SQL Viewer role, not the Cloud SQL Client role.
You may create a custom role or simply grant Cloud SQL Viewer (an example is shown after point 2).

2 - The ExportContext.databases API field behaves differently for MySQL and PostgreSQL instances.
Databases to be exported.
MySQL instances: If fileType is SQL and no database is specified, all databases are exported, except for the mysql system database. If fileType is CSV, you can specify one database, either by using this property or by using the csvExportOptions.selectQuery property, which takes precedence over this property.
PostgreSQL instances: You must specify one database to be exported. If fileType is CSV, this database must match the one specified in the csvExportOptions.selectQuery property.
So if you use MySQL, you may omit the database name to export all of them.
But if you use PostgreSQL, you have to specify a database name.
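
Back to point 1: granting Cloud SQL Viewer to the service account the Function runs as could look like this; the project and service account names are placeholders.

gcloud projects add-iam-policy-binding my-project --member="serviceAccount:sql-export-fn@my-project.iam.gserviceaccount.com" --role="roles/cloudsql.viewer"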


That is why I use two different schedulers, one per instance with different payloads, the same Pub/Sub topic and the same Function.
Finally, in the API it's databases (with an s) and not database 😀 that one cost me some time to figure out.
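
For a quick manual test of the same export the Function triggers, gcloud can do it too; the instance, bucket and database names below are placeholders, and for MySQL you can drop --database to export everything:

gcloud sql export sql my-instance gs://mybucket/mydb-export.sql.gz --database=mydb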

Now backups are automated and exported to a bucket with a lifecycle policy.
Production is safe and each dev can download the dev database anytime.

Next, I need to update the function to list the databases of the instance, and to test a restore 😄

Thank you

Friday, September 13, 2019

Current status

Now that 2019 is almost behind us, it's time to do a retrospective and attack the final stretch at full speed. A tope, as we say in Colombia.

For this year I had 3 objectives:
  1. Get certified in Cloud
  2. Do a bikepacking trip
  3. Code my product
Not all of the objectives were completed, but I did some of them.

1. I passed the Google Cloud Architect exam and I'm now certified.
2. I did not do the bikepacking trip, but I rode Paris - St Aignan, ~230 km, and I made a short trip with the full kit.

3. The code is still waiting. I started designing a product I wanted to create but found a similar product in the Google Store and lost confidence. Maybe I should continue.

But as a bonus number 4, I found a new job and will soon begin a new adventure.

So finally things are even.





Monday, March 11, 2019

Migrate DNS to Google Cloud Platform

Just did it: it is super easy to migrate your DNS service to GCP.

From your account, I use a Cloud Shell session directly.

Replace silverston.fr and silverston with your own domain and zone name.

Create your new zone:
gcloud dns managed-zones create --dns-name="silverston.fr." --description="My awesome domain" "silverston" 

Import your zone
gcloud dns record-sets import -z=silverston --zone-file-format silverston.fr.txt --delete-all-existing 

--delete-all-existing is necessary to delete the existing NS records and use Google's instead.
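
If you are wondering where the zone file comes from: when your current provider allows zone transfers, dig can dump the records into a BIND-style file (it may need a little cleanup before import); otherwise export the zone from the provider's interface. The name server below is a placeholder.

dig axfr silverston.fr @ns1.current-provider.example > silverston.fr.txt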

Get your GCP NS servers:
gcloud dns managed-zones describe silverston


You will get your NS servers, for example:
nameServers: 
- ns-cloud-a1.googledomains.com.
- ns-cloud-a2.googledomains.com.
- ns-cloud-a3.googledomains.com.
- ns-cloud-a4.googledomains.com.


Update the NS records at your current registrar to use googledomains (use the servers you got in the previous step).

And you're done.
Check DNS propagation using:

watch dig +short NS silverston.fr

source: https://cloud.google.com/dns/docs/migrating

Friday, September 21, 2018

Redirect HTTP to HTTPS using Apache and Google Cloud Platform Loadbalancer


When using Apache to redirect HTTP to HTTPS, if the HTTPS version of the site is not served by Apache mod_ssl itself (SSL terminates at the load balancer), Apache never sets the %{HTTPS} variable to "on" and keeps redirecting infinitely.

The best way is to use the X-Forwarded-Proto header sent by the load balancer to Apache and to configure the RewriteCond as follows.

If not already done, enable rewrite and ssl:

a2enmod rewrite
a2enmod ssl
Then, in the HTTP vhost, configure:

<VirtualHost *:80>
....

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [QSA,L,R=301]

...

</VirtualHost>

Instead of the common usage:


RewriteCond %{HTTPS} off
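
A quick way to test the condition directly against Apache (your-site.example is a placeholder for your vhost): with the header set, the request should not be redirected; without it, you should get the 301.

curl -I -H "X-Forwarded-Proto: https" http://your-site.example/
curl -I http://your-site.example/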

source: https://stackoverflow.com/a/19722706


 

Monday, June 18, 2018

Google Cloud Platform, forward HTTP to HTTPS


 Hello,

One of the common issues when using GCP is the load balancer HTTP to HTTPS redirect.
It is still an open feature request, not resolved yet.

The best solution I found is the following.
With an Nginx server, HTTP connections are redirected to HTTPS in the 443 server block.

server {
    listen 443 ssl default_server;
    listen [::]:443 ssl;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    # rest of your configuration: ssl_certificate, ...
}


This way your site is always HTTPS.

Thursday, March 22, 2018

GCP SQL instance secure users

Hello

I really like Google Cloud Platform and their SQL instance offering.
It is easy to create databases and users, to use the Cloud SQL Proxy, ...

But be careful when you create users: they get root permissions on the whole MySQL instance. When you have several databases (dev, prod, ...), it is better to lock things down.

Easy to do: once the database and the user are created, just revoke the global privileges and grant only the necessary rights on the new database.

With the Cloud SQL Proxy, connect:
mysql -h 127.0.0.1 -u monusr -p mabd
Then, directly from MySQL, change the privileges:
revoke all privileges, grant option from monusr;
GRANT ALL ON mabd.* TO 'monusr';
And that's it: the user can now administer only their own database.
Of course, you can adjust the grant depending on your needs.
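
To double-check the result, something like this lists the remaining grants through the same proxy connection:

mysql -h 127.0.0.1 -u monusr -p -e "SHOW GRANTS FOR CURRENT_USER()"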

Sunday, February 11, 2018

GCP commands memo

I use Google Cloud Platform a lot.

I'm a big fan of their CLI, but I use several accounts and projects and sometimes I don't remember exactly how to do things anymore.

Below is a small memo of the commands:

To add a new account and select the account, project, zone, ...
$ gcloud init

To list your configurations
$ gcloud config configurations list

To switch from one configuration to another
$ gcloud config configurations activate NAME

To list the projects
$ gcloud projects list

To switch projects
$ gcloud config set project projectname

To activate the account linked to a configuration
$ gcloud config set account monemail@gmail.com

To test the permissions and the active environment, I list the VMs to confirm:
$ gcloud compute instances list
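
To see at a glance which account and project the active configuration points to:
$ gcloud config list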

Thursday, November 16, 2017

Stackdriver: install the agent on an unsupported Ubuntu version

I'm testing GCP, the subject of a future post?
And I'm testing monitoring with Stackdriver, but no luck: Stackdriver does not support Ubuntu 17.04.
Impossible to install the agent without modifying the installation script: stack-install.sh

By cheating a little, replace the lsb_release -sc command with the latest supported version: xenial
  -local CODENAME="$(lsb_release -sc)"
  +local CODENAME="xenial"


And the trick is done.

Saturday, February 25, 2017

Create swap memory on small cloud instances

If, like me, you use small cloud instances at Google Cloud, DigitalOcean or others, small $5 servers with 512 MB of RAM and no swap, sometimes there is not enough available memory left.
For example, that is the case on my server: Asterisk, LAMP.

In that case, the simplest thing is to create a swap file, which increases the resources of your server without increasing its cost, and which also lets you benefit from the performance of your "virtual" hard drive: SSD, almost like RAM.

  • fallocate -l 2G /swapfile
  • chmod 600 /swapfile
  • mkswap /swapfile
  • edit your /etc/fstab
    • /swapfile    swap    swap    defaults    0 0
  • swapon -av
Faster than doing a dd, I use fallocate to create a 2 GB swap file, then swapon to activate the swap, and the trick is done (this way you are sure the fstab entry is correct).
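
To check that the swap is indeed active:

free -h
swapon --show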

No more swap alerts and, above all, a bit more memory: no more calls getting cut off, and it's finally possible to upgrade the server, ...