One of my customers has a production environment in GCP. We helped them migrate from on-premises to GCP and set up automated backups for their Cloud SQL for MySQL 5.7 instances. Since the Cloud SQL backup uses instance snapshots and only stores/rotates 7 days of recovery points, they needed another method to keep MySQL backups with longer retention.

We then used the components below to create an automated SQL file export from the Cloud SQL instances and store it in Cloud Storage.

  • Cloud Scheduler
    The scheduler that runs the task
  • Cloud Pub/Sub
    Payload source that triggers the automated export process
  • Cloud Functions
    The function that runs the export process using the Cloud SQL Admin API
  • Cloud Storage
    The storage target for the files exported by the automated process
  • Cloud IAM
    Permission and service account management for the related processes/tools
  • Cloud SQL Admin API
    Make sure this is enabled so that the Cloud Function is allowed to run the export

How to

  • Enable the Cloud SQL Admin API (a gcloud sketch of these setup steps follows this list).
  • Create a Cloud Storage bucket. In this case, we use the Nearline storage class since the purpose is to store backups. You can also set a lifecycle policy on the objects in the bucket, e.g. automatically archive or delete all objects older than 3 months.
  • Get the Cloud SQL instance's service account and grant it write access to the bucket (a Storage/Bucket Writer role).
  • Create a new IAM service account with a role that can call the Cloud SQL Admin API (Cloud SQL Admin).
  • Create a new Pub/Sub topic, e.g. named “Backup-Payload”.
  • Create a Cloud Function
    • Name: The function name.
    • Region: The geographic location where you want the Cloud Function to run.
    • Memory: The amount of memory you want to allocate to the Cloud Function. I chose the smallest one.
    • Trigger: The method that triggers the Cloud Function, in this case the newly created Pub/Sub topic.
    • Runtime: Choose the runtime; in this example the function runs on Node.js 10.
    • Source Code: Use the inline editor and paste in the code below.
    • Function to Execute: Enter the function name from the code (initiateBackup).
    • Service account: The service account created earlier.
  • Create Cloud Scheduler
    • Name: The schedule name.
    • Frequency: The schedule, in cron format. In this case, we set the backup to run at 01:00 in the morning.
    • Target: We chose Pub/Sub.
    • Topic: Choose the Pub/Sub topic that we created before.
    • Payload: The JSON content describing the project, the database instance to export, and the storage target. Find the payload at the end.
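
For reference, the setup steps above can also be done from the gcloud/gsutil CLI. The following is a rough sketch; the bucket name, region, service account name, and the 90-day delete lifecycle rule are placeholder assumptions, so adjust them to your project:

# Enable the Cloud SQL Admin API
gcloud services enable sqladmin.googleapis.com

# Create a Nearline bucket and (optionally) set a lifecycle rule, e.g. delete objects older than 90 days
gsutil mb -c nearline -l asia-southeast1 gs://bucket-name
echo '{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 90}}]}' > lifecycle.json
gsutil lifecycle set lifecycle.json gs://bucket-name

# Look up the Cloud SQL instance service account and give it write access to the bucket
# (objectAdmin is used here; a narrower bucket writer role also works)
gcloud sql instances describe DB_INSTANCE_NAME --format='value(serviceAccountEmailAddress)'
gsutil iam ch serviceAccount:SQL_INSTANCE_SA_EMAIL:roles/storage.objectAdmin gs://bucket-name

# Create the service account for the Cloud Function and grant it the Cloud SQL Admin role
gcloud iam service-accounts create cloudsql-backup
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:cloudsql-backup@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.admin"

# Create the Pub/Sub topic
gcloud pubsub topics create Backup-Payload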

Payload for Scheduler

{"project": "PROJECT_ID", "database": "DB_INSTANCE_NAME", "bucket": "gs://bucket-names"}

The Function (Updated)

The timestamp is converted to DD-MM-YYYY format and appended to the name of the exported file.

const { google } = require('googleapis')
const { auth } = require('google-auth-library')
const sqladmin = google.sqladmin('v1beta4')

/**
 * Triggered from a Pub/Sub topic.
 *
 * The input must be as follows:
 * {
 *   "project": "PROJECT_ID",
 *   "database": "DATABASE_NAME",
 *   "bucket": "BUCKET_NAME_WITH_OPTIONAL_PATH_WITHOUT_TRAILING_SLASH"
 * }
 *
 * @param {!Object} event Event payload
 * @param {!Object} context Metadata for the event
 */
exports.initiateBackup = async (event, context) => {
        // Build the DD-MM-YYYY timestamp at invocation time, so warm
        // function instances do not reuse a stale date.
        const today = new Date()
        const dd = String(today.getDate()).padStart(2, '0')
        const mm = String(today.getMonth() + 1).padStart(2, '0')
        const yyyy = today.getFullYear()

        // Decode the Pub/Sub payload published by Cloud Scheduler.
        const pubsubMessage = JSON.parse(Buffer.from(event.data, 'base64').toString())

        // Authenticate with Application Default Credentials (the function's service account).
        const authRes = await auth.getApplicationDefault()

        const request = {
                auth: authRes.credential,
                project: pubsubMessage['project'],
                instance: pubsubMessage['database'],
                resource: {
                        exportContext: {
                                kind: 'sql#exportContext',
                                fileType: 'SQL',
                                // The .gz suffix makes Cloud SQL compress the export.
                                uri: pubsubMessage['bucket'] + '/backup-' + dd + '-' + mm + '-' + yyyy + '.gz'
                        }
                }
        }

        // Call the Cloud SQL Admin API to start the export.
        sqladmin.instances.export(request, (err, res) => {
                if (err) console.error(err)
                if (res) console.info(res)
        })
}

package.json
{
        "name": "cloudsql-backups",
        "version": "1.0.0",
        "dependencies": {
                "googleapis": "^45.0.0",
                "google-auth-library": "3.1.2"
        }
}
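
If you deploy from the CLI instead of the inline editor (with index.js and package.json in the current directory), a command roughly like the one below should work; the region and service account are the same example assumptions used earlier. Publishing a test message to the topic is a quick way to verify the export before the schedule fires.

gcloud functions deploy initiateBackup \
  --region=asia-southeast1 \
  --runtime=nodejs10 \
  --trigger-topic=Backup-Payload \
  --entry-point=initiateBackup \
  --memory=128MB \
  --service-account=cloudsql-backup@PROJECT_ID.iam.gserviceaccount.com

# Trigger a test export without waiting for the schedule
gcloud pubsub topics publish Backup-Payload \
  --message='{"project": "PROJECT_ID", "database": "DB_INSTANCE_NAME", "bucket": "gs://bucket-names"}'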

Reference
https://revolgy.com/blog/how-to-automated-long-term-cloud-sql-backups-step-by-step-guide/

Passing LFCS Exam (Re-write)

In December 2019, I passed the Linux Foundation Certified System Administrator (LFCS) exam. This exam is provided by The Linux Foundation, an organization that maintains the open source ecosystem by providing events, training, and certification.

For more details, you can visit their site here. For the LFCS exam, here.
Having prior experience with a Linux/Unix-based OS is a must, especially understanding how the operating system works in general and how to use the command line. Most of the questions/tasks in this exam are hands-on Linux command work.

For the LFCS exam, a score of 66% or above must be earned to pass. The Linux Foundation FAQ is here.

The number of questions on this exam differs slightly from person to person, seemingly based on the question/task weight and scoring. In my case, I had about 65-70 questions.

The question domains on my exam were:
– User/group management
– Storage management
– File manipulation
– Other commands
By the way, we can choose between 2 distributions for this exam: CentOS or Ubuntu.

For this exam, I used the bundled exam + course package with a Black Friday coupon 😀. The course, however, is only a general course on how Linux works.




Migrate VM Between Proxmox Hosts Using ZFS

For Proxmox with the ZFS filesystem, we can use ZFS to migrate a VM to another Proxmox host with minimal downtime, as long as the target also uses the ZFS filesystem. You can read more about ZFS at this link.

With Pipe Viewer (pv), we can also limit the bandwidth used when performing the migration with zfs send and receive.

Here is how to do it (a worked example with concrete names follows the steps).

  • Configure key-based (passwordless) SSH from the source node to the target node
  • Check the VM disk in ZFS
    #zfs list
  • Make sure pv is installed
    #sudo apt-get install pv
  • Snapshot the VM disk for the initial data
    #zfs snap [pool/data/vm-202-disk-1]@snapshot-name
  • Send the initial ZFS data to the target Proxmox host. In this example we limit the transfer bandwidth to 25 MB/s
    #zfs send -vc [pool/data/vm-202-disk-1]@snapshot-name | pv -q -L 25M | ssh node-target zfs recv -s [pool/data/vm-202-disk-1]@snapshot-name
  • Shut down the VM to make sure there are no further data changes, then create a second snapshot of the VM disk for the final disk migration
    #zfs snap [pool/data/vm-202-disk-1]@snapshot-name2
  • Send the incremental changes between the first and second snapshots of the VM disk to the target node
    #zfs send -i [pool/data/vm-202-disk-1]@snapshot-name [pool/data/vm-202-disk-1]@snapshot-name2 | pv -q -L 25M | ssh node-target zfs recv -s [pool/data/vm-202-disk-1]
  • Copy the VM config to the target node
    #scp /etc/pve/nodes/node-source/qemu-server/202.conf node-target:/etc/pve/nodes/node-target/qemu-server/202.conf
  • Start the VM on the new host.
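
As a worked example, assuming the disks live on a dataset called rpool/data, the VM ID is 202, and the target node is reachable as node-target (all of these names are assumptions), the full sequence looks like this:

# Initial snapshot and transfer while the VM is still running
zfs snap rpool/data/vm-202-disk-1@migrate1
zfs send -vc rpool/data/vm-202-disk-1@migrate1 | pv -q -L 25M | ssh node-target zfs recv -s rpool/data/vm-202-disk-1

# Shut down the VM, take the final snapshot and send only the changes since the first one
zfs snap rpool/data/vm-202-disk-1@migrate2
zfs send -i rpool/data/vm-202-disk-1@migrate1 rpool/data/vm-202-disk-1@migrate2 | pv -q -L 25M | ssh node-target zfs recv -s rpool/data/vm-202-disk-1

# Copy the VM config over, then start the VM on the target node
scp /etc/pve/nodes/node-source/qemu-server/202.conf node-target:/etc/pve/nodes/node-target/qemu-server/202.conf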

Hope this helps.