In this chapter, we will cover the following topics:
- Hosting a Node.js application on Google Compute Engine
- Hosting a Node.js application on Google App Engine
- Hosting a Node.js application on Kubernetes Engine
- Hosting an application on Google Cloud Functions
- Hosting a highly scalable application on Google Compute Engine
Google provides four options for the computing needs of your application. Compute Engine gives us the option to run VMs on Google Cloud Platform's infrastructure. It also provides all the networking and security features needed to run infrastructure as a service (IaaS) workloads. Google App Engine is a platform as a service (PaaS) offering that supports most of the major programming languages. It comes in two flavors, a standard environment based on container instances and a flexible environment based on Compute Engine. Google Kubernetes Engine offers a Kubernetes-powered container platform for all containerized applications. Finally, for all serverless application needs, Google Cloud Functions provides the compute power and integration with other cloud services.
We'll implement a Node.js application (http://keystonejs.com/) on Google Compute Engine (GCE). GCE is Google's offering for all IaaS needs. Our simple application is built on expressjs and MongoDB. expressjs is a simple web application framework for Node.js and MongoDB is a document-oriented NoSQL database. KeystoneJS also uses a templating engine along with Node.js and MongoDB.
The architecture of our recipe is depicted as follows:
Single-tiered Node.js application on GCE
We will follow a single-tiered approach to host the application and the database on the same VM. Later in this chapter, we'll host the same Node.js application on Google App Engine and Kubernetes Engine.
Note
You'll be using the following services and others for this recipe:
- GCE
- Google Cloud logging
- Google Cloud Source Repositories
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project.
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
- Install Google Cloud SDK on your development machine. Please follow the steps from https://cloud.google.com/sdk/docs/.
- Install Node.js and MongoDB on your development machine.
We'll approach this recipe in two stages. In the first stage, we'll prepare our development machine to run our sample Node.js application. Then we'll push the working application to the Compute Engine.
Follow these steps to download the source code from GitHub and configure it to work on your development machine:
- Clone the repository in your development space:
$ git clone https://github.com/legorie/gcpcookbook.git
Note
You can also download the code from: https://github.com/PacktPublishing/Google-Cloud-Platform-Cookbook.
- Navigate to the directory where the mysite application is stored:
$ cd gcpcookbook/Chapter01/mysite
- With your favorite editor, create a file named .env in the mysite folder:
COOKIE_SECRET=d44d5c45e7f8149aabc068244
MONGO_URI=mongodb://localhost/mysite
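KeystoneJS projects typically load these values at startup via the dotenv package. The following is a minimal sketch of what such a loader does, shown only to illustrate the mechanism; it is a simplified stand-in, not dotenv's real implementation:

```javascript
// Minimal sketch of a .env loader: each KEY=VALUE line becomes an
// environment variable. A simplified stand-in for the dotenv package.
function parseEnv(contents) {
  const vars = {};
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks and comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // ignore malformed lines
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return vars;
}

const sample = 'COOKIE_SECRET=d44d5c45e7f8149aabc068244\nMONGO_URI=mongodb://localhost/mysite';
console.log(parseEnv(sample).MONGO_URI); // mongodb://localhost/mysite
```

In a real project, require('dotenv').config() performs this step and merges the values into process.env, which is where KeystoneJS reads them.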
- Install all the packages required for the application to work:
$ npm install
- Start the mongod service on your development machine.
- Run the application:
$ node keystone.js
- You'll see the following message logged on the Terminal:
------------------------------------------------
Applying update 0.0.1-admins...
------------------------------------------------
mySite: Successfully applied update 0.0.1-admins.
Successfully created:
* 1 User
------------------------------------------------
Successfully applied 1 update.
------------------------------------------------
------------------------------------------------
KeystoneJS Started:
mySite is ready on port 3000
------------------------------------------------
- The application is now available on http://localhost:3000, as shown:
- You can stop the local server by pressing Ctrl + C.
To deploy the application to GCP, we'll first upload the working code from our development machine to Google Source Repositories. Then, instead of setting up the VM manually, we'll modify and use a startup script provided by Google to bootstrap the VM with the necessary packages and a runnable application. Finally, we'll create the VM with the bootstrap script and configure the firewall rules so that the application is accessible from the internet.
Each project on GCP has a Git repository which can be accessed by the GCE instances. Though we can manually move the code to an instance, moving it to Source Repositories enables the compute instances to pull the code automatically via a startup script:
- If you have made any changes to the code, you can commit the code to the local repository:
git commit -am "Ready to be committed to GCP"
- Create a new repository under the project:
- Follow the steps to upload the code from the local repository to Google Source Repositories. In the following example, the project ID is gcp-cookbook and the repository name is gcpcookbook:
- After the git push command is successful, you'll see the repository updated in Source Repositories:
The startup script is used to initialize the VM during a boot or a restart with the necessary software (MongoDB, Node.js, supervisor, and others) and loads the application from the source code repository. The following script can be found in the /Chapter01/ folder of the Git repository. The startup script performs the following tasks:
- Installs the logging agent, which is based on Fluentd.
- Installs the MongoDB database to be used by the KeystoneJS application.
- Installs Node.js, Git, and supervisor. Supervisor is a process control system which is used to run our KeystoneJS application as a process.
- Clones the application code from the repository to the local folder. Update the code at #Line 60 to reflect your repository's URL:
git clone https://source.developers.google.com/p/<PROJECT ID>/r/<REPOSITORY NAME> /opt/app #Line 60
- Installs the dependencies and creates the .env file to hold the environment variables:
COOKIE_SECRET=<Long Random String>
- The application is configured to run under the supervisor:
#!/bin/bash
# Source url: https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/7-gce/gce/startup-script.sh
# The startup-script is modified to suit the Chapter 01-Recipe 01 of our book
# Copyright 2017, Google, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# [START startup]
set -v
- Talks to the metadata server to get the project ID:
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")

# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
# [START logging]
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# [END logging]
- Installs MongoDB:
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
apt-get update
apt-get install -y mongodb-org

cat > /etc/systemd/system/mongodb.service << EOF
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target
EOF

systemctl start mongodb
systemctl enable mongodb
- Installs dependencies from apt:
apt-get install -yq ca-certificates git nodejs build-essential supervisor
- Installs Node.js:
mkdir /opt/nodejs
curl https://nodejs.org/dist/v4.2.2/node-v4.2.2-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
- Clones the application code from Source Repositories:
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/<Project ID>/r/gcpcookbook /opt/app
- Installs the app dependencies:
cd /opt/app/Chapter01/mysite
npm install
cat > ./.env << EOF
COOKIE_SECRET=d44d5c45e7f8149aabc06a830dba5716b4bd952a639c82499954
MONGODB_URI=mongodb://localhost:27017
EOF
- Creates a nodeapp user. The application will run as this user:
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
- Configures the supervisor to run the nodeapp:
cat > /etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/Chapter01/mysite
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF

supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
# [END startup]
After creating the startup script, follow these steps:
- With the startup script ready, we can create an instance using the gcloud command:
$ gcloud compute instances create mysite-instance \
    --image-family=debian-8 \
    --image-project=debian-cloud \
    --machine-type=g1-small \
    --scopes userinfo-email,cloud-platform \
    --metadata-from-file startup-script=./startup-script.sh \
    --zone us-east1-c \
    --tags mysite-server
- You can check the progress of the instance creation using the following command:
$ gcloud compute instances get-serial-port-output mysite-instance \
    --zone us-east1-c
- Create a firewall rule to allow access to port 3000 on the instance:
$ gcloud compute firewall-rules create default-allow-http-3000 \
    --allow tcp:3000 \
    --source-ranges 0.0.0.0/0 \
    --target-tags mysite-server \
    --description "Allow port 3000 access to mysite-server"
The following screenshot shows the details of the firewall rule:
- Get the public IP of the instance from the Google Cloud Console or by using the following command:
$ gcloud compute instances list
- Navigate to http://<public IP of the instance>:3000 to see the application running.
We'll implement the same Node.js application used in the first recipe on Google App Engine. App Engine is a PaaS solution where we just need to deploy the code in any of the supported languages (Node.js, Java, Ruby, C#, Go, Python, and PHP), and the platform takes care of automatic scaling, health checks, and updates to the underlying OS.
App Engine provides the compute power for the application only, so for the database we'll have to use a managed MongoDB service such as mLab or a MongoDB instance on GCE. As we already have a VM running MongoDB from our previous recipe, we'll use it to serve our application running on App Engine.
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project.
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
- Verify that Google Cloud SDK is installed on your development machine.
- Verify that the default project is set properly:
$ gcloud config list
- The VM which runs MongoDB from our first recipe allows connections only from the localhost. We'll have to modify the configuration to allow connections from the external world.
- SSH into the VM from the Console:
- Navigate to MongoDB's configuration file, /etc/mongod.conf, and update the bindIp value to include 0.0.0.0:
# network interfaces
net:
  port: 27017
  bindIp: [127.0.0.1,0.0.0.0]
Note
In some versions of MongoDB, it is enough to comment out the bind_ip line in the mongod config to allow access from outside the instance.
- Reboot the machine and verify that the MongoDB service is up and running.
- We'll also create a new firewall rule to allow access to port 27017 from anywhere:
$ gcloud compute firewall-rules create default-allow-mongo-27017 \
    --allow tcp:27017 \
    --source-ranges 0.0.0.0/0 \
    --target-tags mysite-server \
    --description "Allow port 27017 access to mysite-server"
The following screenshot shows the details of the firewall rule:
The MongoDB instance is now open to the world without any login credentials. So for production systems, make sure you secure the MongoDB instance with an admin user and run the mongod process using the --auth option.
- Connect to the MongoDB instance running on the VM from your development machine:
$ mongo mongodb://<External IP>:27017
With the MongoDB server up and running, we'll make a few configurational changes and deploy the application to the App Engine:
- In the development machine, copy the Chapter01/mysite folder to a new folder called Chapter01/mysite-ae, from where we'll push the code to App Engine:
$ cp -r mysite/ mysite-ae/
- Navigate to the mysite-ae folder. Open the .env file and update MONGO_URI to point to our VM:
MONGO_URI=mongodb://<External IP>:27017/mysite
- Verify that all the packages are installed and launch the application on the development machine, pointing to the database on the Cloud:
$ npm install
$ npm start
- The application's configuration is governed by a file called app.yaml. Create it with the following content:
# Basic configurations for the NodeJS application
runtime: nodejs
env: flex
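As an aside, if you prefer not to ship the .env file with the deployed code, App Engine also supports an env_variables section in app.yaml, and the values are injected into the application's environment at startup. A sketch, with the values being placeholders from this recipe:

```yaml
# Basic configuration plus environment variables (an alternative to .env)
runtime: nodejs
env: flex

env_variables:
  MONGO_URI: mongodb://<External IP>:27017/mysite
  COOKIE_SECRET: <a very long string>
```

Either approach works; keeping secrets out of the repository is the main reason to choose one over the other.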
- Now, we can deploy the application to the App Engine:
$ gcloud app deploy
..... 5cbd6acfb] to complete...done.
Updating service [default]...done.
Deployed service [default] to [https://<project-id>.appspot.com]
- You can stream logs from the command line by running:
$ gcloud app logs tail -s default
- To view your application in the web browser, run:
$ gcloud app browse
We will containerize the KeystoneJS application and host it on Google Kubernetes Engine (GKE). GKE is powered by the container management system, Kubernetes. Containers are built to do one specific task, and so we'll separate the application and the database as we did for App Engine.
The MongoDB container will host the MongoDB database with the data stored on external disks. The data within a container is transient, so we need an external disk to safely store the MongoDB data. The app container includes a Node.js runtime that will run our KeystoneJS application. It will communicate with the Mongo container and also expose itself to the end user:
Note
You'll be using the following services and others for this recipe:
- Google Kubernetes Engine
- GCE
- Google Container Registry
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project.
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
- Verify that Google Cloud SDK is installed on your development machine.
- Verify that the default project is set properly.
- Install Docker on your development machine.
- Install kubectl, the command-line tool for running commands against Kubernetes clusters:
$ gcloud components install kubectl
The steps involved are:
- Creating a cluster on GKE to host the containers
- Containerizing the KeystoneJS application
- Creating a replicated deployment for the application and MongoDB
- Creating a load-balanced service to route traffic to the deployed application
The container engine cluster runs on top of GCE. For this recipe, we'll create a two-node cluster which will be internally managed by Kubernetes:
- We'll create the cluster using the following command:
$ gcloud container clusters create mysite-cluster --scopes "cloud-platform" --num-nodes 2 --zone us-east1-c
The gcloud command automatically generates a kubeconfig entry that enables us to use kubectl on the cluster:
- Using kubectl, verify that you have access to the created cluster:
$ kubectl get nodes
The gcloud command is used to manage resources on the Google Cloud project, and kubectl is used to manage resources on the Container Engine/Kubernetes cluster.
- Clone the repository in your development space:
$ git clone https://github.com/legorie/gcpcookbook.git
- Navigate to the directory where the mysite-gke application is stored:
$ cd gcpcookbook/Chapter01/mysite-gke
- With your favorite editor, create a file named .env in the mysite-gke folder:
PORT=8080
COOKIE_SECRET=<a very long string>
MONGO_URI=mongodb://mongo/mysite
A custom port of 8080 is used for the KeystoneJS application. This port will be mapped to port 80 later in the Kubernetes service configuration. Similarly, mongo will be the name of the load-balanced MongoDB service that will be created later.
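To see why the MONGO_URI above needs no IP address, note how the URI decomposes; inside the cluster, Kubernetes DNS resolves the hostname to the mongo service. A quick illustrative sketch using Node's WHATWG URL parser:

```javascript
// Decompose the in-cluster connection string from the .env file.
// Kubernetes DNS resolves the hostname "mongo" to the mongo Service,
// so no IP address ever appears in the application's configuration.
const uri = new URL('mongodb://mongo/mysite');

console.log(uri.hostname); // "mongo"  (the Service name)
console.log(uri.pathname.slice(1)); // "mysite" (the database name)
```

This is what makes the configuration portable: the same .env file works no matter which cluster IP the service lands on.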
- The Dockerfile in the folder is used to create the application's Docker image. First, it pulls a Node.js image from the registry, then it copies the application code into the container, installs the dependencies, and starts the application. Navigate to /Chapter01/mysite-gke/Dockerfile:
# https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/optional-container-engine/Dockerfile
# Dockerfile extending the generic Node image with application files for a
# single application.
FROM gcr.io/google_appengine/nodejs

# Check to see if the version included in the base runtime satisfies
# '>=0.12.7', if not then do an npm install of the latest available
# version that satisfies it.
RUN /usr/local/bin/install_node '>=0.12.7'

COPY . /app/

# You have to specify "--unsafe-perm" with npm install
# when running as root. Failing to do this can cause
# install to appear to succeed even if a preinstall
# script fails, and may have other adverse consequences
# as well.
# This command will also cat the npm-debug.log file after the
# build, if it exists.
RUN npm install --unsafe-perm || \
  ((if [ -f npm-debug.log ]; then \
      cat npm-debug.log; \
    fi) && false)

CMD npm start
- The .dockerignore file contains the file paths which will not be included in the Docker container.
- Build the Docker image:
$ docker build -t gcr.io/<Project ID>/mysite .
Note
Troubleshooting:
- Error: Cannot connect to the Docker daemon. Is the Docker daemon running on this host?
- Solution: Add the current user to the Docker group and restart the shell. Create a new Docker group if needed.
- You can list the created Docker image:
$ docker images
- Push the created image to Google Container Registry so that our cluster can access this image:
$ gcloud docker -- push gcr.io/<Project ID>/mysite
- To create an external disk, we'll use the following command:
$ gcloud compute disks create --size 1GB mongo-disk \
    --zone us-east1-c
- We'll first create the MongoDB deployment because the application expects the database's presence. A deployment object creates the desired number of pods indicated by our replica count. Notice the label given to the pods that are created. The Kubernetes system manages the pods, the deployment, and their linking to their corresponding services via label selectors. Navigate to /Chapter01/mysite-gke/db-deployment.yml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk # The created disk name
          fsType: ext4
Note
You can refer to the following link for more information on Kubernetes objects: https://kubernetes.io/docs/user-guide/walkthrough/k8s201/.
- Create the deployment using the following command:
$ kubectl create -f db-deployment.yml
- You can view the deployments using the command:
$ kubectl get deployments
- The pods created by the deployment can be viewed using the command:
$ kubectl get pods
- Next, we'll create a service to expose the MongoDB deployment to the application pods. Navigate to /Chapter01/mysite-gke/db-service.yml:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo # The key-value pair is matched with the label on the deployment
- The kubectl command to create the service is:
$ kubectl create -f db-service.yml
- You can view the status of the creation using the commands:
$ kubectl get services
$ kubectl describe service mongo
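The label-selector linkage described earlier (a service finds its pods by matching labels) can be pictured as a simple matching rule. The following is an illustrative sketch of the idea only, with hypothetical pod names; it is not how Kubernetes is implemented internally:

```javascript
// Illustrative sketch of label-selector matching: a Service selects
// every pod whose labels contain all of the selector's key/value pairs.
function matchesSelector(selector, labels) {
  return Object.keys(selector).every((key) => labels[key] === selector[key]);
}

const serviceSelector = { name: 'mongo' }; // from db-service.yml
const runningPods = [ // hypothetical pod names for illustration
  { podName: 'mongo-deployment-1', labels: { name: 'mongo' } },
  { podName: 'mysite-app-1', labels: { name: 'mysite' } },
  { podName: 'mysite-app-2', labels: { name: 'mysite' } },
];

const selected = runningPods
  .filter((pod) => matchesSelector(serviceSelector, pod.labels))
  .map((pod) => pod.podName);
console.log(selected); // [ 'mongo-deployment-1' ]
```

This is why the labels in the deployment template and the selector in the service definition must agree exactly.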
- We'll repeat the same process for the Node.js application. For the deployment, we'll choose to have two replicas of the application pod to serve the web requests. Navigate to /Chapter01/mysite-gke/web-deployment.yml and update the <Project ID> in the image item:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysite-app
  labels:
    name: mysite
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: mysite
    spec:
      containers:
      - image: gcr.io/<Project ID>/mysite
        name: mysite
        ports:
        - name: http-server
          containerPort: 8080 # KeystoneJS app is exposed on port 8080
- Use kubectl to create the deployment:
$ kubectl create -f web-deployment.yml
- Finally, we'll create the service to manage the application pods. Navigate to /Chapter01/mysite-gke/web-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: mysite
  labels:
    name: mysite
spec:
  type: LoadBalancer
  ports:
  - port: 80 # The application is exposed to the external world on port 80
    targetPort: http-server
    protocol: TCP
  selector:
    name: mysite
To create the service, execute the following command:
$ kubectl create -f web-service.yml
$ kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   10.27.240.1     <none>           443/TCP        49m
mongo        10.27.246.117   <none>           27017/TCP      30m
mysite       10.27.240.33    1x4.1x3.38.164   80:30414/TCP   2m
Note
After the service is created, the External IP will be unavailable for a short period; you can retry after a few seconds. The Google Cloud Console has a rich interface to view the cluster components, in addition to the Kubernetes dashboard. In case of any errors, you can view the logs and verify the configurations on the Console. The Workloads submenu of GKE provides details of Deployments, and the Discovery & load balancing submenu gives us all the services created.
Google Cloud Functions is the serverless compute service that runs our code in response to events. The resources needed to run the code are automatically managed and scaled. At the time of writing this recipe, Google Cloud Functions is in beta. Functions can be written in JavaScript on a Node.js runtime and can be invoked by an HTTP trigger, file events on Cloud Storage buckets, or messages on a Cloud Pub/Sub topic.
We'll create a simple calculator using an HTTP trigger that will take the input parameters via the HTTP POST method and provide the result.
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically)
- Verify that Google Cloud SDK is installed on your development machine
- Verify that the default project is set properly
We'll use the simple calculator JavaScript code available on the book's GitHub repository and deploy it to Cloud Functions:
- Navigate to the /Chapter01/calculator folder. The application code is present in index.js and the dependencies in the package.json file. As there are no dependencies for this function, the package.json file is a basic skeleton needed for the deployment.
- The main function receives the input via the request object, validates the inputs, and performs the calculation. The calculated result is then sent back to the requester via the response object with an appropriate HTTP status code. In the following code, the switch statement does the core processing of the calculator; do spend some time on it to understand the gist of this function:
/**
 * Responds to any HTTP request that provides the below JSON message in the body.
 * # Example input JSON : {"number1": 1, "operand": "mul", "number2": 2 }
 * @param {!Object} req Cloud Function request context.
 * @param {!Object} res Cloud Function response context.
 */
exports.calculator = function calculator(req, res) {
  if (req.body.operand === undefined) {
    res.status(400).send('No operand defined!');
  } else {
    // Everything is okay
    console.log("Received number1", req.body.number1);
    console.log("Received operand", req.body.operand);
    console.log("Received number2", req.body.number2);
    var error, result;
    if (isNaN(req.body.number1) || isNaN(req.body.number2)) {
      console.error("Invalid Numbers"); // different logging
      error = "Invalid Numbers!";
      res.status(400).send(error);
      return; // stop here so a second response is not sent below
    }
    switch (req.body.operand) {
      case "+":
      case "add":
        result = req.body.number1 + req.body.number2;
        break;
      case "-":
      case "sub":
        result = req.body.number1 - req.body.number2;
        break;
      case "*":
      case "mul":
        result = req.body.number1 * req.body.number2;
        break;
      case "/":
      case "div":
        if (req.body.number2 === 0) {
          console.error("The divisor cannot be 0");
          error = "The divisor cannot be 0";
          res.status(400).send(error);
          return; // stop here so a second response is not sent below
        } else {
          result = req.body.number1 / req.body.number2;
        }
        break;
      default:
        res.status(400).send("Invalid operand");
        return;
    }
    console.log("The Result is: " + result);
    res.status(200).send('The result is: ' + result);
  }
};
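Because the operand switch is the heart of the function, it can help to pull it out as a pure function and exercise it locally before deploying. The following is an illustrative refactoring for experimentation, not code from the recipe's repository:

```javascript
// Pure version of the calculator's operand switch, extracted so it can
// be tested without any HTTP plumbing (an illustrative refactoring).
function compute(number1, operand, number2) {
  switch (operand) {
    case '+': case 'add': return number1 + number2;
    case '-': case 'sub': return number1 - number2;
    case '*': case 'mul': return number1 * number2;
    case '/': case 'div':
      if (number2 === 0) throw new Error('The divisor cannot be 0');
      return number1 / number2;
    default:
      throw new Error('Invalid operand');
  }
}

console.log(compute(1, 'mul', 2)); // 2
```

The Cloud Function is then just this mapping plus input validation and HTTP status handling.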
- We'll deploy the calculator function using the following command:
$ gcloud beta functions deploy calculator --trigger-http
The entry point for the function is automatically taken as the calculator function exported in index.js. If you choose another name, the deploy command should be updated appropriately:
Input JSON : {"number1": 1, "operand": "mul", "number2": 2 }

$ curl -X POST https://us-central1-<ProjectID>.cloudfunctions.net/calculator \
    -d '{"number1": 1, "operand": "mul", "number2": 2 }' \
    -H "Content-Type: application/json"
The result is: 2
- You can also click on the VIEW LOGS button in the Cloud Functions interface to view the logs of the function execution:
There are a number of ways to host a highly scalable application on GCP using Compute Engine, App Engine, and Container Engine. We'll look at a simple PHP and MySQL application hosted on GCE with Cloud SQL and see how the GCP ecosystem helps us in building it in a scalable manner.
First, we'll create a Cloud SQL instance, which will be used by the application servers. The application servers should be designed so that they can be replicated at will in response to events such as high CPU usage or other utilization metrics.
So, we'll create an instance template, which is a definition of how GCP should create a new application server when one is needed. We feed in the startup script that prepares the instance to our requirements.
Then, we create an instance group which is a group of identical instances defined by the instance template. The instance group also monitors the health of the instances to make sure they maintain the defined number of servers. It automatically identifies unhealthy instances and recreates them as defined by the template.
Later, we create an HTTP(S) load balancer to serve traffic to the instance group we have created. With the load balancer in place, we now have two instances serving traffic to the users under a single endpoint provided by the load balancer. Finally, to handle any unexpected load, we'll use the autoscaling feature of the instance group.
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically)
- Enable the Google Cloud SQL API
- Verify that Google Cloud SDK is installed on your development machine
- Verify that the default project is set properly
The implementation approach would be to first create the backend service (the database), then the instance-related setup, and finally the load balancing setup:
- Note the IP address of the Cloud SQL instance; it will be fed to the configuration file in the next step:
- Navigate to the /Chapter01/php-app/pdo folder. Edit the config.php file as follows:
$host = "35.190.175.176"; // IP Address of the Cloud SQL instance
$username = "root";
$password = ""; // Password which was given during the creation
$dbname = "test";
$dsn = "mysql:host=$host;dbname=$dbname";
$options = array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
);
- The PHP application code is now ready to be hosted and replicated into multiple machines. Commit the changes to Source Repositories, from where the startup scripts will pick up the code.
- The startup-script.sh can be found in the Chapter01/php-app/ directory. The script installs the necessary software to run the PHP application, then downloads the application code from Source Repositories, moves it to the /var/www/html folder, and installs the components for logging. Do update the project ID and the repository name in the following script to point to your GCP repository:
#!/bin/bash
# Modified from https://github.com/GoogleCloudPlatform/getting-started-php/blob/master/optional-compute-engine/gce/startup-script.sh
# [START all]
set -e
export HOME=/root

# [START php]
apt-get update
apt-get install -y git apache2 php5 php5-mysql php5-dev php-pear pkg-config mysql-client

# Fetch the project ID from the Metadata server
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")

# Get the application source code
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/<Project ID>/r/<Repository Name> /opt/src -b master
#ln -s /opt/src/optional-compute-engine /opt/app
cp -r /opt/src/Chapter01/php-app/pdo/* /var/www/html
# [END php]

systemctl restart apache2
iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

# [START project_config]
# Fetch the application config file from the Metadata server and add it to the project
#curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/project-config" \
#  -H "Metadata-Flavor: Google" >> /opt/app/config/settings.yml
# [END project_config]

# [START logging]
# Install Fluentd
sudo curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
# Start Fluentd
service google-fluentd restart &
# [END logging]
# [END all]
Create the instance template as follows:
$ gcloud compute instance-templates create my-php-tmpl \
    --machine-type=g1-small \
    --scopes logging-write,storage-ro,https://www.googleapis.com/auth/projecthosting \
    --metadata-from-file startup-script=./startup-script.sh \
    --image-family=debian-8 \
    --image-project=debian-cloud \
    --tags http-server
The following screenshot shows the output for the preceding command:
Create the instance group as follows:
$ gcloud compute instance-groups managed create my-php-group \
    --base-instance-name my-php-app \
    --size 2 \
    --template my-php-tmpl \
    --zone us-east1-c
The following screenshot shows the output for the preceding command:
We'll create a health check that will poll the instances at specified intervals to verify that they can continue to serve traffic:
$ gcloud compute http-health-checks create php-health-check \
    --request-path /public/index.php
The following screenshot shows the output for the preceding command:
- For the Backend configuration, we'll have to create a backend service which points to the instance group and the health check that we have already created:
- For the Host and path rules and Frontend configuration, we'll leave the default settings.
- Once the settings are completed, an example review screen is shown as follows:
- In cases where traffic cannot be handled by a fixed number of instances under a load balancer, GCP provides the Compute Engine autoscaler. We can configure autoscaling at the instance group level; instances can be scaled depending on CPU usage, HTTP load balancing usage, monitoring metrics, or a combination of these factors:
When a user hits the endpoint URL of the load balancer, the request is forwarded to one of the available instances under its control. The load balancer constantly checks the health of the instances under its supervision; the URL used to test health is set up using Compute Engine's health check.
The PHP applications running on both instances are configured to use the same Cloud SQL database. So, irrespective of whether a request hits Instance 1 or Instance 2, the data is served from the common Cloud SQL database.
Also, the Autoscaler is turned on in the Instance Group governing the two instances. If there is an increase in usage (CPU in our example), the Autoscaler will spawn a new instance to handle the increase in traffic: