I have been working for a while now on automating a Dev / Production environment for our corporate web site*. It is not done yet, but I thought I would share how it is going. * (Not this web site)
We use git, GitLab, and AWS to make it all work.
First, the hardware (or VMs).
We have in-house servers and a robust Internet connection, so we host our own WordPress and GitLab servers. The same setup could be done on a VPS.
There are 2 LAMP servers running Ubuntu and 1 GitLab server, plus 1 AWS EC2 server that stays powered down most of the time.
Dev using Git
The 2 LAMP servers (1 dev and 1 production) are on all the time. The people handling development only ever touch the dev server; they do not even have access to log in to production. Once an update is made, I log in to the dev server and run these commands, which will soon be put into a Bash script.
git checkout dev
mysqldump -u <USERNAME> -p<PASSWORD> --quick --extended-insert <DATABASE NAME> > /var/www/html/wp-admin/backup.sql
sed -i -e 's/dev.<DOMAIN>/www.<DOMAIN>/g' /var/www/html/wp-admin/backup.sql
git add .
git commit -m "Website Update"
git push origin HEAD
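As a sketch of what that Bash script could look like (the function name, argument handling, and example values below are mine, not the script actually in use):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for the dev-side steps above.
set -euo pipefail

push_dev_update() {
  local db_user=$1 db_pass=$2 db_name=$3 domain=$4
  git checkout dev
  # Dump the dev database, then rewrite dev URLs to the production domain
  mysqldump -u "$db_user" -p"$db_pass" --quick --extended-insert "$db_name" \
    > /var/www/html/wp-admin/backup.sql
  sed -i -e "s/dev.${domain}/www.${domain}/g" /var/www/html/wp-admin/backup.sql
  git add .
  git commit -m "Website Update"
  git push origin HEAD
}

# Example: push_dev_update wpuser 'secret' wordpress example.com
if [ "$#" -eq 4 ]; then
  push_dev_update "$@"
fi
```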
Dev to Production using Git
Then I log in to the GitLab server and merge dev into master. In the near future, a GitLab runner will do this; I just need to write the .gitlab-ci.yml file. I will post that once I have finished.
Next I power up the AWS server, log on to it and to the production machine, and run these commands, which again will be automated with a GitLab runner soon.
git checkout master
git pull origin master
mysql -h localhost -u <USERNAME> -p<PASSWORD> <DATABASE NAME> < /var/www/html/wp-admin/backup.sql
We use AWS Route 53 for our DNS, and I use Route 53 health checks to manage my failover.
I update the AWS server first, then invert the health check to initiate the failover so that DNS points to the AWS server as production. Then I log in to the production server and update it. This can be done with a GitLab runner using the “when: manual” option in the .gitlab-ci.yml.
Now everything is up to date, and I can flip the health check back so the DNS points to the normal production server again.
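As a rough, hypothetical sketch of how that CI file could be laid out (the job names, runner tags, and variable names are my guesses, not the finished file):

```yaml
stages:
  - deploy

# Two manual jobs, one per runner/server; assumes DB_USER, DB_PASS,
# and DB_NAME are set as CI/CD variables in GitLab.
update_aws:
  stage: deploy
  when: manual          # run by hand after merging dev into master
  tags: [aws]
  script:
    - git checkout master
    - git pull origin master
    - mysql -h localhost -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < /var/www/html/wp-admin/backup.sql

update_production:
  stage: deploy
  when: manual          # run after DNS has failed over to AWS
  tags: [production]
  script:
    - git checkout master
    - git pull origin master
    - mysql -h localhost -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < /var/www/html/wp-admin/backup.sql
```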
In the event of a failure at the office, a Route 53 health check triggers an SNS topic, which in turn triggers a Lambda function that spins up the AWS EC2 backup server. This way we are not paying for the server to sit there running all the time; it only runs for updates and when the primary site fails.
The Lambda function is very simple. It is written in Python 2.7 with the handler “lambda_function.lambda_handler”.
I created a role for this function called lambda_start_stop_ec2.
import boto3

region = 'us-east-1'
instances = ['i-<INSTANCES_ID>']

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name=region)
    print 'start your instances: ' + str(instances)
    # Start the backup EC2 server
    ec2.start_instances(InstanceIds=instances)
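The lambda_start_stop_ec2 role needs, at minimum, permission to start (and stop) that instance, plus the usual CloudWatch logging. A minimal policy sketch, with <ACCOUNT_ID> left as a placeholder, might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:<ACCOUNT_ID>:instance/i-<INSTANCES_ID>"
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```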