It works on my machine

[Image: “Keep calm, it works on my machine”]

One of the most classic phrases in the web development world is “it works on my machine”, and you hear it very often, especially in LAMP shops, and even more often when you work with remote developers. It gets worse as your system grows and starts to require additional services, like caching or clustered databases, because every new service is one more thing each developer has to install locally in order to test quickly, and if you work with freelancers, well, they do not have much time to install all of those services.

To avoid these problems, organizations provide several environments for developers. Small businesses usually have two, staging and production, but the bigger the product or service, the more environments appear, and the delivery and release process turns into a more complex pipeline with several stages such as SIT (system integration testing), UAT (user acceptance testing), OAT (operations acceptance testing), and so on.

Let’s focus on a simple LAMP environment for now. It is the most popular kind of environment, and it can start to become complex as your system grows, for example when it requires services like Memcached, Varnish, a master-slave database model, or even a clustered one. At that point LAMP is not LAMP any more, it is some kind of LAMPMV or something else; the name does not really matter, but the amount of work needed to maintain those services does.

In a perfect world, developers have full access to service configurations and always ask others before changing them. In the real world, developers do not care much about that, and it is very common to see them destroy someone else’s work simply because they ignored the risk of modifying a certain configuration.

To avoid that problem, organizations hand control to a system administrator or an operations team. These people are usually very focused on security and stability, and that is great; however, they also tend to create restrictions that block developers most of the time and slow down delivery.

So, developer, how do you solve this problem? Easy: today we have desktop virtualization systems like VMware and VirtualBox. And if you think you have no time to waste installing an operating system for every virtual machine and configuring each one by hand, I have news for you: configuration management tools like Puppet and Chef let you install and configure systems on the fly, just by specifying what you want. That means you can build entire systems, break them and destroy them, and no one will cry, because guess what? It is your own environment! But what about having to install the operating systems and run all these configuration tools every single time? Don’t worry, there is a solution: it is called Vagrant, and it lets you create new environments in less time than the time you already “invest” in sending hate mail to the operations guys.
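
To give you a feeling for the workflow before we get to DevEnv, here is a rough sketch of spinning up and throwing away a bare VM with Vagrant, using the same Ubuntu box we will use later (the box name is just a label you choose):

$ vagrant init raring64 http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box
$ vagrant up       # boots a disposable Ubuntu VM
$ vagrant ssh      # log in and experiment
$ vagrant destroy  # broke it? throw it away and start over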

OK, but what if you do not have time to learn how to write Puppet modules and manifests, or Chef cookbooks and recipes? Well, some crazy people used part of their “free time” to create a kind of “framework”: pre-configured Vagrant setups that make life easier. Now all you need to do is install VirtualBox and Vagrant, and then use Git to clone one of those projects.
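
On an Ubuntu host, installing the prerequisites looks roughly like this (the package names are an assumption; the VirtualBox and Vagrant packages published on their own websites are usually more up to date than the ones in the distribution):

$ sudo apt-get install virtualbox git
$ sudo dpkg -i vagrant_*.deb   # .deb package downloaded from the Vagrant website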

I created one of those projects myself and named it DevEnv. You can download it using Git; I’ll show you how to use it.

Let’s use an example: imagine you need to set up an environment with Linux, Apache, PHP, Memcached and a master-slave MySQL model.

Create a new directory named “project_one” and, inside it, clone DevEnv:

$ mkdir project_one
$ cd project_one

$ git clone https://github.com/bbh/devenv.git
$ cd devenv

$ git submodule init
$ git submodule update
$ cd ../

Create a file named Vagrantfile

$ vi Vagrantfile

Use this for the content of the Vagrantfile

Vagrant::Config.run do |config|

  config.vm.define :web do |web|
    web.vm.box = "raring64"
    web.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
    web.vm.network :hostonly, "192.168.10.10"
    web.vm.provision :shell, :inline => "apt-get update"
    web.vm.provision :puppet do |puppet|
      puppet.manifests_path = "devenv/puppet/manifests"
      puppet.module_path = "devenv/puppet/modules"
      puppet.manifest_file = "lamp.pp"
    end
  end

  config.vm.define :dbm do |dbm|
    dbm.vm.box = "raring64"
    dbm.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
    dbm.vm.network :hostonly, "192.168.10.20"
    dbm.vm.provision :shell, :inline => "apt-get update"
    dbm.vm.provision :puppet do |puppet|
      puppet.manifests_path = "devenv/puppet/manifests"
      puppet.module_path = "devenv/puppet/modules"
      puppet.manifest_file = "mysql-master.pp"
    end
  end

  config.vm.define :dbs do |dbs|
    dbs.vm.box = "raring64"
    dbs.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
    dbs.vm.network :hostonly, "192.168.10.21"
    dbs.vm.provision :shell, :inline => "apt-get update"
    dbs.vm.provision :puppet do |puppet|
      puppet.manifests_path = "devenv/puppet/manifests"
      puppet.module_path = "devenv/puppet/modules"
      puppet.manifest_file = "mysql-slave.pp"
    end
  end

end

Create a file named devenv/puppet/manifests/lamp.pp

$ vi devenv/puppet/manifests/lamp.pp

Use this for the content of the file (note that the Memcached parameters are set just before the module is included):

include php::apache2
include php::cli
include subversion
include git

$listen_ip = "127.0.0.1"
$port = 11211
$cache_size = 64
$maxconn = 1024
include memcached

Once you have saved these files, run:

$ vagrant up web dbm dbs

Now Vagrant will download a clean Ubuntu Server box, create three virtual machines, use Puppet to install the required packages, and finally configure your services.

If you experience timeout problems, you can try bringing each server up one by one:

$ vagrant up web && vagrant up dbm && vagrant up dbs
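
By the way, once a virtual machine already exists you do not have to recreate it every time you change a manifest; re-running the provisioners is enough. For example, after editing lamp.pp:

$ vagrant provision web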

Either way, once the machines are up you can check their status:

$ vagrant status

And as you can see, you now have three servers: one runs PHP, Apache and Memcached, the second is your MySQL primary (master), and the third is your MySQL secondary (slave). Let’s connect to the web server.

$ vagrant ssh web
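
Once you are in, it does not hurt to confirm that the services are actually running. A quick check (the service names assume the default Ubuntu packages installed by the Puppet modules):

$ sudo service apache2 status
$ sudo service memcached status
$ php -v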

Now, to verify that the whole stack works together, you can write a short test script in /var/www:

$ vi /var/www/test.php

Use this code for your PHP script

#!/usr/bin/env php
<?php
// Connect to master database
$conm = new mysqli( '192.168.10.20', 'root', '', 'test' );
if ( $conm->connect_error ) {
  die ( $conm->connect_error . PHP_EOL );
}
echo "Connected to master database", PHP_EOL;

// Connect to Memcached locally (lamp.pp configures it to listen on 127.0.0.1)
$mc = new Memcache;
$mc->connect( '127.0.0.1', 11211 ) or die( "Could not connect to Memcache" . PHP_EOL );
echo "Connected to Memcache", PHP_EOL;

// Execute on master database
$sql="CREATE TABLE IF NOT EXISTS users ".
     "(id INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(64))";
if ( $conm->query( $sql ) === true ) {
  echo "Table users created", PHP_EOL;
}

$sql="INSERT INTO users (name) VALUES ('Basilio')";
if ( $conm->query( $sql ) === true ) {
  echo "Data inserted in table users", PHP_EOL;
}
$conm->close();

// Get data from Memcache
$user = $mc->get('user_name');
if ( !$user ) {
  // Connect to slave database
  $cons = new mysqli('192.168.10.21','root','','test');
  if ( $cons->connect_error ) {
    die ( $cons->connect_error . PHP_EOL );
  }
  echo "Connected to slave database", PHP_EOL;

  // Execute on slave database
  $sql="SELECT id,name FROM users LIMIT 1";
  $result = $cons->query( $sql );
  $val = $result->fetch_assoc();
  $from_slave = $val['name'];
  echo "Get data from slave database", PHP_EOL;
  $cons->close();

  // Save data on memcache
  $mc->set( 'user_name', $from_slave );
  echo "Set data in Memcache", PHP_EOL;

  $user = $mc->get('user_name');
  echo "Get data from Memcache", PHP_EOL;
}
$mc->close();

echo "Data from Memcache is: ", $user, PHP_EOL;

Let’s run it.

$ chmod +x /var/www/test.php
$ /var/www/test.php

As you can see, this script connects to the master database and to Memcached, creates a table and inserts a row on the master, then reads the row back from the slave and stores it in Memcached before fetching it from the cache. This proves that master-slave replication is working and that your entire environment is ready for work.

This is the expected output:

Connected to master database
Connected to Memcache
Table users created
Data inserted in table users
Connected to slave database
Get data from slave database
Set data in Memcache
Get data from Memcache
Data from Memcache is: Basilio
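
If you want to double-check the replication from your host machine, you can query the slave directly (using, as in the script, the root user with an empty password and the test database):

$ vagrant ssh dbs -c "mysql -u root test -e 'SELECT id, name FROM users'"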

That was not so painful, huh? In the same way you can set up a MySQL cluster, or any other service you require. And as you can see, this is a very simple example of what Vagrant can do; now imagine everything you could do if you learned a little bit of Chef or Puppet. Then you could say “It works on my machine, and also in production”.

Basilio Briceño

DevOps evangelist, SoftwareLibre activist, sometimes speaker & eclectic metalhead.
