We have been using Fabric to deploy changes to walkerart.org. Fabric is a Python library for running a sequence of commands on multiple servers. Similar things could be done with shell scripts, but we enjoy staying in one language as much as possible. In Fabric, strings are composed and sent to remote servers as commands over an SSH connection. Our Fabric scripts have evolved alongside the project, following the mentality: “If you know you are going to do something more than twice, script it!”
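A toy sketch of the core idea (not Fabric's actual API, and the repo path and service name are assumptions for illustration): a task is plain Python that composes shell-command strings, which Fabric then sends to each remote host over SSH.

```python
# Sketch only: a "task" composes shell strings; Fabric's job is to
# run each string on the remote hosts over SSH.

def deploy_commands(repo_dir="/srv/walkerart"):
    """Return the shell strings a minimal deploy task would send."""
    return [
        "cd %s && git pull" % repo_dir,   # fetch new code
        "sudo service gunicorn restart",  # restart the web server
    ]
```

Because tasks are just Python, they can take arguments, branch, and be composed like any other function.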
With Fabric we can tailor our deployments precisely. We deploy often with one of two commands:
[cci]fab production simple_deploy[/cci] or [cci]fab production deploy[/cci].
[cci]simple_deploy[/cci] simply pulls new code from the repo and restarts the web server.
[cci]deploy[/cci] does many things, each of which can be executed independently, and is explained below.
The scripts we run go both ways: code goes up to the server, and data comes back to the workstation. We have [cci]fab sync_with_production[/cci], which pulls down the database and images. The images arrive locally in a directory specified by an environment variable, or in a default directory. Conventional naming schemes, such as using the same database name everywhere, keep settings consistent across systems. Except for some development settings, our workstation environments are identical to the production environment, which means we can replicate a bug or feature locally and immediately.
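The environment-variable-with-default pattern for the image directory might look like this on the local side (the variable name and default path here are hypothetical, not necessarily what our fabfile uses):

```python
import os

# Hypothetical local-side helper: pulled images land in the directory
# named by an environment variable, falling back to a default.
DEFAULT_IMAGE_DIR = os.path.expanduser("~/walkerart/media")

def image_destination():
    """Directory where synced images are written locally."""
    return os.environ.get("WALKER_IMAGE_DIR", DEFAULT_IMAGE_DIR)
```

Each developer can point the sync at a different disk simply by exporting the variable in their shell profile.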
We have been collecting all of the commands we normally run on the servers into our fabfile, and we group them by calling tasks from other tasks. Our deployment consists of 12 tasks, yet one can deploy to the production or staging server with a single command:
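Grouping in miniature: in a fabfile, a task is just a function, so a composite task calls the smaller ones in order. The task bodies below only record their names for illustration; the real deploy runs 12 tasks.

```python
RUN_LOG = []  # records which tasks ran, in order

def git_pull():
    RUN_LOG.append("git_pull")

def migrate():
    RUN_LOG.append("migrate")

def restart_gunicorn():
    RUN_LOG.append("restart_gunicorn")

def simple_deploy():
    # pull new code and restart the web server
    git_pull()
    restart_gunicorn()

def deploy():
    # the full sequence adds the database migration (and, in the
    # real fabfile, several other steps)
    git_pull()
    migrate()
    restart_gunicorn()
```

Since every step is also a standalone task, any of them can be run on its own when a full deploy is overkill.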
[cci]fab production deploy[/cci].
This makes it incredibly simple to put code written on developer workstations into production in as safe and secure a way as possible. Here is our deployment in Fabric:
First, the “with” blocks put us onto the remote server, into the right directory, and inside Python’s virtual environment. From there, “git_pull” fetches the new code, which contains the settings files, and “get_settings” copies any new settings into place. The “install_requirements” task calls pip to validate the virtual environment’s packages against the requirements file. All third-party packages are pinned to a specific version so we aren’t surprised by new “features” that have adverse effects. We use Celery to harvest data from other sites, so we make sure its workers are running with fresh config files. The “syncompress” task compresses our CSS and JavaScript, “migrate” alters the database per our migration files, and Gunicorn is the server that runs Django.
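The nested “with” blocks work roughly like this: helpers such as cd() and prefix() don’t run anything themselves; they stack up strings that get prepended to every subsequent run() call. Here is a stdlib-only sketch of that mechanism (the path and venv location are assumptions, and this mimics Fabric rather than using its API):

```python
import contextlib

_prefixes = []  # strings contributed by the enclosing "with" blocks

@contextlib.contextmanager
def cd(path):
    # prepend "cd <path>" to commands run inside the block
    _prefixes.append("cd %s" % path)
    try:
        yield
    finally:
        _prefixes.pop()

@contextlib.contextmanager
def prefix(command):
    # prepend an arbitrary command, e.g. activating a virtualenv
    _prefixes.append(command)
    try:
        yield
    finally:
        _prefixes.pop()

def run(command):
    """Compose the full string that would be sent over SSH."""
    return " && ".join(_prefixes + [command])

with cd("/srv/walkerart"), prefix("source env/bin/activate"):
    full = run("git pull")
```

After the block, [cci]full[/cci] is a single chained command: change directory, activate the virtualenv, then pull, which is exactly the shape of command each deploy step sends to the server.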
It takes about 60 seconds for a new version of the website to get into production. From there, it takes 0-10 minutes for the memcached values to expire before the changes are publicly visible. We are deploying continuously, so watch closely for updates!