Tags: ansible, gitops, ruby, taskrabbit
12 Mar 2015 - Originally posted at https://tech.taskrabbit.com/blog/2015/03/12/rebuilding-capistrano-with-ansible/
At TaskRabbit we use Ansible to configure and manage our servers. Ansible is a great tool which allows you to write easy-to-use playbooks to configure your servers, deploy your applications, and more. The "more" part was what led us to switch from Chef to Ansible. While both tools can have a "provision" action, you can make playbooks for all sorts of things with Ansible, including application deployment!
For the past four years, TaskRabbit had been using Capistrano to deploy our Rails applications, and we had built a robust and feature-rich set of plugins around it.
Eventually, we started adding more and more non-rails (Sinatra), and then non-ruby (Node.js) apps. I’ve written before about how you can use Capistrano to deploy anything, including node.js applications. That said, at some point having a ruby dependency for a 500K node app seems silly… but at least we were consistent and clear about how all of our projects were to be deployed. Any developer in the company, even if they had never touched a line of node before, knew how the app was to be deployed to production.
Then came Ansible.
One of the things that always irked me about Capistrano was that it required duplication of data. Why do I need to keep a list of servers and roles in a deploy.rb file within each application when the authoritative source for that data is our provisioning tool (previously Chef-Server, now the ansible project’s inventory)? Worse, every time we added or removed a node in Chef, I needed to be sure to update the deploy.rb. There are some tools out there which attempt to link Chef and Capistrano, but none of the ones I tried worked. More worrisome was the fact that some of the deployment steps were duplicated in Chef, or Chef was shelling out to Capistrano (which required a full ruby environment) to deploy.
I’m happy to say that TaskRabbit now deploys all of our applications via Ansible, and no longer uses Capistrano. We were able to keep a homogeneous command set and duplicate most of Capistrano’s features in a very small amount of code. Here’s how we did it.

Every application is deployed into a Capistrano-style directory layout:
```
/home/{{ deploy_user }}/www/{{ application }}/
  - current (symlink to a release)
  - releases
    - timestamp_1
      - app
      - config (symlinks to ../../shared/config)
      - tmp (symlink to ../../shared/tmp)
      - pids (symlink to ../../shared/pids)
    - timestamp_2
    - timestamp_3
  - shared
    - tmp
    - config (ymls and other config files previously configured by ansible)
    - public
    - cached-copy (git repo in full)
    - logs
    - pids
    - sockets
```
We define inventories by RAILS_ENV (or NODE_ENV, as the case may be), and then divide each application into the sub-roles it requires. I’ll be using the following example inventories/production file as a reference:
```ini
myApp-web1.domain.com
myApp-web2.domain.com
myApp-worker1.domain.com
myApp-worker2.domain.com
myApp-redis.domain.com
myApp-mysql.domain.com

[production]
myApp-web1.domain.com
myApp-web2.domain.com
myApp-worker1.domain.com
myApp-worker2.domain.com
myApp-redis.domain.com
myApp-mysql.domain.com

[production:vars]
rails_env=production
node_env=production
cluster_env=production

[myApp]
myApp-web1.domain.com
myApp-web2.domain.com
myApp-worker1.domain.com
myApp-worker2.domain.com

[myApp:unicorn]
myApp-web1.domain.com
myApp-web2.domain.com

[myApp:resque]
myApp-worker1.domain.com
myApp-worker2.domain.com

# ...
```
The entry point to our deployment is the deploy.yml playbook:
```yaml
- hosts: "{{ host | default(application) }}"
  max_fail_percentage: 1

  roles:
    - { role: deploy, tags: ["deploy"], sudo: no }
    - { role: monit, tags: ["monit"], sudo: yes }
```
and a rollback.yml playbook:
```yaml
- hosts: "{{ host | default(application) }}"
  max_fail_percentage: 1

  tasks:
    - include: roles/deploy/tasks/rollback_symlink.yml
    - include: roles/deploy/tasks/restart_unicorn.yml
    - include: roles/deploy/tasks/restart_resque.yml
```
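The rollback tasks themselves aren’t shown in this post; rollback_symlink.yml just repoints current at the previous release, and the restart includes presumably mirror the notification handlers shown later. Here’s a minimal sketch of what rollback_symlink.yml might look like, assuming releases are the timestamped directories described above (deploy_previous_release is a name introduced for this sketch):

```yaml
# Hypothetical sketch of roles/deploy/tasks/rollback_symlink.yml.
# Timestamped release names sort chronologically, so the second-to-last
# directory is the release we want to roll back to.
- name: find the previous release
  shell: "ls -1 /home/{{ deploy_user }}/www/{{ application }}/releases | sort | tail -n 2 | head -n 1"
  register: deploy_previous_release

- name: repoint current at the previous release
  file: "state=link path=/home/{{ deploy_user }}/www/{{ application }}/current src=/home/{{ deploy_user }}/www/{{ application }}/releases/{{ deploy_previous_release.stdout }}"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
```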
These playbooks give us the following API options.

Deploy myApp to staging, running migrations:

```sh
ansible-playbook -i inventories/staging deploy.yml --extra-vars="application=myApp migrate=true"
```

Deploy a specific branch of myApp to a single staging server:

```sh
ansible-playbook -i inventories/staging deploy.yml --extra-vars="application=myApp migrate=true branch=mybranch" --limit staging-server-1.company.com
```

Deploy myApp to production, running migrations:

```sh
ansible-playbook -i inventories/production deploy.yml --extra-vars="application=myApp migrate=true"
```
The beauty of the line hosts: "{{ host | default(application) }}" in the playbook is that you can reference the servers in question by the group they belong to, which in our case matches the application names, and then sub-slice the group even further via optional --limit flags.
To make this playbook work, we need a collection of application metadata. This essentially mirrors the information you would provide within an application’s deploy.rb in Capistrano. However, moving this data to Ansible allows it to be used not only in both the deployment and rollback playbooks, but also in provisioning if needed. Here’s some example data for our myApp application, which we can pretend is a Rails 4 application.
From group_vars/all:

```yaml
applications:
  - myApp
  - myOtherApp

application_git_url_base: git@github.com
application_git_url_team: myCompany

deploy_email_to: everyone@myCompany.com

application_configs:
  myApp:
    name: myApp
    language: ruby
    roles:
      - unicorn
      - resque
    ymls:
      - database.yml
      - memcache.yml
      - redis.yml
      - facebook.yml
      - s3.yml
      - twilio.yml
    pre_deploy_tasks:
      - { cmd: "bundle exec rake assets:precompile" }
      - { cmd: "bundle exec rake db:migrate", run_once: true, control: migrate }
      - { cmd: "bundle exec rake db:seed", run_once: true, control: migrate }
      - { cmd: "bundle exec rake myApp:customTask" }
    post_deploy_tasks:
      - { cmd: "bundle exec rake cache:clear", run_once: true }
      - { cmd: "bundle exec rake bugsnag:deploy", run_once: true }

resque_workers:
  - name: myApp
    workers:
      - { name: myApp-scheduler, cmd: "resque:scheduler" }
      - { name: myApp-1, cmd: "resque:queues resque:work" }
      - { name: myApp-2, cmd: "resque:queues resque:work" }
# ...
```
You can see here that we have defined a few things:

- the list of applications we deploy, and the git URL pattern for where they live
- per-application metadata: its language, the roles it runs (unicorn and resque), the yml config files it needs, and its pre- and post-deploy tasks
- the resque workers each application should run
roles/deploy/main.yml looks like this:
```yaml
- include: init.yml
- include: git.yml
- include: links.yml
- include: config.yml
- include: bundle.yml
- include: pre_tasks.yml
- include: reboot.yml
- include: post_tasks.yml
- include: cleanup.yml
- include: email.yml
- include: hipchat.yml
```
Let’s go through each step one by one.
init.yml:

```yaml
- name: Generate release timestamp
  command: date +%Y%m%d%H%M%S
  register: timestamp
  run_once: true

- set_fact: "release_path='/home/{{ deploy_user }}/www/{{ application }}/releases/{{ timestamp.stdout }}'"
- set_fact: "shared_path='/home/{{ deploy_user }}/www/{{ application }}/shared'"
- set_fact: "current_path='/home/{{ deploy_user }}/www/{{ application }}/current'"

- set_fact: migrate={{ migrate|bool }}
  when: migrate is defined
- set_fact: migrate=false
  when: migrate is not defined

- set_fact: branch=master
  when: branch is not defined and cluster_env != 'production'
- set_fact: branch=production
  when: cluster_env == 'production'

- set_fact: keep_releases={{ keep_releases|int }}
  when: keep_releases is defined
- set_fact: keep_releases={{ 6|int }}
  when: keep_releases is not defined

- name: "capture previous git sha"
  run_once: true
  register: deploy_previous_git_sha
  shell: >
    cd {{ current_path }} &&
    git rev-parse HEAD
  ignore_errors: true
```
You can see that we do a few things:

- generate the release timestamp on one server to use on all of them
- save the paths release_path, shared_path, and current_path, just like Capistrano
- handle default values for the migrate, branch, and keep_releases options
- learn the git SHA of the previous release
git.yml:

```yaml
- name: update source git repo
  shell: "git fetch && git reset --hard origin/master"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  args:
    chdir: "{{ shared_path }}/cached-copy"
  when: "'{{application}}' in group_names"

- name: Create release directory
  file: "state=directory owner='{{ deploy_user }}' path='{{ release_path }}'"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"

- name: copy the cached git copy
  shell: "cp -r {{ shared_path }}/cached-copy/. {{ release_path }}"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"

- name: git checkout
  shell: "git checkout {{ branch }}"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  args:
    chdir: "{{ release_path }}"
  when: "'{{application}}' in group_names"
```
This section ensures that we git-pull the latest code into the cached copy, copy it into the new release directory, and then check out the proper branch.
links.yml:

```yaml
- name: ensure directories
  file: "path={{ release_path }}/{{ item }} state=directory"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"
  with_items:
    - tmp
    - public

- name: symlinks
  shell: "rm -rf {{ item.dest }} && ln -s {{ item.src }} {{ item.dest }}"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"
  with_items:
    - { src: "{{ shared_path }}/log", dest: "{{ release_path }}/log" }
    - { src: "{{ shared_path }}/pids", dest: "{{ release_path }}/tmp/pids" }
    - { src: "{{ shared_path }}/pids", dest: "{{ release_path }}/pids" } # Note: double symlink for node apps
    - { src: "{{ shared_path }}/sockets", dest: "{{ release_path }}/tmp/sockets" }
    - { src: "{{ shared_path }}/assets", dest: "{{ release_path }}/public/assets" }
    - { src: "{{ shared_path }}/system", dest: "{{ release_path }}/public/system" }
```
This creates symlinks from each deployed release back to shared, which lets us preserve logs, pids, etc. between deploys.
config.yml:

```yaml
- name: list shared config files
  shell: "ls -1 {{ shared_path }}/config"
  register: remote_configs
  when: "'{{application}}' in group_names"

- name: symlink configs
  shell: "rm -f {{ release_path }}/config/{{ item }} && ln -s {{ shared_path }}/config/{{ item }} {{ release_path }}/config/{{ item }}"
  with_items: remote_configs.stdout_lines
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"
```
Here we source every file in app/shared/config/* and symlink it into app/release/config/*.
bundle.yml:

```yaml
- stat: path={{ release_path }}/Gemfile
  register: deploy_gemfile_exists

- name: bundle install
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  args:
    chdir: "{{ release_path }}"
  shell: >
    bundle install
    --gemfile {{ release_path }}/Gemfile
    --path {{ shared_path }}/bundle
    --without development test
    --deployment --quiet
  when: "'{{application}}' in group_names and deploy_gemfile_exists.stat.exists"
```
If there is a Gemfile in the project, we run bundle install.
pre_tasks.yml:

```yaml
- name: deployment pre tasks (all hosts)
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  shell: >
    cd {{ release_path }} &&
    RAILS_ENV={{ rails_env }}
    RACK_ENV={{ rails_env }}
    NODE_ENV={{ rails_env }}
    {{ item.cmd }}
  run_once: false
  when: >
    ('{{application}}' in group_names) and
    ({{ item.run_once | default(false) }} == false) and
    ({{ item.control | default(true) }} != false)
  with_items: "application_configs[application].pre_deploy_tasks"

- name: deployment pre tasks (single host)
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  shell: >
    cd {{ release_path }} &&
    RAILS_ENV={{ rails_env }}
    RACK_ENV={{ rails_env }}
    NODE_ENV={{ rails_env }}
    {{ item.cmd }}
  run_once: true
  when: >
    ('{{application}}' in group_names) and
    ({{ item.run_once | default(false) }} == true) and
    ({{ item.control | default(true) }} != false)
  with_items: "application_configs[application].pre_deploy_tasks"
```
In the application_configs part of our variable file, we defined a collection of tasks to run as part of the deploy. This is where asset compilation and the like would run. Note how when you define a task, it can have "run_once" and "control" attributes, i.e. { cmd: "bundle exec rake db:migrate", run_once: true, control: migrate }. This means that the migration task should only be run on one host, and only when the playbook is run with --extra-vars="migrate=true". This is how simple it is to build complex Capistrano-like roles.
reboot.yml:

```yaml
- name: Update current symlink
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  file: "state=link path={{ current_path }} src={{ release_path }}"
  notify:
    - deploy restart unicorn
    - deploy restart resque
  when: "'{{application}}' in group_names"

- meta: flush_handlers
```
Now that all of our pre-tasks have been run, it’s time to actually change the deploy symlink and "restart" our applications. This simple task just changes the symlink, but the notifications are fairly involved. Some processes (like Unicorn) can gracefully restart with a simple signal, while others (like resque workers) need to fully stop and start to pick up new code. Ansible makes it easy to build notification handlers that fit your needs:
```yaml
## UNICORN ##

- name: "deploy restart unicorn"
  when: "'unicorn' in application_configs[application].roles and '{{application}}:unicorn' in group_names"
  ignore_errors: yes
  shell: "kill -s USR2 `cat {{ current_path }}/tmp/pids/unicorn.pid`"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  notify:
    - ensure monit monitoring unicorn

- name: ensure monit monitoring unicorn
  monit:
    name: unicorn-{{ application }}
    state: monitored
  sudo: yes

## RESQUE ##

- name: deploy restart resque
  ignore_errors: yes
  shell: "kill -s QUIT `cat {{ current_path }}/tmp/pids/resque-resque-{{ item.0.name }}-{{ item.1.name }}.pid`"
  with_subelements:
    - resque_workers
    - workers
  when: "'{{ item.0.name }}:resque' in group_names and item.0.name == application"
  notify: ensure monit monitoring resque
  sudo: yes

- name: ensure monit monitoring resque
  monit:
    name: "resque-{{ item.0.name }}-{{ item.1.name }}"
    state: monitored
  with_subelements:
    - resque_workers
    - workers
  when: "'{{ item.0.name }}:resque' in group_names and item.0.name == application"
  notify: reload monit
  sudo: yes
```
You can see that we chain notification handlers to both restart the application and then ensure that our process monitor, monit, is configured to watch that application.
post_tasks.yml:

```yaml
- name: deployment post tasks (all hosts)
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  shell: >
    cd {{ release_path }} &&
    RAILS_ENV={{ rails_env }}
    RACK_ENV={{ rails_env }}
    NODE_ENV={{ rails_env }}
    {{ item.cmd }}
  run_once: false
  when: >
    ('{{application}}' in group_names) and
    ({{ item.run_once | default(false) }} == false) and
    ({{ item.control | default(true) }} != false)
  with_items: "application_configs[application].post_deploy_tasks"

- name: deployment post tasks (single host)
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  shell: >
    cd {{ release_path }} &&
    RAILS_ENV={{ rails_env }}
    RACK_ENV={{ rails_env }}
    NODE_ENV={{ rails_env }}
    {{ item.cmd }}
  run_once: true
  when: >
    ('{{application}}' in group_names) and
    ({{ item.run_once | default(false) }} == true) and
    ({{ item.control | default(true) }} != false)
  with_items: "application_configs[application].post_deploy_tasks"
```
post_tasks are just like pre_tasks, and allow you to run code after the servers have been restarted. Here is where you might clear caches, update CDNs, etc.
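main.yml also includes a cleanup.yml step that isn’t shown above. Here’s a minimal sketch of what it might look like, pruning all but the newest keep_releases releases (the default of 6 was set in init.yml):

```yaml
# Hypothetical sketch of roles/deploy/tasks/cleanup.yml.
# `head -n -N` (GNU coreutils) prints all but the last N lines, so this
# removes every release directory except the newest keep_releases of them.
- name: cleanup old releases
  shell: "cd /home/{{ deploy_user }}/www/{{ application }}/releases && ls -1 | sort | head -n -{{ keep_releases }} | xargs -r rm -rf"
  sudo: yes
  sudo_user: "{{ deploy_user }}"
  when: "'{{application}}' in group_names"
```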
Now the fun kicks in. Ansible makes it easy to keep adding more to your playbooks. We wanted to send the development team an email (and also notify hipchat in a similar role) every time a deploy goes out. Here’s how we grab the variables we need:
1- name: "capture: sha" 2 run_once: true 3 register: deploy_email_git_sha 4 shell: > 5 cd {{ release_path }} && 6 git rev-parse HEAD 7 8- name: "capture: deployer_email" 9 run_once: true 10 register: deploy_email_deployer_email 11 shell: > 12 cd {{ release_path }} && 13 git log -1 --pretty="%ce" 14 15- name: "capture: branch" 16 run_once: true 17 register: deploy_email_branch 18 shell: > 19 cd {{ release_path }} && 20 git rev-parse --abbrev-ref HEAD 21 22- name: "capture: commit message" 23 run_once: true 24 register: deploy_email_commit_message 25 shell: > 26 cd {{ release_path }} && 27 git log -1 --pretty="%s" 28 29- set_fact: previous_revision='n/a' 30 when: previous_revision is defined 31 32- name: "capture: previous commits" 33 run_once: true 34 register: deploy_email_previous_commits 35 when: deploy_previous_git_sha is defined and ( deploy_previous_git_sha.stdout_lines | length > 0 ) 36 shell: > 37 cd {{ release_path }} && 38 git log {{ deploy_previous_git_sha.stdout_lines[0] }}..{{ deploy_email_git_sha.stdout_lines[0] }} --pretty=format:%h:%s --graph 39 40- name: "capture: human date" 41 run_once: true 42 register: deploy_email_human_date 43 shell: date 44 45- name: build the deploy email body 46 run_once: true 47 local_action: template 48 args: 49 src: deploy_email.html.j2 50 dest: /tmp/deploy_email.html 51 52- name: send the deploy email 53 run_once: true 54 when: no_email is not defined or no_email == false 55 local_action: shell sendmail {{ deploy_email_to }} < /tmp/deploy_email.html
and our email template is:
```
From: {{ deploy_email_deployer_email.stdout_lines[0] }}
Subject: Deployment: {{ application }} [ {{ cluster_env }} ]
Content-Type: text/html
MIME-Version: 1.0

<h1>
  <a href="https://github.com/{{ application_git_url_team }}/{{ application }}">{{ application }}</a>
  was deployed to {{ cluster_env }} by {{ deploy_email_deployer_email.stdout_lines[0] }}
  at {{ deploy_email_human_date.stdout_lines[0] }}
</h1>

<h2>The {{ deploy_email_branch.stdout_lines[0] }} branch was deployed to {{ vars.play_hosts | count }} hosts</h2>
<p>The latest commit is: <a href="https://github.com/{{ application_git_url_team }}/{{ application }}/commit/{{ deploy_email_git_sha.stdout_lines[0] }}">{{ deploy_email_commit_message.stdout_lines[0] }}</a></p>

<strong>Hosts:</strong>
<ul>
{% for host in vars.play_hosts %}
  <li>{{ host }}</li>
{% endfor %}
</ul>

{% if deploy_email_previous_commits is defined and deploy_previous_git_sha.stdout_lines | length > 0 %}
<strong>New on these servers since the last deploy:</strong>
<br />
{% for line in deploy_email_previous_commits.stdout_lines %}
  {{ line }}<br />
{% endfor %}
{% endif %}
```
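The hipchat.yml step works the same way. Here’s a minimal sketch using Ansible’s hipchat module, where hipchat_token and hipchat_room are assumed variables rather than anything from our actual setup:

```yaml
# Hypothetical sketch of roles/deploy/tasks/hipchat.yml.
# hipchat_token and hipchat_room are assumed to be defined in group_vars.
- name: notify hipchat of the deploy
  run_once: true
  local_action: "hipchat token={{ hipchat_token }} room={{ hipchat_room }} from=deploys msg='{{ application }} was deployed to {{ cluster_env }} by {{ deploy_email_deployer_email.stdout_lines[0] }}'"
```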
And that’s how you build Capistrano within Ansible! You can see how simple it is to translate a complex tool into a few hundred lines of Ansible, with very clear responsibilities and separation. It’s also very easy to extend this to fit your workflow.