autozane

Use Bash Utilities to Update aws/credentials for AssumeRole

Some AWS assume-role bash-fu that can come in handy.

aws sts assume-role --role-arn <ROLEARN> --role-session-name <ROLESESSIONNAME> |\
    tr '{}' ',,' |\
    awk -F:  '
                    BEGIN { RS = "," ; print "[PROFILENAME]"}        # start a new profile section
                    /:/{ gsub(/"/, "", $2) }                         # strip JSON quotes from values
                    /AccessKeyId/{ print "aws_access_key_id = " $2 }
                    /SecretAccessKey/{ print "aws_secret_access_key = " $2 }
                    /SessionToken/{ print "aws_session_token = " $2 }
    '  >> ~/.aws/credentials
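Once appended, the temporary credentials can be used by pointing any AWS CLI call at the new profile:

# Use the freshly written profile (matches the [PROFILENAME] header above)
aws s3 ls --profile PROFILENAME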

Or, if you don't want to touch your ~/.aws/credentials file:

aws sts assume-role --role-arn arn:aws:iam::1111111111111:role/role-test --role-session-name "RoleSessionTest" |\
    grep -w 'AccessKeyId\|SecretAccessKey\|SessionToken' |\
    awk  '{print $2}' | sed  's/\"//g;s/\,//' > awscre
    export AWS_ACCESS_KEY_ID=`sed -n '3p' awscre`
    export AWS_SECRET_ACCESS_KEY=`sed -n '1p' awscre`
    export AWS_SECURITY_TOKEN=`sed -n '2p' awscre`
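Note that the sed line numbers above depend on the order the CLI happens to print the JSON keys in, which can vary between versions, and AWS_SECURITY_TOKEN is the older name for what newer SDKs read as AWS_SESSION_TOKEN. If jq is available, here is a sketch that avoids position-based parsing entirely:

# Trim the response down to the Credentials object, then pick fields by name
creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::1111111111111:role/role-test \
    --role-session-name "RoleSessionTest" \
    --query 'Credentials' --output json)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.SessionToken')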

Switching to Docker pull in Travis-CI builds

Why switch from building Docker images in each Travis-CI build to pulling from the Docker Hub?

  1. The docker build step can often be the slowest step in a build process. In this example (Aptly_cli), pulling instead of building shaves approximately 6 minutes off each build.

  2. Gain full control over the Docker image used for testing by leveraging a central storage platform such as the Docker Hub.

In this case I use a Rakefile to manage the Docker commands. A simple update to the Rakefile adding the needed 'pull' task does the trick.

https://github.com/sepulworld/aptly_cli/pull/85/files#diff-52c976fc38ed2b4e3b1192f8a8e24cff
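The diff boils down to pulling the prebuilt image instead of building it; a minimal sketch of the command the new task wraps (image name taken from the build task later in this post):

# Pull the prebuilt test image from the Docker Hub instead of rebuilding it
docker pull sepulworld/aptly_api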

Here it is in action, making an update to the Dockerfile and pushing it to the Docker Hub for later use by Travis-CI.
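Locally, that push amounts to something like the following (assuming a prior docker login to the Docker Hub):

# Rebuild the image with the updated Dockerfile and publish it
docker build -t sepulworld/aptly_api .
docker push sepulworld/aptly_api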

Then apply the changed build step in .travis.yml:

https://github.com/sepulworld/aptly_cli/pull/85/files#diff-354f30a63fb0907d4ad57269548329e3

POOF! Rocking out some faster build times now.

Moving Aptly_cli test suite to Docker

Aptly_cli was created to work with the Aptly Debian repository management system. It provides a command line interface that can be used on remote systems that need to interact with Aptly repositories.

The initial testing framework was based on a combination of WebMock and VCR. VCR allowed me to record Aptly server responses while testing Aptly_cli API interactions. A local Vagrant VirtualBox instance running Aptly (Aptly_Vagrant) provided the server responses.

Aptly_cli is plugged into Travis-CI for running tests, and the WebMock framework ran well there. The builds took advantage of RVM to test against multiple versions of Ruby.

The local development environment became cumbersome. I had to record server responses periodically from the Aptly server, which would update the VCR YAML files (where HTTP responses were recorded). The test setup and cleanup were sort of a pain to deal with too. Sometimes segments of your VCR results would get overwritten when you weren't expecting it. I wanted a testing suite that didn't get in my way and instead encouraged adding tests.

Now here is where Docker comes in. I noticed that Travis-CI started supporting Docker builds. Great! Since almost all of the Aptly_cli tests relied on server responses, it made sense to give Docker a shot. The idea was to have tests working locally with Docker and also have Travis-CI run build tests using the same Docker build.

To get rolling with Docker, I first got rid of the VCR cassette recordings, which contained expected Aptly server responses. Next, I created a Dockerfile:

FROM debian:jessie

EXPOSE 8080

# Install aptly from its upstream apt repo, then download two packages to use as test fixtures
RUN echo "deb http://repo.aptly.info/ squeeze main" > /etc/apt/sources.list.d/aptly.list; \
apt-key adv --keyserver keys.gnupg.net --recv-keys 2A194991; \
apt-get update; \
apt-get install aptly curl xz-utils bzip2 gnupg wget graphviz -y --force-yes; \
wget --quiet http://mirror.as24220.net/pub/ubuntu-archive/pool/main/z/zeitgeist/zeitgeist_0.9.0-1_all.deb -O /tmp/zeitgeist_0.9.0-1_all.deb; \
wget --quiet http://mirror.as24220.net/pub/ubuntu-archive/pool/main/z/zsh/zsh_5.1.1-1ubuntu1_i386.deb -O /tmp/zsh_5.1.1-1ubuntu1_i386.deb

ADD ./test/fixtures/aptly.conf /etc/aptly.conf

# Create two test repos and seed them with the fixture packages
RUN aptly repo create testrepo
RUN aptly repo create testrepo20
RUN aptly repo add testrepo /tmp/zeitgeist_0.9.0-1_all.deb
RUN aptly repo add testrepo20 /tmp/zsh_5.1.1-1ubuntu1_i386.deb

# Serve the aptly HTTP API on port 8080
CMD /usr/bin/aptly api serve

This builds a Docker image that installs the latest Aptly server, sets up a basic configuration, and places a couple of test packages into it.

Now, with a working Aptly Docker container, my next step was to add some Rake tasks to my Rakefile to control container image creation, starting, and stopping.

desc "Docker build image"
task :docker_build do
  sh %{docker build -t sepulworld/aptly_api .}
end

desc "List Docker Aptly running containers"
task :docker_list_aptly do
  sh %{docker ps --filter ancestor='sepulworld/aptly_api' --format="{{.ID}}"}
end

desc "Stop running Aptly Docker containers"
task :docker_stop do
  sh %{docker stop $(docker ps --filter ancestor='sepulworld/aptly_api' --format="{{.ID}}")}
end

desc "Start Aptly Docker container on port 8082"
task :docker_run do
  sh %{docker run -d -p 8082:8080 sepulworld/aptly_api /bin/sh -c "aptly api serve"}
end

desc "Show running Aptly process Docker stdout logs"
task :docker_show_logs do
  sh %{docker logs $(docker ps --filter ancestor='sepulworld/aptly_api' --format="{{.ID}}")}
end

desc "Restart Aptly docker container"
task :docker_restart => [:docker_stop, :docker_run] do
  puts "Restarting docker Aptly container"
end
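With those tasks in place, the local test loop is short, and it mirrors what Travis-CI will run below:

rake docker_build
rake docker_run
bundle exec rake test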

At this point I started refactoring some of my Minitest tests. Now that I didn't need to set up mocks and record responses with VCR, adding new tests became a faster process. After ensuring all of my previous tests and the new ones ran locally, I needed to make some changes to .travis.yml:

sudo: required
services:
  - docker

before_install:
- rake docker_build
- rake docker_run
- docker ps -a

script:
- bundle exec rake test

after_script:
- rake docker_show_logs

This takes advantage of the Rake tasks noted above and activates the Docker testing infrastructure in Travis-CI.

Travis-CI ran the build on the new branch and found that all tests passed. At this point I had a new testing suite using Docker working on Travis-CI! The final step was to remove the Gemfile dependencies for WebMock and VCR and merge the branch into master.

Overall, this was a very fruitful process that put Aptly_cli into a better development state. I have already fixed about a dozen bugs thanks to this faster, easier-to-use framework driven by Docker. Test coverage went from 52% to 92% (as measured by Coveralls).

PuppetDB, Puppetdbquery and automation

PuppetDB and puppetdbquery offer a lot of power to dynamically generate configuration files. Here I provide an example use case with HAProxy systems running in a geographically dispersed environment.

First off, if you use PuppetDB and haven't started using puppetdbquery, now is the time to check it out. We will be using its functions inside our Puppet manifests to gather information to act on inside haproxy.cfg.erb.
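If you still need it, puppetdbquery installs from the Puppet Forge (module name assumed to be dalen-puppetdbquery):

puppet module install dalen-puppetdbquery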

Data I am working with to make this happen:

  • Consistent FQDNs of global systems
  • Facter values $hostname and $domain
  • All servers across multiple geographic regions report data back to a centralized PuppetDB

Most of my automation work relies on solid FQDN naming conventions. FQDNs are the root of system identification in a lot of cases, so they need to be consistent and straightforward. For our purposes here, all web servers will follow this naming convention:

FQDN Structure

<component><numid>.<geolocation>.<provider>.<domain>.com

Examples:

web1.chicago.linode.autozane.com
web2.chicago.linode.autozane.com
web1.nyc.linode.autozane.com
web2.nyc.linode.autozane.com
web1.sf.linode.autozane.com
web2.sf.linode.autozane.com
web1.london.linode.autozane.com
web2.london.linode.autozane.com

As web servers spin up in each region (Chicago, NYC, SF, London), the HAProxy systems need to be aware of new or removed web systems, and Puppet must update haproxy.cfg and reload HAProxy accordingly.

Somewhere in the HAProxy .pp manifests that house the system configuration logic, we need to gather an array of systems for that particular region. Here is where puppetdbquery comes into action.

$domainhosts = query_nodes("domain='$domain'", 'hostname')

This queries PuppetDB during Puppet catalog compilation and returns an array of hostnames for all systems in the domain. For example, during a Puppet run on lb1.london.linode.autozane.com, I will get an array of all servers in london.linode.autozane.com (the $domain Facter fact), and the entries in the array will be the hostname values for those hosts.

$domainhosts = ['web1', 'web2', 'web3', 'redis1', 'mongo1', 'monitor1', 'mysql1', 'vpn1' ]
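The same query can be sanity-checked from a shell, since puppetdbquery also ships a 'puppet query' face (invocation assumed from the module's documentation):

# Ask PuppetDB for every node whose domain fact matches London
puppet query nodes "domain='london.linode.autozane.com'"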

With this array in hand, I can now use it in an ERB template for haproxy.cfg.

Here is what I need to add to the 'backend' section of the haproxy.cfg.erb template:

backend autozane
<% @domainhosts.sort.each do |host| -%>
<% if host =~ /^web/ ; host.gsub!(/\-.*/, '')-%>
        server <%=host%> <%=host%>:8080 maxconn 10 check inter 5s
<% end -%>
<% end -%>

If we had 5 web hosts in London, the generated configuration would look like this:

backend autozane
     server web1 web1:8080 maxconn 10 check inter 5s
     server web2 web2:8080 maxconn 10 check inter 5s
     server web3 web3:8080 maxconn 10 check inter 5s
     server web4 web4:8080 maxconn 10 check inter 5s
     server web5 web5:8080 maxconn 10 check inter 5s

This uses the $domainhosts data we collected in the .pp via puppetdbquery and pulls out just the 'web' hosts with a regular expression.

Drop a 'notify' on the Puppet file resource that manages haproxy.cfg in order to gracefully reload HAProxy whenever the dynamically generated configuration file changes.

notify => Service['haproxy'];
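Since a bad template renders a broken load balancer, it is also worth validating the generated file before the reload fires, for example by wrapping HAProxy's built-in config check in the file resource's validate_cmd parameter:

# Parse-check a candidate haproxy.cfg without touching the running process
haproxy -c -f /etc/haproxy/haproxy.cfg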

This was a quick example that could be modified for other data-templating situations where you have PuppetDB information available to leverage.
