autozane

Switching to Docker pull in Travis-CI builds

Why switch from building Docker images in each Travis-CI build to pulling from Docker Hub?

  1. The docker build step can often be the slowest step in a build process. In this example case (Aptly_cli), approximately 6 minutes can be shaved off.

  2. Gain full control over the Docker image used for testing by leveraging a central storage platform such as Docker Hub.

In this case I use a Rakefile to manage the Docker commands. A simple update to the Rakefile with the needed 'pull' command will do the trick.
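As a sketch of that Rakefile change, the old build task can sit next to a new pull task (the task and image names here are illustrative; the actual names in the aptly_cli Rakefile may differ):

```ruby
# Rakefile sketch — a 'docker_pull' task alongside the old 'docker_build' task.
require 'rake'
extend Rake::DSL # make desc/task/sh available at the top level of this script

DOCKER_IMAGE = 'sepulworld/aptly_cli'.freeze

desc 'Pull the prebuilt test image from Docker Hub (fast path)'
task :docker_pull do
  sh "docker pull #{DOCKER_IMAGE}"
end

desc 'Build the test image locally (old, slower path)'
task :docker_build do
  sh "docker build -t #{DOCKER_IMAGE} ."
end
```

CI then invokes `rake docker_pull` instead of `rake docker_build`, so the image is only rebuilt when the Dockerfile actually changes.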

https://github.com/sepulworld/aptly_cli/pull/85/files#diff-52c976fc38ed2b4e3b1192f8a8e24cff

Here it is in action: updating the Dockerfile and pushing it to Docker Hub for later use by Travis-CI.

Then apply the changed build step in the .travis.yml

https://github.com/sepulworld/aptly_cli/pull/85/files#diff-354f30a63fb0907d4ad57269548329e3
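The shape of that change looks roughly like this (a sketch, assuming the image is published as sepulworld/aptly_cli; see the linked diff for the actual step):

```yaml
# .travis.yml (sketch)
before_install:
  # old: build the image from scratch on every CI run
  # - docker build -t sepulworld/aptly_cli .
  # new: pull the prebuilt image pushed to Docker Hub
  - docker pull sepulworld/aptly_cli
```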

POOF! Rocking out some faster build times now.

»

Moving Aptly_cli test suite to Docker

Aptly_cli was created to work with the Aptly Debian repository management system. It provides a command line interface that can be used on remote systems that need to interact with Aptly repositories.

The initial testing framework was based on a combination of WebMock and VCR. VCR allowed me to record Aptly server responses while testing Aptly_cli API interactions. A local Vagrant VirtualBox instance running Aptly (Aptly_Vagrant) provided the server responses.

Aptly_cli is plugged into Travis-CI for running tests. The WebMock framework ran well there. The builds took advantage of RVM to test against multiple versions of Ruby.

The local development environment became cumbersome. I had to periodically record server responses from the Aptly server, which would update the VCR YAML files (where HTTP responses were recorded). The test setup and cleanup were sort of a pain to deal with too. Sometimes segments of your VCR results

»

PuppetDB, Puppetdbquery and automation

PuppetDB and puppetdbquery offer a lot of power for dynamically generating configuration files. Here I provide an example use case with HAProxy systems running across geographically dispersed regions.

First off, if you use PuppetDB and haven't started using puppetdbquery, now is the time to check it out. We will be using its functions inside our Puppet manifests to gather information to act on inside our haproxy.cfg.erb template.
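As a sketch of what that looks like, here is a hypothetical manifest fragment using puppetdbquery's query_nodes function (the class name, fact usage, and module paths are illustrative):

```puppet
# Ask PuppetDB for every web server reporting in from this node's region
# (query_nodes is provided by the puppetdbquery module).
$regional_web = query_nodes("Class[Nginx] and domain='${::domain}'")

# Render haproxy.cfg from a template that iterates over that node list.
file { '/etc/haproxy/haproxy.cfg':
  content => template('haproxy/haproxy.cfg.erb'),
  notify  => Service['haproxy'],
}
```

Inside haproxy.cfg.erb the result is then available as @regional_web for generating the backend server lines.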

Data I am working with to make this happen:

  • Consistent FQDNs for global systems
  • Facter values $hostname and $domain
  • All servers across multiple geographic regions report data back to a centralized PuppetDB

Most of my automation work relies on solid FQDN naming conventions. This is the root of system identification in many cases, and it needs to be consistent and straightforward. For our purposes here, all web servers will follow this naming convention:

FQDN Structure

<component>
»
