
Matt Berther

Updated: 2016-05-23T20:32:39+00:00


Continuous Deployment to CloudFront Using circleci


Historically, I used a dynamic publishing engine to host this site. The engine resided on a server that I was responsible for maintaining -- both security patches and uptime. Recently though, I made the switch to the AWS CloudFront platform to make this site available on the Internet. Key benefits of this approach include:

- Edge locations around the world. When someone requests a file from this site, the request is automatically routed to a copy of the file at the data center nearest the visitor, which results in faster download times.
- Availability. When something goes down, a massive team of people at AWS works on correcting the problem.
- Cost. I pay only for the S3 storage and CloudFront bandwidth I use. Last month, my spend to provide this site's content in data centers around the world was a whopping $0.64.

As I made the transition, I created this site using Jekyll and host the repository on github. I wanted to make sure that when I update the github repository, changes are automatically pushed out so visitors receive the most up-to-date version of the site. To do this, I leverage circleci, a continuous integration platform that follows my github repository and performs actions when changes are pushed. This post describes the steps I took to configure circleci to send my changes out to CloudFront:

1. Install and configure the s3_website gem
2. Follow your github project with circleci
3. Configure the test and deployment commands
4. Push your project to github

Install and Configure the s3_website gem

The s3_website gem is available on github. Add it to your project's Gemfile with gem 's3_website'. The documentation page for the gem is pretty comprehensive and describes the various configuration options.
At minimum, you'll need to perform the following steps:

1. Run s3_website cfg create to generate the s3_website.yml file
2. Modify the s3_website.yml file to suit your configuration
3. Run s3_website cfg apply to configure your S3 bucket to function as a website

At minimum, your configuration file will need the following elements to support a CloudFront distribution:

s3_id: <%= ENV['S3_ACCESS_KEY'] %>
s3_secret: <%= ENV['S3_SECRET_KEY'] %>
s3_bucket: <%= ENV['S3_BUCKET'] %>
cloudfront_distribution_id: <%= ENV['AWS_CLOUDFRONT_ID'] %>

Note that you can use ERB in your yaml file, so please make sure you're using environment variables to store sensitive information. You'll want to make sure that these items don't end up in your github repository. For a more comprehensive configuration, you can view the file this site uses.

Follow the github project with circleci

To have circleci listen for changes to your github repository, log in to your circleci account and select 'Add Projects' from the left sidebar. Select the GitHub account and then the repository you're interested in following. This should start the project building. However, before the build succeeds, we'll need to configure the test and deployment commands.

Configure the test and deployment commands

Within your circleci project, select Settings and then Test Commands. In the post-test commands section, type bundle exec jekyll build. This tells the circleci environment to build your jekyll site.

Via the website, you can configure deployments using Heroku or AWS CodeDeploy. However, since we'll be deploying to S3 and CloudFront, we need to configure a custom deployment that uses the s3_website gem we configured earlier. To tell circleci to use this command, create a circle.yml file in the root of your github project with the following contents:

deployment:
  aws:
    branch: master
    commands:
      - bundle exec s3_website push

Push your project to github

Using your preferred technique, push your repository to github.
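For reference, the pieces above could also live entirely in circle.yml -- CircleCI's 1.x config file supports a test section alongside deployment. This combined file is a sketch, since the post configures the build command through the UI rather than the file:

```yaml
# Sketch: build and deploy configuration in a single circle.yml
test:
  post:
    - bundle exec jekyll build

deployment:
  aws:
    branch: master
    commands:
      - bundle exec s3_website push
```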
Within a couple of minutes of pushing the changes to github, your changes are live on the interwebs. Continuous Deployment to CloudFront Using circleci was originally published by Matt Berther at Matt Berther on Febr[...]

Exporting a JSON Resume with gulp


I've received questions from readers wondering about the résumé link on my page. I created the résumé using JSON Resume with the hope that it would make it easier to keep up-to-date. I also love the idea of an open standard for résumés. So far, I have been very happy with the project, and I am very grateful to the awesome team behind JSON Resume.

However, re-publishing the résumé became a manual step that I would sometimes forget when I rebuilt this site. This site is powered by Jekyll, and Gulp helps me with some of the other asset-related things.

I looked for an existing gulp plugin that would allow me to create HTML from my JSON Resume file. Since nothing existed, I did what anyone who had a few hours to kill would do... I made my own.

Introducing gulp-resume

The usage of this plugin is what you would expect. Make sure you npm install --save-dev gulp-resume and then add the following to your Gulpfile:

var resume = require('gulp-resume');
var rename = require('gulp-rename');

gulp.task('resume', function() {
  return gulp.src('resume.json')
    .pipe(resume({
      format: 'html',
      theme: 'elegant'
    }))
    // the original rename target and destination were lost in the feed;
    // these values are placeholders
    .pipe(rename('index.html'))
    .pipe(gulp.dest('dist'));
});

Download links

gulp-resume on GitHub | gulp-resume on npm

Exporting a JSON Resume with gulp was originally published by Matt Berther at Matt Berther on February 13, 2016.

How to Resize AWS EC2 EBS Volumes


It is impossible to resize an EBS volume. However, by creating a copy of the volume that is either larger or smaller, you can simulate a resize. Doing this with EBS volumes can be challenging, especially when they are mounted as the root device on an EC2 instance. This post is intended to provide step-by-step directions on how to either expand or shrink the size of an EBS volume.

Before you start the steps below to resize your volume, please make sure that you have a backup. This means shutting down your EC2 instance and taking a snapshot of the root volume. This will allow you to come back should things go horribly wrong.

Shrinking an EBS Volume

When you wish to shrink an EBS root volume, you will need to start a new, small EC2 instance to which you can attach the volume you wish to resize. A t2.micro instance should be more than sufficient for this task. Once you have this instance created, proceed with the following steps.

1. You took a backup, right? If you did not, stop your EC2 instance and take a snapshot now. Seriously, I'll wait.
2. Create a new EBS volume that is the size you wish to shrink to.
3. Detach the volume you wish to resize from the current EC2 instance and attach both volumes to the new, small EC2 instance you created. Mount the old volume as /dev/sdf (this becomes /dev/xvdf) and the new volume as /dev/sdg (this becomes /dev/xvdg).
4. Power on the new, small instance and wait for it to come online.
5. SSH into the instance and run the following commands. To ensure that the file system is in order, run sudo e2fsck -f /dev/xvdf1. If you're resizing a different partition on the drive, change the number 1 to the partition number you wish to resize.
6. If the e2fsck command ran without errors, now run sudo resize2fs -M -p /dev/xvdf1. Again, change the 1 to the partition number you wish to resize if you're not resizing the first one. The last line from the resize2fs command should tell you how many 4k blocks the filesystem now is.
7. To calculate the number of 16MB blocks you need, use the following formula: blockcount * 4 / (16 * 1024). Round this number up to give yourself a little buffer.
8. If you don't yet have a partition on your new volume (/dev/xvdg1), use fdisk to create one.
9. Execute the following command, using the number you came up with in the previous step: sudo dd bs=16M if=/dev/xvdf1 of=/dev/xvdg1 count=numberfrompreviousstep. Depending on how large your volume is, this may take several minutes to run -- let it finish.
10. After the copy finishes, resize and check the new filesystem to make sure everything is in order by running sudo resize2fs -p /dev/xvdg1 followed by sudo e2fsck -f /dev/xvdg1.
11. After this step is complete, detach both volumes from the new instance you created. Attach the shrunken volume to the old EC2 instance as /dev/sda1 (your boot device) and restart your old instance.
12. Save the previous, larger volume until you've validated that everything is working properly. When you've verified things are working well, feel free to delete the new EC2 instance you created, plus the larger volume and snapshot.

Expanding an EBS Volume

Expanding the size of an EBS volume is a bit easier, since we don't have to execute a disk-to-disk copy. To expand the size of the volume, execute the following steps:

1. You took a backup, right? If you did not, stop your EC2 instance and take a snapshot now. Seriously, I'll wait.
2. Create a new EBS volume from the snapshot, specifying the new, larger size.
3. Attach the new EBS volume to your existing EC2 instance, as /dev/sda1 if this is the root volume.
4. Power on your existing instance and wait for it to come online.
5. SSH into the instance and run the following commands. To ensure that the file system is in order, run sudo e2fsck -f /dev/xvda1. If you're resizing a different partition on the drive, change the number 1 to the partition number you wish to resize. If the e2fsck command ran without errors, now run sudo resize2fs -p /dev/xvda1.
Again, change the 1 to the partition number you wish to resize if[...]
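The 16MB-block arithmetic from the shrink procedure can be sanity-checked with shell arithmetic. The block count below is a made-up example (a filesystem of 2,621,440 4k blocks, roughly 10GB); the ceiling division rounds up as the steps suggest:

```shell
# Example: resize2fs reported 2621440 4k blocks (a made-up ~10GB filesystem)
BLOCKS=2621440

# blockcount * 4 / (16 * 1024), rounded up to whole 16MB blocks
COUNT=$(( (BLOCKS * 4 + 16 * 1024 - 1) / (16 * 1024) ))

echo "$COUNT"   # prints 640; this is the value for dd's count= argument
```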

Good, Fast, Cheap. Pick Two.


Excellent Venn diagram describing the tradeoffs of the classic project management dilemma (author unknown).


Good, Fast, Cheap. Pick Two. was originally published by Matt Berther at Matt Berther on February 01, 2015.

Digital Ocean, Dokku, and SSL/TLS



Heroku powered this website for the past several years. I had no problems with the platform. Deployments were reliable and fast. However, when the time came to add SSL support to the site, Heroku became cost-prohibitive for the volume of traffic this site generates. Protecting the privacy of my readers led me to explore new ways to host this site.


I had a small set of requirements from a hosting platform. It needs to support SSL/TLS, it needs to be fast, and it needs to be inexpensive. I required SSL/TLS support because I wanted to do my part to secure the privacy of visitors. This site runs on a small, Sinatra-based application. While speed was important, I think that most any host would've been able to meet the speed requirement. Last, it needed to be inexpensive -- this site is a hobby for me and I wanted to limit my costs to less than $10 per month.

After doing some exploring and research, I found Digital Ocean (referral link). Digital Ocean allowed me to configure a 512mb/1cpu instance with 20gb of ssd storage for $5.00/month ($0.007/hour). As this is a full VPS instance, I had the capability of installing whatever I wanted on it (including my SSL certificate). One of the nice things about Digital Ocean is that you can quickly bring up many different applications on your droplet. One of the applications you can install is Dokku, which advertises itself as a mini-Heroku powered by Docker written in less than 100 lines of BASH script.


The setup of Dokku and DigitalOcean is straightforward. There is a page on the DigitalOcean site that describes exactly what you need to do to set up the infrastructure. After I installed the Dokku application on the DigitalOcean droplet, I needed to set up a remote to the website's git repository. Because I want to deploy to a naked domain (i.e., the bare domain rather than the www subdomain), I needed to name my dokku application the full domain.

$ git remote add dokku

Now, whenever I want to publish this website, I push to that new remote with

$ git push dokku master

The last remaining piece was to add SSL support. I ordered a wildcard certificate from DNSimple (referral link). After I received the certificate, I copied the key and the certificate to my DigitalOcean droplet and placed them in my application's tls folder. In this case, that was ~/dokku/ Once the files were in place, I redeployed the app. When I did, Dokku detected the certificate and set up nginx to accept traffic on the HTTPS (443) port.

Wrapping Up

At the end of all this, I have my own mini-Heroku with full SSL support for $5/month. This made it easy to secure my little corner of the interwebs. It also makes it easy for you to do the same -- so what are you waiting for?

Digital Ocean, Dokku, and SSL/TLS was originally published by Matt Berther at Matt Berther on August 13, 2014.

Most Popular articles from Google Analytics API, Part 2


In part one of this series, we set up the Google API and created a simple class that interacts with the API to return the most popular articles from Google Analytics. We can make a few enhancements to the class from the previous article to make things a little better. These enhancements include:

1. filtering and limiting the number of results that come back
2. caching the discovered API so we don't have a round trip on every access
3. security enhancements to avoid storing credentials in your source code repository

Filtering and limiting

By default, the Google API client returns a large number of resources. For this website, I wanted to return only the ten most popular articles. I also wanted to filter certain pages from the result set. For example, I did not want the home page showing up in the list of most popular articles. Fortunately, accomplishing both of those goals was as easy as adding some parameters to the api call. Since we know that the articles on the site follow a certain naming convention (YYYY/MM/DD/slug), we can put together a simple regular expression to pass in to the filters parameter.

class AnalyticsAPI
  # previous methods omitted

  def most_popular
    authorize!
    @client.execute(api_method:,  # method reference lost in the feed extraction
                    parameters: {
                      'ids' => "ga:#{profile_id}",
                      'start-date' => '%Y-%m-%d'),  # date expressions truncated in the feed
                      'end-date' => '%Y-%m-%d'),
                      'dimensions' => 'ga:pagePath,ga:pageTitle',
                      'metrics' => 'ga:pageviews',
                      'sort' => '-ga:pageviews',
                      'max-results' => 10,
                      'filters' => 'ga:pagePath=~^\/\d{4}\/\d{2}\/\d{2}[^/]*'
                    }).data.rows
  end
end

This is much better -- now we're only getting a small number of articles that we actually want to list on our most popular articles page.

Caching

In the current implementation, the api method attempts to discover the Google API every time it is accessed. We can cache this non-changing data on the file system and load it from there, rather than over the network. Implementing the api caching is also easy to do.
Update the api method this way:

class AnalyticsAPI
  # previous methods omitted

  def api
    api_version = 'v3'
    cached_api_file = "analytics-#{api_version}.cache"

    unless File.exist?(cached_api_file)
      @api = @client.discovered_api('analytics', api_version)
      # the File.open call was partially lost in the feed; reconstructed here
      File.open(cached_api_file, 'w') { |f| Marshal.dump(@api, f) }
    end

    @api ||= Marshal.load(File.read(cached_api_file))
  end
end

This is also a nice improvement, especially for cases where the class is reused. The api discovery is now stored in an instance variable and loaded from file if the api has already been discovered.

Security

The last piece to address is mostly an enhancement in the interest of security. As most of you probably know, a generally accepted way of storing secrets is to add them to the environment, rather than hardcoding them in your source code. To that end, we just need to update our four helper methods to read their values from the environment.

class AnalyticsAPI
  # previous methods omitted

  def key_file; ENV['KEY_FILE']; end
  def key_password; ENV['KEY_PASSWORD']; end
  def service_account; ENV['SERVICE_ACCOUNT']; end
  def profile_id; ENV['PROFILE_ID']; end
end

This concludes the enhancements for the AnalyticsAPI class -- and also this series. I hope that you learned how to activate and interact with the Google APIs in this series. For your reference, the entire AnalyticsAPI class is included below.

Final class

class AnalyticsAPI
  def initialize(app_name, app_version)
    @client = Google::APIClient.new(application_name: app_name,
                                    application_version: app_version)
  end

  def most_popular
    authorize!
    @client.execute(api_method:, parameters: { 'ids' => "ga:#{profile_id}", 'start-date' => Date[...]
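As a quick sanity check on the filters expression from the filtering section, the same pattern can be exercised in plain Ruby. The paths below are invented examples:

```ruby
# The pagePath filter from most_popular, expressed as a Ruby regexp.
# Article URLs follow the YYYY/MM/DD/slug convention; everything else
# (home page, static pages) is filtered out.
pattern = %r{^/\d{4}/\d{2}/\d{2}[^/]*}

paths = [
  '/2014/07/26/running-gulpjs-tasks-in-order',  # an article: matches
  '/',                                          # home page: no match
  '/about'                                      # static page: no match
]

popular = paths.select { |p| p.match?(pattern) }
# popular == ['/2014/07/26/running-gulpjs-tasks-in-order']
```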

Most Popular articles from Google Analytics API, Part 1


Adding a page that lists the most popular articles from your website using data from Google Analytics is a relatively straightforward process. Recently, I added this feature to this site. This article intends to walk you through how to set up your application to access data from the Google API.

Enabling API access

Before we are able to query the Google API, we will need to visit the Google Developers Console. Once there, click Create Project to create a new project that will be used to control the APIs you will interact with. After the project has been created, select apis under the apis & auth section in the left navigation. This will bring up a list of all the Google APIs that your project can access. For this feature, we are interested in activating the Analytics API, so go ahead and do that.

After you enable the Analytics API, select the Credentials link in the left navigation and then click Create New Client ID. Since we're calling the API on behalf of our application, create a new Service Account. After the service account is created, select Generate new P12 key and make note of the private key's password. This step will also download a key file to your computer. You'll need to copy this key file to a location that your application can access. To move on to the next step, you will need the following items. Make sure you have all of these before you move on:

1. The private key password
2. The p12 file
3. The service account's email address

Now, we need to enable the Google service account to access our analytics data. To do this, log in to Google Analytics and select the Admin tab. Under the View section, select User Management and add permissions for the Google service account email address to read and analyze your analytics data. To query the API, you will also need your Google profile ID, which can be found under View Settings in the Admin section.
Writing the Code

From the previous step, you should now have four items:

1. The private key password
2. The p12 file
3. The email address of the service account
4. The profile ID of your Google Analytics property

In this section, we will use the google-api-client Ruby gem to interact with the Analytics API and return a list of the most popular posts. Before we go any further, we'll need to add the google-api-client to our Gemfile:

gem 'google-api-client'

After we bundle our app, we can then start creating a class to interact with the API. This class needs to do a couple of things:

1. Initialize the Google::APIClient
2. Authorize the client using the keys and account information from the previous step
3. Discover the API
4. Execute the API method

To get started, let's initialize the Google APIClient. The constructor allows you to specify an application name and application version. To make this generic, let's allow our wrapper class to accept these parameters.

require 'google/api_client'

class AnalyticsAPI
  def initialize(app_name, app_version)
    @client = Google::APIClient.new(application_name: app_name,
                                    application_version: app_version)
  end

  private

  def key_file; '/path/to/key.p12'; end
  def key_password; 'privatekeypassword'; end
  def service_account; 'serviceaccount@email.address'; end
  def profile_id; '123456'; end
end

Now that we have a Google APIClient instantiated, let's build the authorize method. Authorization uses OAuth2 and is signed with the p12 key downloaded from the previous step. The method looks like this:

class AnalyticsAPI
  # previous methods omitted

  private

  def authorize!
    key = Google::APIClient::KeyUtils.load_from_pkcs12(key_file, key_password)
    # the object construction was truncated in the feed; Signet::OAuth2::Client
    # is what google-api-client uses for OAuth2 service accounts, and the
    # endpoint URL strings were stripped by the extraction
    @client.authorization = Signet::OAuth2::Client.new(
      token_credential_uri: '',
      audience: '',
      scope: '',
      issuer: service_ac[...]

Running gulpjs tasks in order


When I did the redesign of this site, I wanted to also do some things that had been on my mind for a while, including combining and minifying my javascript and css resources. By doing so, I hoped that pages would load even faster than they already did. As I looked at how best to do this, I saw that there were several tools available for the job.

One evening, I was part of a Twitter conversation between Scott Hanselman and Jason Denizac. During this conversation, I learned of a couple of tools called Grunt and GulpJS. These two tools are part of an ecosystem of tools called client-side task runners that run on the nodejs platform. At the time of this writing, GulpJS seems to be the new kid on the block. Its primary advantage is a much terser syntax and a stream-based processing model. Based on this, it was the one I chose for the build system for this site. There are many sites out there that describe exactly how to get Gulp, and this article will not dive into those.

Due to the asynchronous processing of the gulp file, some tasks would complete before others. This sometimes caused confusion and unexpected output. For example, the clean task might take longer to run than the compilation tasks. As a result, that task might incorrectly delete items after compilation.

There is a way to run gulpjs tasks in order -- using a plugin called run-sequence. You install the run-sequence plugin in the same way that you install any other gulp plugin: npm install run-sequence --save-dev. Once you install the plugin, you can define a task that has several task steps. Within this task, you can use the run-sequence plugin to define which tasks need to wait on others before completing. You can pass two kinds of parameters to the plugin: either a single task name or an array of several task names. When passing a single task name, that task must complete before the next task(s) begin. When passing several task names as an array, all the tasks in that array will run in parallel.
However, all the tasks in the array must complete before the next task starts. This allows the developer to be specific with task dependencies. As an example, an abbreviated version of the gulpfile used for this site is shown below.

var gulp = require('gulp');
var runSequence = require('run-sequence');

// several task definitions removed for brevity

gulp.task('default', function(callback) {
  runSequence('clean',
              ['lint', 'less', 'scripts', 'vendor'],
              'watch',
              callback);
});

In the 'default' gulp task defined above, the clean task will run first. After it completes, the lint, less, scripts, and vendor tasks all run in parallel. When they have all finished, the watch task runs.

Running gulpjs tasks in order was originally published by Matt Berther at Matt Berther on July 26, 2014.[...]

15 Styles of Distorted Thinking


I encountered this list some time ago and thought it would be great to repost it here as a reminder of how sometimes our thinking can be distorted.

Filtering: You take the negative details and magnify them while filtering out all positive aspects of a situation.

Polarized Thinking: Things are black or white, good or bad. You have to be perfect or you're a failure. There is no middle ground.

Overgeneralization: You come to a general conclusion based on a single incident or piece of evidence. If something bad happens once, you expect it to happen over and over again.

Mind Reading: Without their saying so, you know what people are feeling and why they act the way they do. In particular, you are able to divine how people are feeling toward you.

Catastrophizing: You expect disaster. You notice or hear about a problem and start "what ifs". What if tragedy strikes? What if it happens to you?

Personalization: Thinking that everything people do or say is some kind of reaction to you. You also compare yourself to others, trying to determine who's smarter, better looking, etc.

Control Fallacies: If you feel externally controlled, you see yourself as helpless, a victim of fate. The fallacy of internal control has you responsible for the pain and happiness of everyone around you.

Fallacy of Fairness: You feel resentful because you think you know what's fair but other people won't agree with you.

Blaming: You hold other people responsible for your pain, or take the other tack and blame yourself for every problem or reversal.

Should: You have a list of ironclad rules about how you and other people should act. People who break the rules anger you, and you feel guilty if you violate the rules.

Emotional Reasoning: You believe that what you feel must be true - automatically. If you feel stupid and boring, then you must be stupid and boring.

Fallacy of Change: You expect that other people will change to suit you if you just pressure or cajole them enough. You need to change people because your hope for happiness seems to depend entirely on them.

Global Labeling: You generalize one or two qualities into a negative global judgment.

Being Right: You are continually on trial to prove that your opinions and actions are correct. Being wrong is unthinkable and you will go to any length to demonstrate your rightness.

Heaven's Reward Fallacy: You expect all your sacrifice and self-denial to pay off, as if there were someone keeping score. You feel bitter when the reward doesn't come.

15 Styles of Distorted Thinking was originally published by Matt Berther at Matt Berther on July 25, 2014.

Finding all SQL Server databases for a login


Earlier today, I had a need to deactivate a SQL Server login. Before I did that, I wanted to find out which databases the user was allowed to access. Rather than opening each of the 35 databases on the SQL Server in SSMS and looking to see whether or not the login was a user in the database, I wanted to create a query that would do this all in one fell swoop for me.

I learned a little more about sp_MSforeachdb, which is a stored procedure that executes the parameter against every database on the SQL Server. There are several documented problems with this stored procedure, but for what I needed to do, it was a great way to get the job done.

The command I used was:

exec sp_MSforeachdb 'if (select count(*) from [?].sys.sysusers where
  name = ''usernametosearch'') > 0 select ''?'''

In the above command, you'll notice the '?', which is replaced with the database name on every iteration of the loop.

Again, not the right tool for every job... but it served well here.

Finding all SQL Server databases for a login was originally published by Matt Berther at Matt Berther on March 05, 2014.

Pushing large git repos with SSH


For various reasons, we have a MASSIVE (14gb) git repository that we work with. We have a clone of this repository out in the cloud behind an SSH server. Recently, when I would attempt to push the repository, I would end up with failures while compressing the objects.

$ git push aws master
Counting objects: 4456610, done.
Read from remote host my.gitserver: Connection reset by peer
fatal: The remote end hung up unexpectedly
Compressing objects: 100% (1984267/1984267), done.
fatal: sha1 file '' write error: Invalid argument
error: failed to push some refs to 'git@my.gitserver:repo.git'

On a hunch, I thought that the connection to the SSH server was timing out. I didn't know why the SSH connection would be opened before it was needed. However, the message seemed to indicate that some connection was being dropped by the server. To keep the SSH session alive so that the object compression could complete, I needed to update my ~/.ssh/config file and add an entry for my git remote.

Host my.gitserver
  ServerAliveInterval 60

ServerAliveInterval sets the number of seconds after which, if no data has been received from the server, a null packet is sent to keep the connection alive. The default setting is 0, which means that no keep-alive packets are sent. This setting can be combined with ServerAliveCountMax, which is the maximum number of ServerAlive messages that will be sent without response before the connection is terminated. The default value for ServerAliveCountMax is 3, which is good enough for what I need.
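Putting both options together, the complete entry in ~/.ssh/config would look like this (the host name is the placeholder from above, and ServerAliveCountMax 3 simply restates the default):

```
Host my.gitserver
  ServerAliveInterval 60
  ServerAliveCountMax 3
```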

Pushing large git repos with SSH was originally published by Matt Berther at Matt Berther on December 29, 2013.

Removing duplicate messages from Outlook


I recently learned that Outlook for Mac had been uploading multiple copies of the same message to the Exchange server. At final count, I had approximately 280,000 email messages sitting in my "Archive" folder on the server. As you can imagine, this caused tremendous download times for resynchronizing my folders.

I looked for tools that could purge the duplicates for me, but had a tough time getting most of them to work on Microsoft Outlook 2010. I set out to try and solve this problem by creating a simple C# app that would iterate through my archive folder and identify and remove duplicate items.

I chose to parse the messages in two different ways to make sure that I was able to remove as many duplicates as possible. The first scan removed every message that had a duplicate message id. The second scan removed every message that had the same sender email, subject, and sent time.
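The two scans can be sketched as a single generic routine. This is an illustrative Ruby version, not the original C# app (which is on github), and the message field names are assumptions:

```ruby
# Two-pass duplicate detection collapsed into one walk over the messages:
# drop anything with a previously seen message id, then drop anything with
# the same (sender, subject, sent time). Field names are assumed, not the
# Outlook object model the original C# app used.
def dedupe(messages)
  seen_ids  = {}
  seen_keys = {}
  messages.select do |m|
    id  = m[:message_id]
    key = [m[:sender], m[:subject], m[:sent]]
    next false if id && seen_ids[id]  # first scan: duplicate message id
    next false if seen_keys[key]      # second scan: same sender/subject/sent time
    seen_ids[id] = true if id
    seen_keys[key] = true
    true
  end
end
```

The first copy of each message survives; everything matching either criterion is dropped.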

This technique worked remarkably well. My archive folder now has less than 70,000 messages in it, which means that approximately 75% of the messages in that folder were deleted as duplicates.

I've made my source code available at github for anyone that is interested in using and/or forking the project.

Please keep in mind that there are no warranties with the code. It worked well for me; your mileage may vary.

Removing duplicate messages from Outlook was originally published by Matt Berther at Matt Berther on September 20, 2012.

Chrome extension for instapaper


I use Instapaper as my read later service. I have installed the Chrome add-on to allow me to quickly tag an article to read later. Also, I have configured it as my read later service in Tweetbot, which allows me to quickly send articles to it for later reading.

The one thing that has always bugged me about the instapaper website is that it does not open links in a new tab/window. To get around this, I set out to create a Chrome extension. This is what I came up with.

// ==UserScript==
// @name Instapaper New Windows
// @namespace
// @description Open Instapaper links in a new window
// @include*
// @require
// ==/UserScript==
(function() {
  function loadJQuery(callback) {
    var script = document.createElement("script");
    script.setAttribute("src", "");  // the jQuery URL was stripped by the feed extraction
    script.addEventListener('load', function() {
      var script = document.createElement("script");
      script.textContent = "(" + callback.toString() + ")();";
      document.body.appendChild(script);
    }, false);
    document.body.appendChild(script);
  }

  function main() {
    $("a.tableViewCellTitleLink").attr('target', '_blank');
  }

  loadJQuery(main);
})();

Copy the code above and save it to a location on your computer; I called mine instapaper. The latest versions of Google Chrome no longer allow you to add extensions from a third-party source (like your own computer) by simply clicking on the javascript file. To install the extension, open the extensions window in Chrome and then drag the file you created onto the window. Once the extension is activated, any links from your unread list in Instapaper will open in a new window.

Chrome extension for instapaper was originally published by Matt Berther at Matt Berther on September 14, 2012.[...]

Validating HABTM relationships with Rails 3.x


There comes a time, as you build up a rails application, that you end up using the has_and_belongs_to_many (HABTM) macro. This macro is an easy way to create a many-to-many relationship between two of your ActiveRecord models. In some cases, you may want to validate that association. However, the traditional methods for validating rails models do not work. The unit tests below describe how I wanted the relationship to function.

class ProjectTest < ActiveSupport::TestCase
  setup do
    @project = Project.new  # the original value was lost in the feed; Project.new assumed
  end

  test "may have many developers" do
    4.times { @project.developers << FactoryGirl.create(:developer) }
    assert @project.save  # the assertion's argument was lost in the feed; save assumed
  end

  test "must have at least one developer" do
    @project.save  # the save call appears to have been lost in the feed
    assert_equal 1, @project.errors.count
    assert_not_nil @project.errors[:developers]
  end
end

In my case, I was hoping to validate that each project had at least one developer associated with it. Initially, I coded my models to make the first test pass.

class Developer < ActiveRecord::Base
end

class Project < ActiveRecord::Base
  has_and_belongs_to_many :developers
end

To make the second test pass, I tried to implement a custom active record validator.

class Project < ActiveRecord::Base
  has_and_belongs_to_many :developers

  validate :minimum_number_of_developers

  private

  def minimum_number_of_developers
    errors.add(:developers, "must have at least one developer") if developers.count < 1
  end
end

This, however, does NOT work with HABTM relationships. The way these relationships work, the associated property is not available until after the record is saved. To get around this, we can validate as part of the after_save callback. Validating here and returning false from the callback will roll back the entire transaction.

class Project < ActiveRecord::Base
  has_and_belongs_to_many :developers

  after_save :validate_minimum_number_of_developers

  private

  def validate_minimum_number_of_developers
    if developers.count < 1
      errors.add(:developers, "must have at least one developer")
      return false
    end
  end
end

The test passes with the code above.
Validating HABTM relationships with Rails 3.x was originally published by Matt Berther at Matt Berther on September 09, 2012.

Fixed Position Footers


Posting mostly for my own reference...

One thing I find that I need to do a lot is position a footer bar across the bottom of the page. The most common way to do this is to set a fixed position on the element and anchor it to the bottom using this css:

#footer {
     width: 100%;
     position: fixed;
     bottom: 0;
     height: 75px;
}

Unfortunately, this doesn't work quite right in IE. When using this style definition in IE, the footer gets locked into a specific position in the viewport, and when you resize the window from the corner anchor, the footer does not move with the window.

The proper cross-browser way to declare a fixed position footer is to use negative margins, like this:

#footer {
     width: 100%;
     position: fixed;
     top: 100%;
     margin-top: -75px;
     height: 75px;
}

This appears to function properly in every browser I've looked at so far.

Fixed Position Footers was originally published by Matt Berther at Matt Berther on December 08, 2011.

The Software Behind the Site


Several times over the past months, I've received questions about the software and setup that I use to run the blog and related pages. Since the site recently underwent a dramatic change in tooling, I want to detail what I chose and why.

Before the change, the site was powered by wordpress. I had my own rackspace virtual server that I was using to host the Apache and MySQL servers required by wordpress. Wordpress is not a bad piece of software, but what I realized is that it tries very hard to be all things to all people.

Enter the blog engine I'm using now: toto.

Toto's philosophy is akin to mine: use the best tool for the job. Toto's website states that "everything that can be done better with another tool should be". To that end, toto doesn't use complicated web frameworks. It doesn't use a database. There's no built-in commenting support. If you want comments, you'll use disqus. It also relies on git for version control. When you combine toto with a Heroku account, you also use git to deploy the site. It's designed to be used with a proxy cache for high availability and fast response times.

Migrating the posts from the mysql database to the text files proved to be relatively straightforward, using a simple ruby script to iterate over the rows and format a text file that matched toto's expectations. Importing the comments into the disqus platform was equally straightforward using their JSON API.
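As a rough illustration of that conversion step, the sketch below formats a post row into a toto-style article file (a YAML header, a blank line, then the body). The field names, date format, and literal `posts` array are assumptions for illustration; the real script read its rows from the MySQL database.

```ruby
require "yaml"
require "date"

# Hypothetical rows as they might come back from the wordpress tables.
posts = [
  { title: "Hello World", date: Date.new(2011, 12, 6), body: "First post." }
]

# toto-style article text: YAML header, blank line, raw body.
def to_toto_article(post)
  header = { "title" => post[:title], "date" => post[:date].strftime("%d/%m/%Y") }
  "#{header.to_yaml}\n#{post[:body]}\n"
end

posts.each do |post|
  slug = post[:title].downcase.gsub(/[^a-z0-9]+/, "-")
  filename = "#{post[:date].strftime('%Y-%m-%d')}-#{slug}.txt"
  puts filename # => 2011-12-06-hello-world.txt
  # File.write("articles/#{filename}", to_toto_article(post))
end
```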

I much prefer the simplicity of this new setup. The entire blog engine weighs in at about 300 lines of code. If running a blog without putting your hands on the metal and being able to control every nuance of your blog platform appeals to you, then certainly take a look at the toto/heroku combination. If not, then I believe that wordpress is a fine solution.

The Software Behind the Site was originally published by Matt Berther at Matt Berther on December 06, 2011.

"Gitting" TFS out of your way


More and more, I have developed a passion for the Git source control system. I love how Git stays out of my way until I need to use it. Git offers a very easy way to test things out, whilst utilizing the benefits of source control. In the traditional, connected model of source control, experimentation proves to be somewhat difficult because you don't want to corrupt your main development line. Branching and merging with most other systems is a nightmare at best. With Git, branches are very cheap and merging is virtually painless.

I use Git for side projects as well as a few of the open source projects I am a part of. However, during the day, my organization uses Microsoft's Team Foundation Server for source control. I find that in a connected source control model, I am constantly waiting on TFS to catch up. Either it's out to lunch, or it needs to download and check out a file before I can edit it. Experimentation is tough, for the reasons I mentioned earlier. Surely, there has to be something better, while still keeping TFS in the organization.

Enter git-tfs... Git-tfs is a two-way bridge between Git and TFS created by Matt Burke. It allows me to clone a TFS repository and use Git for source control (without the hassles of being tied to a TFS server). This means: 1) no more waiting on TFS, 2) no more locked files, 3) no more requiring network connectivity to work on a project. When I believe a development effort is ready to be committed to the main line, I use git-tfs to push my changes to the TFS server so that the rest of the team can get them.

Sound interesting? Here's how to get started:

First things first: you need to have a Git installation on your machine. I use msysGit 1.7.6 preview 20110708, but this should work with newer versions as well. I used the default installation options; specifically, I chose to use the Git Bash only when prompted about how to use Git from the command line. I chose this option because it did not otherwise modify my system.

Once you have a Git installation, you need to install the git-tfs plugin, which is available on github. Installing the plugin is pretty straightforward. You can choose to either download it or build it yourself. I went with the download option and extracted the zip file to c:\git-tfs. Once the files are in place, git has to know how to find them. The easiest way to do this is to add c:\git-tfs to your path (Advanced System Settings | Environment Variables).

At this point, you are ready to clone a TFS repository using git-tfs. You can do this one of two ways:

git tfs clone http://tfs:8080/ $/TeamProject/folder

or

git tfs quick-clone http://tfs:8080/ $/TeamProject/folder

At this point, you are ready to open your project. If you're using Visual Studio (which you likely are), you may still have the TFS source control bindings in place. There are several ways to handle this. You could 1) disconnect your network cable while starting the project, 2) update all of the projects to remove the source control bindings, or 3) install GoOffline. Option one is obviously less than desirable. Option two is reasonable; however, we are assuming that others on the team continue to use TFS and need to have the bindings in place. That leaves the GoOffline solution. GoOffline is a free Visual Studio add-in that adds a Go Offline button to the Source Control menu. When the solution is in offline mode, any file renames or moves happen without communicating with the TFS server. Perfect!

I prefer to work in feature branches (sometimes also referred to as topic branches). Aga[...]

Goals and Goal Setting


In the book What They Don't Teach You at Harvard Business School, Mark McCormack discusses the importance of setting goals. The students in the 1979 Harvard MBA program were surveyed and asked, "Have you set clear, written goals for your future and made plans to accomplish them?" Only three percent of the graduates had written goals and plans; 13 percent had goals, but they were not in writing; and 84 percent had no specific goals at all. Ten years later, the members of the class were surveyed again. The 13 percent of the class that had goals were earning, on average, twice as much as the 84 percent who had no goals at all. However, the three percent that had clear, written goals were earning, on average, ten times as much as the other 97 percent combined.

For many of us, during this time of year, we begin to look at performance reviews for our team members. An important part of a good performance review is a set of effective goals for the coming year. I am certainly not a management expert, and most of the ideas that I will share with you today are not my own. However, I do see value in each of them. For me, these ideas make the goal setting process easier. I certainly hope that they can do the same for you.

Top Tips:

Strong relationships with directs are paramount. Many management leaders propose that having weekly one-on-ones is the single most important way to cultivate a positive relationship with your direct reports. A 10/10/10 agenda works well:

  1. 10 minutes for the direct to discuss whatever they want
  2. 10 minutes for you to discuss whatever you would like with the direct
  3. 10 minutes to discuss business and progress toward goals

Create MT goals (measurable and timely) and use SMART to validate them. Examples:

- By April 30, 2011, consolidate the presentation models used by the company's applications.
- Integrate the new ERP system with the CRM system by October 31, 2010.

Create goals that intersect business and individual objectives. Examples:

- By April 30, 2011, combine multiple automated test frameworks into a single framework and increase coverage of automated tests by 20% using the consolidated framework.
- Complete Microsoft developer certification 70-515 by April 30, 2011.

Emphasize cross-team collaboration. Examples:

- Participate in a mentorship program to grow understanding of our organization's business by April 30, 2011.
- Participate in a mentoring program to enhance leadership and communication skills by December 31, 2009.

Create personal goals. Examples:

- Run a 5K race in less than 32 minutes by April 30, 2010.
- Receive a belt promotion in my chosen martial art by December 31, 2011.

Be flexible.

"I'm much more likely to accomplish something if I write my goals down and share my goals with others. Recruiting the support of other people also helps keep me on track."

"What was/is valuable to me is taking into account what I want to do when setting goals. That results in my engagement right from the start."

"I think the addition of a personal goal is a great idea. It's good for promoting personal improvement and makes the goal setting process less painful."

I have used this model for a number of years. The quotes above are a few pieces of feedback that I received from my team regarding these ideas.

Goals and Goal Setting was originally published by Matt Berther at Matt Berther on November 27, 2011.

Cygwin - unable to remap to same address as parent


I find the windows command prompt somewhat limiting and have never really been able to make the leap to Powershell. Personally, I like to use a Cygwin shell for command line work. I am comfortable in a unix shell with previous Linux and Mac experience. I'm not an expert by any means, but I can get by.

Recently, I installed the Cygwin shell on my 64-bit Windows 7 system. After installing the developer tools and trying to compile ruby 1.9.2, I found that I had a tremendous number of problems building. I'd see numerous errors that stated:

unable to remap file to same address as parent ruby ### fork: child XXX - died waiting for dll

The most commonly referenced solution was to rebase by doing the following:

  1. Start -> Run
  2. cmd.exe
  3. cd c:\path\to\cygwin\bin\
  4. ash.exe
  5. ./bin/rebaseall
  6. Reboot

However, this did not work for me. I found a message on the cygwin mailing list stating that the gems may not add information to the proper paths and that we must create the list manually.

To do this, first from your cygwin shell do:

find /lib/ruby/gems -name '*.so' > /tmp/

Then, follow the steps above, replacing step five with:

./bin/rebaseall -T /tmp/

Since performing this, I have not seen any more of the remap file errors. Your mileage may vary though.

Cygwin - unable to remap to same address as parent was originally published by Matt Berther at Matt Berther on November 22, 2011.

Hidden Gems in Code


This gem was spotted in some code that my team was working on today. It made us chuckle anyway. :)

function GetAgeOptions(localization) {
  var ageArray = new Array();
  ageArray[0] = "Nick Rocks";
  ageArray[1] = "Ben Rocks too";

  return ageArray;
}

Hidden Gems in Code was originally published by Matt Berther at Matt Berther on January 26, 2011.