Cymen Vig

Software Craftsman

Day 57: APIs and ruby gems

I spent the day expanding the API of an internal tool to support additional functionality required by another internal tool. This was interesting because the API is wrapped by a Ruby gem created with Jeweler. Ruby gems are small, versioned packages of code. Ruby on Rails is itself a gem, and the typical Rails project relies on additional gems. I’d never created a gem before, so it was an interesting experience.

Comments and Reactions

Easy CDN: Making use of Amazon CloudFront

I’ve had a few issues with the micro EC2 instance that hosts this blog on Amazon Web Services. I’ve adjusted some database and HTTP server settings to keep the server from running out of memory (it’s never a good thing when the Linux OOM, or out-of-memory, killer has to pick a process to kill). While working on that issue, I discovered how easy it is to use the Amazon CloudFront CDN, or content delivery network.

The purpose of a CDN is to offload the hosting of static files like images, JavaScript, and CSS to a provider with geographically distributed servers close to your audience. These static files can then be served from the CDN, which means your web server doesn’t have to handle all the requests for them. This is a big benefit because a typical HTTP server is configured to respond to any request type: even if all you’re asking for is a simple static image, the server process responding to your request is ready to parse dynamic content, connect to a database, or do some other computationally expensive and, more importantly, memory-intensive work.

So the CDN servers handle all the static requests and your web servers handle the requests that actually require computation. Another benefit is that browsers typically limit how many simultaneous requests they make to a single host. With the static assets on the CDN, the browser connects to a different host for those assets, which means more requests in flight at the same time and thus a faster total response time.

What do you need to make use of a CDN? Three things: a CDN, a way for the CDN to get the assets, and the asset URLs in your served content rewritten to point at the CDN.

Amazon CloudFront can work in two ways: it can serve items from a simple storage (S3) bucket that you are responsible for loading, or it can act as a proxy to your web server. I chose the latter option, so the CDN is a proxy for my web server. I then used a plugin for this blog that rewrites the asset URLs to point at the CDN.
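The rewriting step amounts to swapping the host portion of each static asset URL for the CloudFront distribution’s domain. A minimal sketch of the idea (the distribution domain and function name here are made-up placeholders, not what the plugin actually uses):

```javascript
// Sketch of the asset-URL rewriting step. The CloudFront domain below is a
// hypothetical placeholder, not a real distribution.
const CDN_HOST = 'd1234abcd.cloudfront.net';

// Rewrite an absolute asset URL so the browser fetches it from the CDN
// instead of the origin web server.
function toCdnUrl(url) {
  return url.replace(/^https?:\/\/[^/]+/, 'https://' + CDN_HOST);
}
```

In proxy (custom origin) mode, a request for the rewritten URL that misses the CDN cache is fetched from the origin server and then cached at the edge.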

So what looks different in the end? To the average user, nothing, except perhaps the site is a bit faster (and your page rank may go up a bit on Google due to the faster response). If you dig into what is being loaded, you’ll see that the static asset URLs now point at the CloudFront domain instead of this site’s domain.

Amazon CloudFront pricing is inexpensive enough to experiment with, so if you’re curious it is well worth trying. The biggest issue for a legacy site is modifying the static file URLs to use the CDN; however, it isn’t all or nothing: you could start with the biggest static resources and begin reducing server load and improving your site’s performance within minutes.


Day 55 and 56: Wrapping up iteration for internal projects, presenting iteration

Wrapping up iteration for internal projects

Today I spent my time wrapping up some stories for the internal projects. We had a little extra time this iteration as the iteration planning meeting, where one iteration is ended and another kicked off, got pushed back a day to Friday.

Presenting iteration

I was the presenter at our IPM on Friday. I led the review of the stories we completed and presented the client with a sign-off sheet for the completed iteration. The client then added a number of additional stories. There is a lot of complexity involved in getting to the end of what the application is supposed to do: the client has some stories for how the final application is supposed to look, but each iteration brings up new stories that sit between the current state of the project and the desired end functionality. As a team, we estimated each story, and then the client picked the stories they wanted us to work on for the next iteration.

We tried to break down any story longer than half the iteration. This is a good thing to do for a number of reasons: we get better estimates when the scope is smaller, and a completed story can be billed for, so it is better to complete 1 of 2 smaller stories and get paid for that one than to complete 0 of 1 large story.

It was an interesting experience leading the handoff. We didn’t have time to practice this week but it went well. I also didn’t try to be the only face of the team, and I didn’t get in the way when others wanted to speak. It seemed to flow well and I’m happy with how it went.


Day 54: Backbone routers, Backbone collections...

Backbone router

We’re getting closer to the end of the iteration for the internal projects and we need to bring the design changes Chris is making into the projects. Our pattern up to now has been to have everything in one window, with any interaction (editing, creating) taking place in modal dialog boxes. That worked up to a point, but now our applications are becoming more complex. So I began to look into Backbone.js routers and to think about how to refactor our modal approach into one that still does edit/create in a modal but navigates a hierarchy like a normal website, where the page content is replaced and the back button works. It turns out to be fairly simple and that is progressing well.
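At its core, what a Backbone router provides is a mapping from URL fragments like projects/2 onto handler parameters, with the browser’s back button working because each state has its own fragment. A library-free sketch of that matching idea (this is an illustration of the concept, not Backbone’s actual implementation):

```javascript
// Sketch of Backbone-style route matching: turn a pattern like
// 'projects/:id' into a matcher that extracts named parameters.
function makeRoute(pattern) {
  const names = [];
  // Replace each ':name' segment with a capture group, remembering its name.
  const source = pattern.replace(/:(\w+)/g, (match, name) => {
    names.push(name);
    return '([^/]+)';
  });
  const regex = new RegExp('^' + source + '$');
  return function match(fragment) {
    const m = regex.exec(fragment);
    if (m === null) return null; // fragment doesn't match this route
    const params = {};
    names.forEach((name, i) => { params[name] = m[i + 1]; });
    return params;
  };
}
```

In Backbone, the router pairs each matched route with a handler that swaps in the appropriate view, which is what replaces our modal-only navigation.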

Backbone collections

We started out using Backbone-Relational. The parent-child relation with this looks like:

parent.get('child')  // get a collection containing the children
child.get('parent')  // get the parent of the child

This works okay, however I’m starting to wonder if it would be better to have one collection per model type. So all of the parents would be in one collection and all of the children would be in another.
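The flat-collection alternative can be sketched with plain objects standing in for Backbone models (the field names and data here are hypothetical):

```javascript
// Sketch of flat collections: one collection per model type, related by id,
// instead of children nested under each parent.
const parents = [{ id: 1, name: 'Parent One' }];
const children = [
  { id: 10, parentId: 1, name: 'Child A' },
  { id: 11, parentId: 1, name: 'Child B' },
];

// Look up a parent's children by filtering the flat children collection.
function childrenOf(parentId) {
  return children.filter(c => c.parentId === parentId);
}

// Look up a child's parent in the flat parents collection.
function parentOf(child) {
  return parents.find(p => p.id === child.parentId);
}
```

The trade-off is that lookups become explicit filters rather than pre-wired relations, but each model type lives in exactly one place.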


Day 53: Pairing

Mark and I paired on the internal projects today. We made some progress, however it was slow going. Part of that was that we used Mark’s laptop and he is currently using emacs. The problem is I’m not very familiar with emacs, and he only started using it a couple of weeks ago, so the answers to questions like “how do you delete the whole line?” or how to quickly navigate the filesystem (as with NerdTree in vim) are not at hand right away. The most common editor at 8th Light, from an apprentice perspective, is vim with NerdTree and line numbering enabled. This is a good basic starting point that most seem to be familiar with. Some craftsmen use GUI editors over vim as an overall preference, and some use GUI editors when appropriate (working on Android, .NET, Java, etc.) but otherwise use vim.

Of course it is good to get familiar with new tools however it can be frustrating when that is not what one is attempting to focus on. But we kept going forward. At one point, we almost switched to vim but in the end it turned out to not be necessary.

What I realized at the end of the day is that the added “grit” of an unfamiliar editor is a burden. Pairing is draining. There are two minds at the keyboard focused on the task, and both do not always want to go the same way. This is not a way of saying that working with Mark was hard – it was not at all! It was a pleasure to do so. However, working with another person is harder than working by oneself. Micah mentioned at one of the iteration planning meetings for the internal projects that pairing ultimately produces better code. I certainly think that is the case, however it definitely takes some time to get used to pairing (and to figure out the patterns of being a good pair).


Day 52: More work on internal project split

I spent part of the morning writing some CRUD code for an interactor. If you recall, an interactor is something a controller will talk to in order to retrieve data from the data store. The purpose of it is to move your application logic into the interactor instead of having it in the controller or the presentation models.

Then I figured out how to set up Capistrano, as Vlad is painful. Maybe it’s just the way we’re using it, but it just wasn’t very clear. It was fairly straightforward to get Capistrano working, including having it run the database migrations.

After that, I finished the configuration of a wildcard SSL certificate for *. We now have a valid certificate we can use for SSL. It was interesting to see the whole process, from the certificate signing request through to adding the certificate and the intermediate certificate to the web server configuration.

Finally, I finished the split of the internal application into two applications by migrating the data to one of the new ones. Our database migration scripts didn’t handle data properly so that is one area we have to make sure we pay attention to now that we have actual production systems with production data.


An example of using jQuery UI Sortable on a Backbone.js collection view with item views

Tonight while browsing Stack Overflow I came across a post asking something relevant to what I’d been working on recently: Saving jQuery UI Sortable’s order to a Backbone.js Collection.

I posted an answer with a link to a jsfiddle that showed the approach I used to tie the two together: have the item view listen for an event triggered by jQuery UI Sortable, and when that event fires, have the item view trigger another event, including its model, which the collection view listens for. The collection view can then be responsible for updating the view and updating the server. The solution shown in the jsfiddle could be refactored to have the collection, instead of the collection view, be responsible for updating the server.
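The event chain can be sketched with a minimal emitter standing in for Backbone’s event system (all names here are hypothetical, and the jQuery UI sort event is simulated by a plain function call):

```javascript
// Minimal event emitter standing in for Backbone.Events (sketch only).
class Emitter {
  constructor() { this.handlers = {}; }
  on(event, fn) { (this.handlers[event] = this.handlers[event] || []).push(fn); }
  trigger(event, ...args) { (this.handlers[event] || []).forEach(fn => fn(...args)); }
}

const collectionView = new Emitter();
const savedOrder = [];

// Collection view: listens for reordered items. This is where it would
// update the collection's order and persist it to the server.
collectionView.on('reorder', model => savedOrder.push(model.id));

// Item view: hears the jQuery UI Sortable stop event (simulated here) and
// re-triggers it upward with its own model attached.
function onSortableStop(model) {
  collectionView.trigger('reorder', model);
}

onSortableStop({ id: 'b' });
onSortableStop({ id: 'a' });
```

The point of the indirection is that the item view knows its model, the collection view knows the server, and neither needs to know the other’s internals.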

This seems like a roundabout way of doing things, however I like the clarity provided by an event-based approach. I’m curious to see how others approach something like this, so hopefully the question will receive a couple more answers.


Day 51: More pairing on Backbone.js, Wai Lee presents his challenge at 8th Light University

More pairing on Backbone.js

Mark and I paired in the morning on the internal project that utilizes Backbone.js. The project split into two, and we worked on renaming from name A to name B in source code as well as file and directory names. We used some bash scripting along with find, xargs, rename and sed to make the change. We were able to verify success by running our tests.

We also worked some more on Backbone.js views. When rendering a collection of items it can occasionally be confusing as the child item may need to append to DOM that is not yet attached to the document DOM. I think we’re both starting to understand how Backbone.js works internally which is proving to be useful. The source code to Backbone.js is available in annotated format which is quite useful.

Wai Lee presents his challenge at 8th Light University

Wai Lee presented fountain codes at 8th Light University. Fountain codes can be used as a form of error correction on binary streams: with, say, 5% overhead one can recover missing portions of the stream. The scheme is typically used in digital video transmission, so there is a good chance it is used with ATSC, the over-the-air HDTV standard in the United States. With this form of error correction, slight interference that corrupts a small portion of the binary stream carrying the encoded video can be corrected. This is likely also why digital television tends to be either a good picture or basically unwatchable. Analog television broadcasting degraded more gracefully into the white static most of us are familiar with today, but future generations will only see it if a device happens to use a recorded loop of static to show an error condition.

I also wondered during the presentation whether Parchive uses fountain codes. According to Wikipedia, version 1 uses Reed-Solomon error correction, but it doesn’t mention what more recent versions use.


Working at an Internet Service Provider

While completing my undergraduate degree in Computer Science at the University of Wisconsin at Madison, I worked for a regional Internet Service Provider. I needed a job during the summers to help pay for school and it seemed like an interesting option. My prior experience with IT was setting up small networks, Linux servers, and Lotus Notes. I’d also learned how to create websites using LAMP (Linux, Apache, MySQL and PHP) and to develop Lotus Notes applications.

The prior summer I’d worked for a different company testing all their legacy applications to ensure they would run on Windows. These were DOS applications, and I remember some were written in Clipper. The problem was that some had timing loops calibrated on start-up, and the CPUs they were running on were too fast, so the time measured between point A and point B was 0. Can you guess where this is going? That time delta was used as a divisor elsewhere in the program, and when it was 0 the program crashed with a division-by-zero error. I figured out a hack to get these programs to work: run a program that hogged the CPU so the DOS application ran slower and measured a non-zero timing delta. In some cases the source code for the application had been lost. It was an interesting experience, but working at an ISP sounded like a good change of pace.

I started out working the help desk at the ISP. This was interesting… About two months in I had my first experience with someone so irate that they verbally berated me over the phone and would not listen to reason. Thankfully a coworker walked over and pressed the hangup button after I’d repeatedly asked the caller to calm down. Welcome to first level technical support!

I moved up the ranks to work on the network operations side which worked with business customers and monitored an incident queue that included automated alerts from production environments. We monitored everything. Some customers only wanted to pay us for power, cooling and rack space. Some customers wanted managed services which included having our company keep their servers patched and up to date and respond to any other issues (hardware or software). I became exposed to a wide range of hardware and software. My troubleshooting skills improved quite a bit as I needed to quickly determine what was wrong when an alert came in and then pass it off to the appropriate technology team.

After I completed my degree at UW-Madison I returned to the Internet Service Provider to work on the team responsible for developing and maintaining internal tools. I was one of two people responsible for the daily operation of a monitoring system that watched everything, including remote customer environments. There were thousands of servers, network devices and applications that we collected samples from every 5 minutes and stored in a database in order to generate reports. The same probes responsible for collecting the samples checked for faults more frequently – typically every 30 or 60 seconds. When an alarm condition was met, the message would flow through the monitoring system; if it wasn’t matched by a filter, it would be emitted to a queue displayed to those working in the network operations center.

So that is a brief overview of what it was like to work at an Internet Service Provider that had started to focus more on providing managed services. Working in the network operations center (NOC) provided constant challenges every day. If one worked nights (6pm to 6am, which I did for a couple of months), there was the additional excitement of trying to determine whether floods of alerts were due to system maintenance or an actual issue. Working on the monitoring and internal tools team was a relief after a couple of years of experience in the NOC. Making tools that made my coworkers’ jobs easier was satisfying, and ultimately that pushed me onwards to focus on software development.


An experience with Amazon EC2

This evening I decided to update the Linux installation on the Amazon EC2 instance that hosts this blog. The machine image in use was from Amazon (an AMI, or Amazon Machine Image) and was fairly old. Somehow the upgrade failed and I was left with an unbootable server. A complete outage.

Because my filesystem was on an elastic block store (EBS) I was able to create a new server using the most recent Amazon AMI and then attach the old EBS to the new server and mount it. I migrated the database files without issue and then the web server configuration, files, and logs.

This time around I created an elastic IP which gives me a static IP that I can point at an Amazon EC2 instance. This means I can create another EC2 instance and change where the elastic IP points without having to update DNS! I then updated my DNS records to point to the elastic IP and the site was operational once again.

I was initially somewhat dismayed that the upgrade went wrong. However, it was an interesting experience. If one has to do something like this with physical servers it can take a day. With Amazon Web Services it took roughly an hour and everything was back up and running. There are a whole host of issues with virtual servers, but there are certainly wonderful benefits to be had too.


Day 49 and 50: Internal project goes plural and pairing

Internal project goes plural

The apprentice team is now working on multiple internal projects. The two were split off of one. I think the whole team was happy with this outcome.

Pairing

I’ve been pairing quite a bit lately and it has been enjoyable. I’ve now paired with Ben, Mark and Sue. My first pairing was with Sue, and we worked on test-driven development of JavaScript using jQuery UI Dialog and Sortable. It was very satisfying to see our tests pass and the code work. However, it became clear that developing this way was going to be very slow, and without a good guide on how to structure our use of JavaScript, the ad hoc approach was going to cause pain later on. The testing itself was also close to the DOM.

I paired with Ben and Mark to work on using Backbone.js. Now we had more structure. The way Backbone uses models and collections makes sense. The place we’ve occasionally struggled, and had to think hard about what we were doing, is views, and views within views. Mark and I created a reusable view that encompasses two child views: detail and edit. The DetailEdit view accepts a model and two view types, creates the child views, and handles switching back and forth between the two. This seemed quite clean, however I am going to research views to see what other patterns are out there that might increase our productivity and avoid the event binding issues that can come up with Backbone views.
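The DetailEdit idea can be sketched without Backbone itself; the factory functions and names here are hypothetical stand-ins for the real view classes:

```javascript
// Sketch of a DetailEdit-style wrapper view (names hypothetical). It owns
// one model and two child view factories, renders whichever child is
// active, and toggles between detail and edit modes.
function DetailEdit(model, makeDetailView, makeEditView) {
  const views = { detail: makeDetailView(model), edit: makeEditView(model) };
  let mode = 'detail';
  return {
    render() { return views[mode].render(); },
    toggle() { mode = mode === 'detail' ? 'edit' : 'detail'; },
  };
}

// Usage with trivial stand-in child views that render to strings.
const view = DetailEdit(
  { name: 'Widget' },
  m => ({ render: () => 'detail: ' + m.name }),
  m => ({ render: () => 'edit: ' + m.name })
);
```

The wrapper owns the mode switch, so the child views stay ignorant of each other, which is what made the pattern feel clean.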

The test-driven development of JavaScript code that utilizes Backbone is very nice! Because Backbone is responsible for the DOM, we don’t typically have to get down to that level unless our views interact with the DOM directly (for example, reading values from the DOM when an edit view is submitted). Even in those cases it is straightforward to put the DOM interaction into a single method that can be mocked. The one place we’ve had issues test-driving is views tied to the DOM. An example today was a view bound to a DOM element with the ‘el’ option. In this case, “el: ‘.project’” failed because it was looking for DOM that hadn’t actually been rendered yet. In other words, we had to fix where the view was being rendered into the DOM before we could test drive it, as it was going off into space as far as our testing was concerned. Again, more research and thought needs to go into how we are creating and using views.


Day 48: Internal project work and pairing

The team working on the internal project was humming along today with two pairs going. It was hard to split up the work but we seemed to do a fairly good job of it. We’re all a little concerned at the growing complexity of the stories however we are making forward progress.

Ben and I extracted some basic CRUD methods shared by interactors into a base class. This cleaned things up quite a bit. I’d like to do the same with the mock interactors used in testing our controllers.

The afternoon was spent working on a new controller and hooking it up to Backbone with a new collection, model and a bunch of views. Developing Backbone with the test-driven approach worked very well. I am happy that the tests are serving to accurately document how Backbone is being used. It can be confusing at first as there are quite a lot of small pieces that interact together to create a view.
