Hosting with GitHub Pages
Mar 28, 2012

It occurred to me that having transitioned my blog to be a generated static site, I was paying Rackspace ~$11 to simply host static files. (And for me to have the pleasure of managing a server in the process!) This struck me as stupid, for what are probably pretty obvious reasons. So, I've made another change to my blog hosting. I'm now using GitHub Pages. Thanks to DNSimple it was drop-dead easy to change my DNS settings appropriately as well.

I've still got one more site hosted on my Rackspace CloudServer that I need to figure out what to do with. It's basically just an image server that I'm holding onto for historical reasons (there are still various sites out there linking to it), so I may just throw that on GitHub Pages as well (it's less than 40 MB of images, so I don't feel too bad about it), though I would much rather have a solution where I could point the domain name at my Dropbox public folder for that sort of thing. Anyone know of a way to do that? I can't just CNAME the domain to dl.dropbox.com, because that would break the existing links; instead, I need something to rewrite each URL to include the path to my public folder. That would be super easy to do with Nginx, but it would require me to run and manage a server, and the whole point of all this is to not have to do such things any more. Hopefully I'll be able to come up with a solution...

For all my non-static sites (including whatever personal and side projects I might work on), I've been really, really happy with Heroku. If you haven't used the service, I highly recommend checking it out. Hopefully I'll find the time to blog about it in more depth in the future. Suffice it to say, I will be attempting to use Heroku for pretty much every project I work on. Undoubtedly there will be some cases where it becomes necessary to manage (virtual) servers on Rackspace or EC2, but I think that's quickly going to become a small minority of cases.

Anyway, the serving landscape has been changing with remarkable speed lately, and I'm sure it will only continue to do so. The net result is that my life is much easier, and my hosting costs much lower. Vive la (hosting) révolution!

Now in syndication!
Feb 26, 2012

Well, I should now have a working RSS feed again. Unfortunately, I don't actually know for sure whether or not that's the case, due to some complications with FeedBurner. I started using FeedBurner for this blog in 2006, well before they were bought by Google, and never really got around to doing whatever was required when that acquisition happened. As a result, my account was never properly transitioned. I attempted to do that the other day, but it would appear that not only do I not know my FeedBurner password, I don't even know my username or the email address I used to sign up. Everything I tried was rejected. So I have no way of changing my settings, leaving me with a choice between changing the URL of my RSS feed (an annoying solution for my subscribers) and attempting to rebuild my new, RabbitFish-based feed in place to keep the FeedBurner feed going. I've attempted the second option, and I suppose we'll know shortly how well that worked. Looking at my server logs, it appears that FeedBurner updates roughly every 30 minutes, so it shouldn't take too long to find out whether or not the new feed is working properly. I guess we'll see in about 30 minutes!

This blog is now powered by RabbitFish!
Feb 10, 2012

A few months ago I made some major changes to this blog. One of the obvious changes was the new design, based on Twitter Bootstrap, but in addition to that I also converted the blog from a Django site to a static site. At that point, all I actually did was scrape the site to static files using wget and throw them up on the server to be hosted with Nginx, but I've now taken that a step further.

For the past three months I've been working on a personal project named RabbitFish. It's a static site generator built on Python 3 that uses Jinja2 templates and YAML for storing configuration and content data. This blog is now powered by RabbitFish.
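
To give a rough idea of what that means in practice, here's a minimal sketch of the YAML-plus-Jinja2 pattern. It's only an illustration of the general approach, not RabbitFish's actual code, and the file names and page structure are made up:

    # Minimal sketch of the YAML + Jinja2 approach; not RabbitFish's actual
    # code. File names and the page structure are hypothetical.
    from pathlib import Path

    import yaml                                      # PyYAML
    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader("templates"))

    # Each page is described by a YAML file holding its metadata and content.
    page = yaml.safe_load(Path("content/post.yaml").read_text())

    # The named template decides how that data becomes static HTML.
    template = env.get_template(page.get("template", "post.html"))
    Path("output/post.html").write_text(template.render(**page))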

Largely because of the technical constraints of RabbitFish's still very immature state, I've also made a few changes to the structure of the blog. Probably the most obvious is that I've gotten rid of the tag cloud that used to be in the right-hand sidebar. In fact, I've removed the tag system altogether. You'll notice that all the old posts, and this one as well, still have tags, but rather than being part of an actual tagging system they're now basically just shortcuts for searching the blog; click on one and it simply takes you to the Google Custom Search for my blog with that tag filled in as the query. Additionally, I've removed pretty much every other way of navigating around the blog. Taking a page from Apple, I've decided to rely solely on search for finding blog posts. To ensure that all the posts get properly indexed, I've also added a page that simply lists every blog post. (It's also linked to at the bottom of the index page.) Most of my traffic was organic search hits anyway, so I don't really see this causing any problems.

The one thing that I still have left to do is get syndication set up. Currently, the feed for everything prior to this post is still available thanks to FeedBurner, but I still need to set up a new, live feed to get new posts in there. Fortunately, that will be as easy as setting up a new ListPage in RabbitFish, and writing a template to build the appropriate XML file. I'll probably get that done this weekend.
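
For the curious, the XML side of that isn't much work. Something along these lines, rendered once over the list of posts, would produce a valid RSS 2.0 document. This is just a hand-rolled sketch with placeholder feed metadata and field names, not my real template or RabbitFish's actual ListPage setup:

    # Rough sketch of rendering an RSS 2.0 feed from a list of posts with
    # Jinja2. Feed metadata and post fields are placeholders.
    from jinja2 import Template

    RSS_TEMPLATE = Template("""<?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>{{ title }}</title>
        <link>{{ link }}</link>
        <description>{{ description }}</description>
        {%- for post in posts %}
        <item>
          <title>{{ post.title }}</title>
          <link>{{ post.url }}</link>
          <pubDate>{{ post.date }}</pubDate>
          <description>{{ post.summary }}</description>
        </item>
        {%- endfor %}
      </channel>
    </rss>
    """)

    xml = RSS_TEMPLATE.render(
        title="Example Blog",
        link="http://example.com/",
        description="Example feed",
        posts=[],  # the real call would pass the list of post dicts
    )
    # The rendered string would then be written out as the feed's XML file.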

Since I was already using Disqus for comments, all the old comments are already here and new comments shouldn't be a problem. I'm really interested to hear what people think about RabbitFish, as well as about the minimal interface I'm exposing for the blog now.

RabbitFish: https://github.com/joshourisman/rabbitfish

Adamanteus 0.5 released, now with more PostgreSQL!
Jun 05, 2010

I've now finally gotten around to adding PostgreSQL support to Adamanteus! It wasn't really difficult, just hard to find time for with everything that's been going on lately (quite busy at work and with the new house). The usage of Adamanteus hasn't changed at all; you can now specify 'postgres' as your backend in addition to 'mongodb' or 'mysql'. There is one slight caveat, however: the pg_dump utility does not allow you to provide a password non-interactively. For the time being, at least, that means that if you're using Adamanteus to back up a PostgreSQL database you can't specify a password (it will throw an exception if you do). The solution is either 1) to set up a read-only, passwordless user for running backups, or 2) to set up a .pgpass file on the machine from which you intend to run your backups (see the PostgreSQL documentation). I recommend the .pgpass option; it's quick and easy. Adamanteus 0.5 is available from both Bitbucket and PyPI.
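
For reference, the .pgpass route looks roughly like the sketch below. The file format is standard PostgreSQL; the host, database, user, and file names are just placeholders, and the subprocess call only illustrates the kind of non-interactive pg_dump invocation this enables, not Adamanteus's actual code:

    # ~/.pgpass (must be chmod 600), one line per connection:
    #   hostname:port:database:username:password
    # e.g.: localhost:5432:mydb:backup_user:s3cret
    #
    # With that file in place, pg_dump can run without prompting for a
    # password. A rough sketch of such an invocation:
    import subprocess

    subprocess.check_call([
        "pg_dump",
        "--host", "localhost",
        "--username", "backup_user",
        "--file", "mydb.sql",
        "mydb",                      # database name
    ])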

Adamanteus: versioned backups of databases using Python and Mercurial
Mar 26, 2010

Backing up databases is one of those things that I've always felt could be done in a better way. Traditionally I've done it with a simple shell script that used mysqldump or pg_dump to dump my database to an SQL file named with a timestamp, compressed it, and maybe scp'd it off to some remote server for redundancy. This approach works just fine, except that I recently took a look at the backup directory for a project using that setup only to discover nearly 5,000 backup files taking up 11 GB (and that's with bzip2 compressing them!). Obviously not an optimal situation, especially considering that very little changes from backup to backup, and it's quite possible that nothing changes at all between some of them. It simply makes no sense to store an entire dump of your database every single time!

Fortunately, this is a very familiar situation, and we've already got advanced tools to handle it: version control systems. So I decided to write a little program to replace my shell script, one that would use a modern version control system to provide a much more reasonable solution. What I came up with is Adamanteus, a command-line program written in Python that lets you back up your database into a Mercurial repository. It currently supports MongoDB and MySQL, and I plan on adding PostgreSQL support this weekend.
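
The core idea is just "dump, then commit." Here's a rough sketch of that loop; it's my illustration rather than Adamanteus's actual code, the paths, database name, and commit message are placeholders, and it assumes MySQL credentials are already available via something like ~/.my.cnf:

    # Sketch of the dump-then-commit idea; not Adamanteus's actual code.
    import subprocess
    from datetime import datetime

    REPO = "/var/backups/mydb"   # an existing Mercurial repository

    # Dump the database as plain SQL inside the repository's working directory.
    with open(REPO + "/mydb.sql", "wb") as dump_file:
        subprocess.check_call(["mysqldump", "mydb"], stdout=dump_file)

    # Let Mercurial track the file and store only what changed since last time.
    subprocess.check_call(["hg", "addremove"], cwd=REPO)
    subprocess.check_call(
        ["hg", "commit", "-m", "Backup " + datetime.now().isoformat()],
        cwd=REPO,
    )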

Using Mercurial immediately solves basically all the problems with my original approach. It stores diffs rather than full files, meaning you aren't wasting space on duplicated information. It also handles compression transparently, keeping file sizes down even for the diffs. Plus, because Mercurial is a distributed version control system, it's very easy to provide redundancy by pushing and pulling to and from remote repositories. (Pushing/pulling to/from remote repositories isn't currently implemented, but that's also in my plans for this weekend.)

The project is far from complete, but I think it's far enough along to release as 0.1. Plans for the 1.0 release include:

  • PostgreSQL support
  • The ability to restore your database from a particular revision in the repository
  • Automated cloning/pushing/pulling of the repository
  • Integration with Django as a management command

I think this is actually pretty close, and it probably won't take too long for me to implement all of those, so hopefully I'll be able to push out a 1.0 release very soon. The one other issue holding up 1.0 is that I'd like to wait for MongoDB 1.5, which will bring mongoexport's functionality in line with that of mongodump, which is what I'm currently using. The issue is that mongodump produces binary data files, which don't play as nicely with version control and lose you the advantage of storing only diffs. mongoexport exports JSON or CSV files, which would let Adamanteus take full advantage of Mercurial, but until 1.5 there's no easy way to use mongoexport to dump all the collections in a database, which is the default behavior of mongodump.
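
For what it's worth, the workaround until then would be to enumerate the collections yourself and export them one at a time. Here's a rough sketch of that using pymongo; the database name and output directory are placeholders, and this isn't part of Adamanteus itself:

    # Rough sketch of a per-collection export: enumerate the collections
    # with pymongo and call mongoexport once for each. Database name and
    # output directory are placeholders.
    import os
    import subprocess

    from pymongo import MongoClient

    DB_NAME = "mydb"
    os.makedirs("backups", exist_ok=True)

    client = MongoClient("localhost", 27017)
    for collection in client[DB_NAME].list_collection_names():
        subprocess.check_call([
            "mongoexport",
            "--db", DB_NAME,
            "--collection", collection,
            "--out", "backups/" + collection + ".json",
        ])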

Anyway, I'm definitely looking forward to some feedback on this project, as I suspect it could be quite useful to many people. Contributions are always welcome as well!
