Learn Rails the Hard Way

Apologies to Zed Shaw, of course.

Learn Rails the Hard Way, in 10 steps

  1. Travel back in time to 1998, and build a MySQL-backed CMS in PHP, because that was pretty much all there was.
  2. Take a bunch of CS courses, fixate on the concept of a REPL. Think to yourself, a video game is just a really fast REPL with fancy graphics.
  3. Learn Vim because it's cool, and because M-x tetris is distasteful.
  4. Learn Struts for a job and feel the agony of drowning in a sea of meaningless XML files just to get a web page to display.
  5. Work for Ask.com and know the true horror that using XML for dispatch can bring: cower before the homebrewed Struts-alike called Tako. Tako means octopus in Japanese; the name was obviously a reference to the tentacles of Cthulhu.
  6. Pick up the life-preserver that is Pragmatic Rails and learn Rails 1.2 because it's still 2007. You're time traveling, remember? Build a toy app that scratches an itch, learn to be smug about how fast you built it. Be slightly baffled that some constructs that work in Rails don't work in straight Ruby.
  7. Take a job that uses Python, deploy Pylons because of how pleasantly Rails-y it is. Read the source and find that it's basically a straight port of Rails. Continue to be smug.
  8. On a whim, take netcat and observe simple HTTP traffic. Idly wonder what it would be like to have a web browser feed directly into a REPL... oh. (See the sketch just after this list.)
  9. Teach a class on Rails, even though you haven't touched it in five years.
  10. Advocate learning Sinatra, Flask, and Noir instead.
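
If step 8 sounds cryptic, here's the "oh" moment spelled out: a web server is basically a REPL that happens to speak HTTP. Below is a minimal, hypothetical sketch (plain Python sockets, nothing from the class or the books above) of a loop that reads whatever the browser sends and prints something back.

    import socket

    # A web server is a very patient REPL: read, eval(-ish), print, loop.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8000))
    server.listen(1)

    while True:
        conn, _ = server.accept()
        request = conn.recv(4096).decode("utf-8", "replace")          # the "read"
        first_line = (request.splitlines() or ["<empty request>"])[0]
        body = ("You asked for: " + first_line).encode("utf-8")       # the "eval" and "print"
        headers = (
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "Content-Length: " + str(len(body)) + "\r\n\r\n"
        ).encode("utf-8")
        conn.sendall(headers + body)                                  # the browser is the terminal
        conn.close()

Point a browser (or netcat) at 127.0.0.1:8000 and watch the loop go around. Everything Rails gives you sits on top of a loop shaped more or less like that one.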

Learning to use Rails is almost as much about learning what you're not using instead. It's hard to appreciate the utility of Rails without a broader survey of what your options are. Indeed, if you lack the context to know why Rails was revolutionary in the first place, many of its features may seem esoteric and inconsistent. I've met many developers for whom Rails was a black box, and many of their problems were solved by trial and error. This is a bad sign.

"A fad is created when adoption exceeds education." Someone much smarter said that, and I can't remember who it was for the life of me. Rails was/is/will become a fad (time traveling!), and I worry that the quality of Rails developers (on average, obviously there are plenty of good ones) is declining as we move forward. I see people writing code that reminds me of the cut-and-paste web developers of the late 90s, and that's not good.

Many apps these days are little more than gems hastily glued together, with Rails and Devise automating everything else. I appreciate that there is a large body of knowledge involved in making that happen, and you might need to be a Rails expert to do that. I'm not so sure that means you're also a programming expert.

Don't get me wrong. Rails is wonderful. It has done amazing things for the current state of technology and the Rails team continues to innovate constantly. But if you're going to be a Rails developer, then really learn it: what it does for you and, more importantly, what it hides from you. Be a developer first, then a web developer, then a Rails developer. When you finally understand Rails, you'll also understand why you don't need it. And by then, you might not even want it anymore.

Announcing the Winner of the 2012 IOJSCC: @fat

A quick back story.

Once upon a time, @fat, a Twitter employee, wrote a JavaScript library that was a really good idea. Everyone thought so, and pretty soon everyone was using it. Then one day, someone looked through the software and noticed that there was nary a semicolon in sight. They looked high and low and found not one.

Now, this person looked about him and saw that every other piece of software (of note) was laden with semicolons and said to @fat, "Yo dude. What's up with that?" To which @fat replied, "The parser is gracious and lenient. It permits me to leave the semicolons out and so I do."

Immediately, the old gods of JavaScript descended to the earth, stroked their mighty beards and said, "@fat, seriously, that is super dumb. It's lenient in the event that you slip up and forget a semicolon, but it wasn't meant for you to omit them all the time. Also, your code looks crazy in some places just because you mislike semicolons. We regret even adding this feature in at all."

@fat, hurt by this rebuke, cried out, "I am merely what you made me! You permitted me to create without semicolons and now you bind my hands and call me a monster!" Looking at his hands with wild eyes he announced, "Then a monster I shall be! And with these hands, I will unmake you."

And so began the great JavaScript war of 2012.

This semicolon issue is much more fun written this way.

For myself, I think @fat is being ridiculous. Yes, JavaScript can be written as if newlines were significant (like Ruby or Python), but to accomplish this, he pulls out tricks that look like they were lifted from entries in the IOCCC.

Let me reiterate, just in case it wasn't clear: in the service of 'aesthetics', @fat writes code using tricks that resemble entries in a competition whose whole point is to demonstrate, through irony, why coding style matters.

And because I hate to explain anything without the use of analogy, it's as if Stephenie Meyer wrote the Twilight series using the Washington Post's worst-analogy contest as her style guide, and Shakespeare rose from his grave to tell her to cut it out.

My money's on zombie Shakespeare.

I Regret Everything: Episode 1 - Foreign Key Constraints

It's about time I really evaluated my stance on FK constraints on InnoDB tables in MySQL. Here was my position on the matter up till now: judicious use of foreign keys for some people can improve performance for certain queries and help maintain data integrity.

This is great, because as a programmer, you probably want those things. The thing you're writing probably reads far more often than it writes; that's just the way the world works these days. Additionally, who's going to say no to data with integrity? The only way this could be better is if it also gave the data honor and humility.

So let's take a really shallow look at how foreign key constraints work before we go about properly criticizing them. First, let's talk about locks. Skip past the following paragraphs about chickens and Will Smith if you already know how locks work.

Let's say you have a number that represents the total number of chickens you have wrangled into a children's ball pit. You and your buddy are tasked with maintaining this number at 25. You put chickens in, and when there are too many, you have to dispatch some. Before you start your task for the day, there are 23 chickens flapping about. You take the time to count them, and you go and get two more chickens to put in.

While you're off gathering said chickens, Will Smith, who you have convinced to partner with you for the day, also counts 23 and also decides to put two chickens in. He disappears to summon the birds, and you put yours in, not knowing he's doing the same. He returns some time later, throws his chickens in, and calls it a job well done. When you check your work two hours later, you have 27 chickens, and while you appreciate that you got to spend time with the Man in Black, you're beginning to question your line of work, and, honestly, who's paying you to do this?

It's a little like the Heisenberg uncertainty principle, but with chickens. Between observing a value and acting on it, there is a short but distinct window in which someone else can muck it all up for you. That's where locks come in. Imagine you each have a padlock for the ball pit. When there's a lock on the door, no one else may mess with the chickens until you have removed it. As long as you can't remove someone else's lock, everything's dandy.
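
If the poultry isn't doing it for you, here's the same lost update in code: a minimal sketch using plain Python threads (hypothetical, standing in for two database clients), first without the padlock and then with it.

    import threading, time

    chickens = 23                       # shared state: the ball pit
    pit_lock = threading.Lock()

    def add_two_racy():
        global chickens
        count = chickens                # observe 23...
        time.sleep(0.1)                 # ...wander off to fetch the birds...
        chickens = count + 2            # ...and clobber whatever happened meanwhile

    def add_two_locked():
        global chickens
        with pit_lock:                  # padlock on the pit: observe and act as one step
            count = chickens
            time.sleep(0.1)
            chickens = count + 2

    def run_both(worker):
        global chickens
        chickens = 23
        you, will_smith = threading.Thread(target=worker), threading.Thread(target=worker)
        you.start(); will_smith.start(); you.join(); will_smith.join()
        return chickens

    print(run_both(add_two_racy))       # 25: a chicken has quietly vanished
    print(run_both(add_two_locked))     # 27, as the ball pit intended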

So... computers.

So imagine your database table to be a thing filled with chickens.

Right, let's just forget that.

Here's the point. To maintain integrity between two tables, A and B, in a scenario where a row in B depends on a row in A, InnoDB takes locks in both tables. If you are updating a child row in B, it also takes a shared lock on the parent row in A, to make sure no one deletes the parent while you're not looking. We expect the lock on B; the lock on A makes sense once you pause to think about it, but it's not one we usually plan for.

Now we've reached the crux: all roads from here on lead to deadlock, a scenario where a process has table A locked and is waiting for a lock on table B to free up while another process has B locked and is waiting for the lock on A to go away. Both processes wait forever, and you end up crying.
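
Here's a minimal sketch of that shape of failure, again with plain Python locks standing in (hypothetically) for the locks taken on A and B: two workers grab them in opposite orders, and neither ever finishes.

    import threading, time

    lock_a = threading.Lock()   # stands in for the lock involving table A (the parent)
    lock_b = threading.Lock()   # stands in for the lock involving table B (the child)

    def process_one():
        with lock_a:                # I have A...
            time.sleep(0.1)         # ...long enough for the other guy to grab B...
            with lock_b:            # ...and now I wait on B. Forever.
                pass

    def process_two():
        with lock_b:                # I have B...
            time.sleep(0.1)
            with lock_a:            # ...and now I wait on A. Forever.
                pass

    t1 = threading.Thread(target=process_one, daemon=True)
    t2 = threading.Thread(target=process_two, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    print("still deadlocked:", t1.is_alive() and t2.is_alive())   # True

MySQL will at least notice the cycle and roll one transaction back, or give up after a lock-wait timeout; these two threads aren't so lucky.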

The real problem is that there are so many ways this can happen, and even as I write this post, I find more. A SELECT on the parent can lock the child; a SELECT on the child can lock the parent. If you hold an S lock because of a foreign key constraint, it can't automatically be upgraded to an X lock. To avoid the 'phantom problem', InnoDB uses next-key locking and gap locking.

Frankly, I don't understand that last paragraph, and the phantom problem sounds downright scary. I want to deal with data storage, not ghosts (although only just barely). This sounds like a fight where everyone's slinging locks instead of bullets, and I want none of that. I am not smart enough to deal with it.

Let's say for a second that you are. That you're very careful about your constraints, and how you lock, and the order in which you acquire your locks. You're diligent about deadlocks and it shows. Now, your entire codebase is migrated over to SQLAlchemy or ActiveRecord or any other fancy ORM. Where is your god-of-locking now? You have next to no control over locking in these environments, and the queries become complex enough that analyzing them may not be worth the effort.

Where does that leave us? Well, you have data. It has integrity. It might even have conviction, and after going through so many locks, some character. But it's awfully lonely, since the database frequently times out on lock-waits when more than one person is using it. All because you used a foreign key.

The ironic bit, of course, is that your app is already structured to avoid the real scenario that having foreign keys saves you from. Chances are, you will write your code so you never modify existing primary keys, and you won't accidentally delete records and orphan rows. Even if you do, orphaned rows may be acceptable; you can just clean them up later, no harm no foul.

So I amend my position here with finality, for all who come after me, now until eternity, world without end: judicious use of foreign keys for some people can improve performance for certain queries and help maintain data integrity, but you are not one of those people.

On Numbers: Part the First

In which we learn to count.

Okay peeps, let's talk about numbers. Specifically, integers and floats. As programmers, these are the kinds of numbers we're interested in. For the record, an integer (int) looks like this:

1

And a float looks like this:

1.0

In the real world, 1 and 1.0 are the same number, but a computer stores them as very different things, so let's take a peek at why. But before we can understand that, we need to learn how to count.
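
As a teaser for where this series is headed (a hypothetical peek, using Python's struct module to expose the raw bytes), the same "one" lands in memory as two completely different bit patterns depending on whether it's an int or a float:

    import struct

    # Pack the value one as a 32-bit signed integer and as a 32-bit IEEE 754 float,
    # then look at the raw bytes.
    as_int   = struct.pack("<i", 1)      # b'\x01\x00\x00\x00'
    as_float = struct.pack("<f", 1.0)    # b'\x00\x00\x80?'  (that's 0x3f800000)

    print(as_int.hex())      # 01000000
    print(as_float.hex())    # 0000803f

Why the float looks like that is a story for a later installment; first, the counting.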

Here are some numbers; you may be familiar with them.

1 2 3 4 5 6 7 8 9 10 11

For illustrative purposes, let's rewrite them.

0001 0002 0003 0004 0005 0006 0007 0008 0009 0010 0011

It's still the same; we've just prepended zeros to pad the numbers out to 4 digits. Now let's start a little thought experiment. Imagine, for a second, that we had never discovered the number 9. Just, we as a species, looked at our hands, immediately relegated thumbs to second-class citizens, and decided numbers should go from 1 to 8. Let's see what that would be like.

0001 0002 0003 0004 0005 0006 0007 0008 ...

If you're not bored yet, you may be a little confused as to what comes next. Remember, we've stricken the number 9 from existence. But never fear, we can still _represent_ the concept of 9 things (even though we don't have a numeral for it), in exactly the same way that we can represent 10 things even though we don't have a single numeral for 10. Et voila:

.... 0005 0006 0007 0008 0010 0011

"But you skipped 9! That's just 10. That doesn't work. Cats would have a mysterious extra life that we couldn't quantify, and three squared would be a non-existent number! Nena's seminal work about red balloons would be, at the very least, incredibly awkward." Well, yes and no. 

Think back to the simplest form of counting you know: tally marks. A little line for _one_ item, and then 4 lines and a strike through for 5. For simplicity, we'll represent a collection of 5 with the plus symbol. So, counting to six:

|   ||   |||   ||||   +   +|

And so 13 is ++|||, and 20 is ++++. Now, 25 is +++++, but that's getting kind of messy, so let's pretend 25 is represented by an X, and look, now we have a very simple version of Roman numerals.

Let's throw another monkey and the requisite wrench into the works, and say that 0 is represented by a -. And further, to keep things clean, we'll divide everything into neat columns. All the single tallies are grouped together, and the pluses go together, and the X, etc. So some numbers, say, 4, 12, 36, 44, 51:

4:     -    -    ||||
12:    -    ++   ||
36:    X    ++   |
44:    X    +++  ||||
51:    XX   -    |

We could keep going, but despite our best efforts, things are getting kind of hairy. What if we wanted to represent the number 125? We could do XXXXX, but if we continue in our pattern of simplification, the correct thing to do would be to create a new symbol to represent 125, maybe _. But we're running out of straight lines on our keyboard and this is becoming a mess. Like we said before, it also looks like Roman numerals, but those suck! Yeah! Down with Romans! Romani eunt domus!

Ahem. Back to the matter at hand. How can we improve our counting system? 

Let's do something tricky. In each column, we can only use 4 tallies before switching to the column on the left and leaving a - in the current one. We're going to cheat, and bring back our modern Arabic numerals, 0 through 4. Instead of drawing individual tallies, we'll count the number of marks we made, and just put our numeral down:

4:     0 0 4
12:    0 2 2
36:    1 2 1
44:    1 3 4
51:    2 0 1

KABLAMMO. That's the sound of your mind being blown. This illustrates the relationship between places, numerals, and actual numbers. Let's look at the number 5. In our Zebulon Numeral system, it looks like this:  -  +  -. Translated to Arabic numerals: 0 1 0 -> 010 -> 10. WHAT. THE. EFF.

So now counting, in a system built around groups of five, looks like this: 1 2 3 4 10 11 12... and so forth. So the numeral '10' does not automatically mean ten things. In a system where we have N numerals and a zero, '10' represents exactly N + 1 items; it is essentially a tally mark in the next column over. And so here, where our numerals are four tallies and a zero, 10 really means 'five'.

In our previous example, where we hate our thumbs, 10 means 'nine'.
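
You can check all of this without drawing a single tally. Here's a minimal sketch (the helper is hypothetical, not anything standard) that renders numbers in our four-numerals-and-a-zero system, i.e. base 5, and in the thumbless base 9:

    def to_base(n, base):
        """Render the non-negative integer n in the given base (bases up to 10)."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % base))
            n //= base
        return "".join(reversed(digits))

    print(to_base(5, 5))     # '10'  -- in base 5, "10" means five
    print(to_base(44, 5))    # '134' -- the X +++ |||| row from the table above
    print(to_base(9, 9))     # '10'  -- and in base 9, "10" means nine
    print(int("10", 5))      # 5     -- Python reads these back with int(s, base)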

FINE, you say, BUT WHAT IF YOU HAD 16 FINGERS? Well, just like we made up symbols to represent numerals bigger than ||||, we can make up more symbols. We could just do, ... 8 9 | + X. But instead of a contrived example, I'll just reveal that modern convention uses the letters A-F to represent ten through fifteen.

1 2 3 4 5 6 7 8 9 A B C D E F whatcomesnextquickyouknowitalreadytoolateI'mjustgonnatellyou 10

Look at that. I've just, in an incredibly long and convoluted manner, taught you hexadecimal. Or, in English, sixatennish. Maybe 'base 16' is better. And we've seen base 5 and base 9 counting systems.
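
Same trick, bigger base. Python already knows the A-through-F convention for base 16, so we don't even need the homemade converter:

    print(int("F", 16))              # 15 -- the last single numeral
    print(int("10", 16))             # 16 -- one more and we spill into the next column
    print(hex(255))                  # '0xff'
    print(format(3735928559, "X"))   # 'DEADBEEF', the classic party trick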

So. Let's take this all the way in the other direction. Let's say we're a little bit slow, and we only ever learned two numerals: '0' and '1'. But somehow we're still smart enough to count to numbers greater than one. What does that look like?

    0000
    0001
    0010
    0011
    0100

Recognize that? That there is binary, sonny. I remember back in ought six, well, we didn't have the numeral six back then, so it was ought one one ought...
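
And since Python is just as fluent in base 2, here's the same sanity check for binary:

    print(int("0100", 2))     # 4 -- the last line of the count above
    print(bin(6))             # '0b110' -- "ought one one ought", minus the leading ought
    print(format(6, "04b"))   # '0110' -- there's the old-timer's year, padded to 4 digits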

Tune in next time when we link binary to hexawhatsitall, then look at what that means for integers on computers.

Wha? A smokescreen? Must be ninjas about!

So let me summarize Sr. Jobs' latest 'open letter'.

  1. Flash on iPhone sucks.
  2. Flash on iPhone sucks.
  3. Flash on iPhone sucks.
  4. Flash on iPhone sucks.
  5. Flash on iPhone sucks.
  6. Therefore we should not let anyone make the iPhone a compile target for their chosen language.

The real issue that Adobe is worked up about is section 3.3.1. Here is a discussion. In essence, you may not write an iPhone app in a language/framework that isn't a combination of Cocoa and Obj-C. For the non-technical, it's (almost) as if Apple restricted you to speaking only English when calling someone on an iPhone. Almost.

Points one through five are awfully nitpicky reasons, and technically inaccurate to varying degrees. They're valid enough, though, if that is the position Apple wants to hold. My beef is with people reading along, nodding their heads, "This is well-reasoned" and then missing the curveball that he throws in point 6:

"Besides the fact that Flash is closed and proprietary, has major technical drawbacks, and doesn’t support touch based devices, there is an even more important reason we do not allow Flash on iPhones, iPods and iPads. We have discussed the downsides of using Flash to play video and interactive content from websites, but Adobe also wants developers to adopt Flash to create apps that run on our mobile devices."

One of these things is not like the other. "Adobe wants developers to adopt Flash to create apps that run on our mobile devices." Re-read it carefully. Adobe wants people to use Flash (a development environment) to build apps that run on iPhones. CS5 added the ability to write a program in ActionScript and compile it down to a native iPhone app. Short of running the app through a debugger, you would not know that the app was built in the Flash environment. It's not a matter of Flash apps being compiled to bytecode and run in a Flash runtime on the iPhone; it's a matter of apps that were merely built in Adobe's development environment, which is expressly prohibited by section 3.3.1 of the developer agreement.

This means Unity, Mono, Haxe, and Titanium, which all use the same approach to compile native apps, are forbidden as well. None of them seem especially concerned, though, because the general mood at the moment is that Apple is only interested in blocking Adobe.

There are valid reasons for Apple not to want Adobe on its platform. The other environments are relatively niche; by enabling iPhone targets in CS5, however, Adobe lets a veritable glut of Flash devs build multiplatform (including Android) apps. It is in Apple's interest to lock in developers and establish platform exclusives, the way Sony and Microsoft do with their gaming consoles. But to hide that under the guise of not wanting to compromise the performance or quality of their product is disingenuous. After all, we know that Apple delivers only the finest in fart apps.

The Sound of a Thousand Birds Taking Wing: a #chirp Postmortem

If you've been following my tweets, you would know that I've been at the Chirp conference for the past two days. You'd also know that I'm whiny and complainy. Some might even say saucy. Still, the troll-esque comments I've been tweeting the whole time belie the fact that Chirp was interesting and successful in its own way, and worthwhile besides.

The morning of the first day set a bad precedent. Given that the event was presented as a conference for developers, I was sorely disappointed to find that the first two sessions consisted largely of Twitter execs laughing at their own failings and attempting to excite the crowd about Twitter. If you are a developer, imagine the circumstances under which you would like to attend a Twitter conference. If you are having trouble, imagine being a C++ developer and spending $1000 to attend JavaOne. The vast majority of the attendees were already enthusiastic about Twitter, and this early-morning cheerleading session only served to waste time. Helen Lawrence of daredigital.com later mentioned as much in a conversation with Twitter's Alex Macgillivray, saying the 'rah rah twitter' sessions were a bit unnecessary.

Hopeful for some unique insight into the Twitter strategy, I floated in and out of the rest of the day's sessions, in between stuffing my face with cupcakes. I attended the @anywhere presentation and the two sessions on monetization (I was present for the panel on investing, but I'll admit I was actually just reading a book). Succinctly: it all fell flat. From a purely technical standpoint, the sessions were boring. @anywhere is a repackaging of existing tools that makes Twitter integration easier, but no more Twitter-like than what you could already build. The nytimes example of integration was banal at best.

In terms of strategy, it was almost worse. Consider the monetization approach: ads in search. If you can find the video, you can see Dick Costolo sweating and tugging uncomfortably at his collar as he attempts to explain how promoted tweets are somehow different from ads (in short, they're not). As of now, they only appear in searches on twitter.com. It's easy to see from various public sources that search is in fact a tiny fraction of their traffic. When Google announced AdWords, search was practically 100% of its traffic. The promotion of @anywhere does nothing to improve those numbers. Minutes earlier, they had admitted that 75% of their traffic comes through the API and said they want to increase that. Whatever their real plan is for monetization, promoted tweets can't be more than a distraction.

Disheartened, I skipped the Q&A session with @ev, where I missed some interesting tidbits on whether or not Twitter was going to eat other people's lunches by launching official in-house versions of various external services. I'm looking at you, twitpic. A stroke of fortune left me sitting at the same table as Alex Macgillivray for dinner that night. He expounded a little more openly on Twitter's approach to media sharing services. Specifically, they have no plans to choose one as canonical. If anything, they might integrate with all of them, and let the user decide which to share through. That said, he left me with a rather sinister comment: "At the end of the day, all these services do is add a link to the end of your tweet, and we're not going to prevent that." It was meant to sound reassuring, but it makes me wonder if they might not attempt to take a bite out of Posterous and Brizzly by building native 'rich' tweets with embedded media as part of the tweet. A year or two earlier, it would have been unthinkable, given the ties to SMS as a delivery method. The growing ubiquity of smartphones has decreased that reliance, making this possibility much more viable.

I skipped out of the evening's activities shortly after that, unwilling and unable to engage in a '24 hour hackday' forgoing sleep and showering. Unfortunately, I may have been one of the few people to decide that. Comments on day 2 to follow later today.