Inch, Foot, Yard, Mile, Temp, and Date are all four letter words... but you may not agree with me why.
This picture has been making the rounds on Google+ for the past couple of weeks. I can't remember the first time I saw it, though I'm not sure I've seen anyone post it without commenting on how silly the US system is compared to the rest of the world. Typically, the argument mostly focuses on the system of weights and measures used in the US vs the rest of the world (which I will refer to as Imperial vs Metric systems) and less so on temperature (Fahrenheit vs Celsius) or date format, even though the picture calls all of them into question.
Many of my friends have jumped onboard the pro-metric bandwagon.
I'm going to disagree and make the case that the Imperial system is not "arbitrary" and while the Metric system may be "smooth", it is not always logical. I'm going to make similar arguments for temperature and date formats. But make sure you read all the way to the end.
The crux of my argument is something called "human centered design". It is a pretty big buzzword in various design communities, both online and off, and has the perspective that if you are designing things for humans to use - you should make them have a "human scale" and view things from a human perspective. Many of those same friends who are so pro-metric are also very good designers, so their attitude baffles me a little. Designing for a human scale can be tricky, of course... humans come in all sorts of shapes and sizes. But we generally have some broad parameters that we can work with.
The Imperial system of measurements, the Fahrenheit temperature scale, and the date system we use in the US (with some exceptions, that I'll talk about later) are all more in line with how humans work and think, and even have some (historical) logic to them. The Metric system and the Celsius scale are more straightforward, but also more cold and impersonal, with an arbitrary historical basis, and the date format is completely opposite how humans tend to think.
Let's take a look at each in more detail.
Distance, Weight, and Volume
The usual argument about the Imperial system starts by talking about length - making schoolchildren convert some number of feet into miles is a stupid thing to learn. On this, I would agree. I can't tell you how many feet are in a mile. I also couldn't tell you the last time I needed to do this conversion. For most people it doesn't really matter because we almost never use the two units the same way - when we need to measure a short length, such as our own height, we will use feet or inches, but when we need to measure a long distance, we will use miles. And the sizes chosen for each aren't arbitrary - feet are very much human-scaled since they're based on actual body part measurements. Breaking a foot down into 12 parts makes sense since it means you can easily divide something into a half, a third, or a quarter (we'll see this again when we look at time). A yard is based on a person's stride - still human centered. Historically, a mile is as "logical" as the metric system is - the "mil" part of mile means 1000, and in Roman times it indicated 1000 paces. You can argue that this historical basis is fine... but that it has no place in a modern system and that people will get used to whatever. That is probably true to some extent, but I'd point out that having a human-based system lets people work on things more intuitively, and I will be providing more examples of this for other measurements. For now, we can observe things such as how most people can walk a mile in about a third of an hour - imprecise, but very human focused.
On the other hand, the metric system of length is very precise. With everything in multiples of 10, it is pretty clear how to convert up and down. The problem is that things on a human scale are rarely multiples of 10, and while it makes it easy to divide things into 10 parts, or 2 parts... it is much harder to divide things into 4 parts... never mind 3! And the length of the meter, which is the fundamental basis for every other calculation of distance, mass, and volume in the metric system, is almost entirely arbitrary at this point. Historically, it was based on calculations about the polar circumference of the earth along a particular meridian... but those calculations resulted in an "official meter" based on a specific platinum bar that was stored in the French archives. While different bars were made over time, and accorded international standard recognition, they all hark back to the original (somewhat arbitrary) standard metric bar.
I won't drone on about weight and volume too much. Yes, converting between tablespoons and cups may be more common in the kitchen than elsewhere, and knowing those unit conversions is still the bane of my culinary existence. But this also serves as a good check sometimes to know that I'm not doing something wrong. I know that teaspoons generally measure things that take small measures, such as seasonings, while cups are more often reserved for larger measures - if I find something that I usually associate with teaspoons that needs to be measured in tablespoons, I'll double-check my instructions. I don't have that same kind of "backup" when measuring things by milliliters or grams.
Speaking of the strange relationship between weight (or mass, but I'm really not going to go into the difference right now) and volume, can we look at how arbitrary the metric system seems to be here? I would expect, for example, that a liter would be one cubic meter. Doesn't that make sense? But no... a liter used to be defined as the volume of 1 kilogram of water (under standard conditions), and is now defined as 1000 cubic centimeters. A gram is actually defined in terms of the kilogram, which is defined in terms of another platinum object. This shift of scale in the original definition certainly seems somewhat confusing.
Contrast that with the definition of the ounce - both fluid and "dry". They are surprisingly close to each other - a fluid ounce of water weighs about an ounce, but imprecise enough that one has to wonder how they diverged over time. Scaling up to pints and pounds is similarly easy - both are 16 units (almost as good as 12, but not quite) so, to quote Alton Brown, "a pint's a pound the whole world round" when you're dealing with water-like liquids.
While you have those 16 units in mind... let's take a look at that.
Dividing things up
I've made reference to dividing things up by 12 and 16 being easier than 10, but that is somewhat inaccurate. Dividing things into 12 and 16 is more "human scaled" than 10, however. But how can that be? We have 10 fingers, and our number system is base-10... what could be more human than that?
Well, quite simply, when we divide most things, we're not using our fingers to divide them. If I asked you to cut a pizza in half, how would you do it? Measure the circumference and carefully draw a line such that half the circumference is on each side? No - you'd take a pizza cutter and run it roughly down the middle. Want four slices? Run it down the middle again sideways. Eight slices? Two more times. Sixteen? If your pizza is big enough, run it just four more times.
What if you wanted six slices? Judging thirds isn't too difficult. Same if you wanted twelve - slice into quarters and then three more slices for each half. Three itself is tricky though, since you have to stop mid-way or start in the center.
But what about ten slices? You'd have to make the one slice across... and then divide it into five more equal slices, which is somewhat trickier. You don't actually divide the circumference into ten parts.
The division of measurements into 12 and 16 units was likely chosen because those represented easy ways for a person to accurately divide something up. Metric advocates would, of course, point out that it is easy to divide into quarters, just measure .25, or eighths, just measure .125. If pressed to measure thirds, however, they would point out that .33 is probably "good enough" and use significant figures to justify the imprecision... but those same significant figures, in my mind, justify why using 12 or 16 to divide things makes the precision as good or better.
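The arithmetic behind this is easy to check. As a quick sketch (in Python, purely for illustration), we can count which everyday splits - halves, thirds, quarters, sixths, and eighths - come out to a whole number of subunits for bases 10, 12, and 16:

```python
# For each base, list the everyday splits (halves, thirds, quarters,
# sixths, eighths) that divide it into a whole number of subunits.
splits = [2, 3, 4, 6, 8]

for base in (10, 12, 16):
    exact = [n for n in splits if base % n == 0]
    print(f"base {base} splits evenly into: {exact}")
```

Only halving comes out even in base 10, while 12 and 16 accommodate most of the divisions people actually make by hand.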
But speaking of dividing things up, what about how we divide up the thermometer?
The chief criticism in the infographic about Fahrenheit temperatures seems to be that the scale is arbitrary in regards to water freezing. And I suppose that's correct... except that it isn't really true. It had its roots in trying to create some easy division points between a calculated 0 point (a defined brine mixture), a calculated point at 32 degrees (the freezing point of water), and a human scale point at 96 (3x32) degrees which was "body temperature". The multiples of 32 were intentional, again, because of the same ability to easily divide things that we observed above. It was only later, when the scale was redefined in terms of the freezing and boiling points of water, that this neat structure was broken.
However, Fahrenheit still has the benefit that it tends to identify a "comfortable" range of temperatures that humans tend to live in. While there are certainly extremes, 0 tends to be around the low end of normal human habitation, while 100 tends to be around the upper range, making it very easy to say that "0 is cold, 100 is hot, and 50 is pleasantly warm".
Celsius is based on two fixed points, with exactly 100 units between them. This is very useful if you're interested in the boiling and freezing points of water... at exactly 1 atmosphere... but isn't quite so useful otherwise. The 0 point doesn't really foretell whether the precipitation coming out of the sky will be rain, sleet, ice, or snow, since that is based on a number of other complicated factors. In fact, it is so unhelpful in fields that rely on temperature as it relates to weather, such as aviation, that many either continue to use Fahrenheit or require additional significant figures for Celsius readings.
There is that word "significant" again. What is significant when it comes to dates?
For most of the others, I am a bit ambivalent about Imperial vs Metric. Sure, I argue that Imperial is more human centered and easier to divide the way humans divide things, but Metric isn't an unreasonable system. But when it comes to dates, I need to be perfectly clear: both of the systems outlined are wrong, but the system in use by the US is less wrong than the other one. Why? It comes down to the most significant digits.
You can think of the "most significant" parts of a number as the part that causes the most impact if you change it. For example, look at the number 4321. If you change the "1" to a "5", increasing that digit by 4, you are increasing the entire number by 4. But if you change the "4" to an "8", also increasing that digit by 4, you are increasing the entire number by 4000. So in our example, the 4 is the most significant digit, and the 1 is the least significant digit. In our typical numbering system, the most significant digits are written on the left, with the least on the right.
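The example above can be verified with a couple of subtractions (shown here in Python, purely as illustration):

```python
# Changing a digit by the same amount has very different effects
# depending on its position in the number.
n = 4321

print(4325 - n)  # 1 -> 5 in the last place: the whole number grows by 4
print(8321 - n)  # 4 -> 8 in the first place: the whole number grows by 4000
```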
But if you look at the d-m-y scheme of writing a date, you will find that the number on the left, the day of the month, is the least significant - a change of 1 in that number changes only one day. The number on the right, the year, is the most significant - a change of 1 in that number changes about 365 days. If you write it with all the digits (dd-mm-yyyy) it gets even worse: read as one long number, the digits that matter most (the year's) sit in the positions that matter least, and the month is buried in the middle.
So why is the US system "less wrong"? If we look at just the month and day, we have them in the proper significant order. So mm/dd reads just like a number does if we remove the / and always write the days as two digits. But what about the year? It is understandable and somewhat human centric to omit the year - after all, it changes pretty rarely and it is only for a small portion of the year that we really care about it at all. If you think this doesn't make sense, ask yourself how long you kept writing the century as part of the year - or have you switched back to using two-digit years? Since the century changes so infrequently, people take it for granted and leave it out when routinely writing the date, and the same may be true of writing the entire year! But since it may be sometimes included and sometimes omitted, it does make a little sense (but only a little) that it be appended to the right.
What is the best format? Why yyyy-mm-dd of course, making sure you write the years with four digits and the month and day with two digits. This puts the most significant digits to the left and, if you omit the hyphens, can almost be treated as an eight digit number... with some numbers that will never be seen. In fact, this format is used by most computer programmers because it preserves a normal ordering of the dates without doing anything complicated.
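This sorting property is easy to demonstrate. In a quick Python sketch (the dates here are made up for illustration), plain string sorting of yyyy-mm-dd strings is already chronological, while the mm/dd/yyyy format is not:

```python
# yyyy-mm-dd: most significant field first, so lexicographic order
# is also chronological order.
iso_dates = ["2001-12-31", "1999-01-05", "2001-01-31"]
print(sorted(iso_dates))  # ['1999-01-05', '2001-01-31', '2001-12-31']

# mm/dd/yyyy: the year comes last, so string order breaks down.
us_dates = ["12/31/1999", "01/31/2001"]
print(sorted(us_dates))   # ['01/31/2001', '12/31/1999'] - wrong order
```

No date parsing is needed at all for the first list, which is exactly why programmers favor the format.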
This final aspect, that some fields naturally gravitate to a system, is really part of the takeaway for this whole argument.
The Bottom Line
If you have read this far, you may be thinking that I advocate the Imperial system as the better system. But I don't.
The Imperial system is better for humans to use for human tasks.
Other systems are better for other tasks. Metric is much better for most scientific, and many engineering, tasks because it provides the consistency and ease of scaling that these fields demand. The yyyy-mm-dd date format is better for technology because it is easy to sort and requires little conversion to human-centered formats.
But one thing about humans is that we can adapt to our environment. There are many things that we no longer do at human scale. While a mile made sense when we talked about walking, we are doing less walking, so a different scale may work better for our automobiles. People seem to be fine with liter bottles of soda, although we still use gallons of milk. We live in areas of "extreme" heat and cold, or in areas of amazing temperate zones. As Randall Munroe described it in this xkcd comic, "the key to converting to metric is establishing new reference points." It is not learning to convert between one system and another - it is in thinking in the new system.
And there is a strong case to be made that the rest of the world is using metric - and the US should do the same. It is silly and wasteful that we maintain two systems, although we'll have to do so for a while after we end the practice. There is currently a petition to the White House requesting them to start the move to Metric as a full standard.
It isn't as if we aren't already on that road, however. When I was a child, I learned about Metric in science class. But that doesn't address the problem - Metric is perfect in science, we already discussed that... but measurements themselves aren't "scientific"... we use them in our everyday lives. If anything, this is something that should be learned as we learn regular life skills. We need to make it part of our everyday thinking if it will become part of our everyday lives.
But I'll be honest - I'll miss the bit of human centric scaling that the Imperial system provided.