Come on, people, it's not even December yet!
I require a list of countries where Christmas is generally not celebrated, sorted by suitability and desirability as a holiday destination where I can hide out until the madness is over.
Please hurry, I can't hold out much longer.
If you're a Python coder, and you haven't been living under a rock for the past few years, then you've probably heard of a thing called Python 3000, or Python 3.0. Python 3.0 is presented as a "new version" of Python, but with a twist: it breaks backwards compatibility with the Python 2.x series completely, making a whole slew of incompatible changes with no direct migration path. Thus, in some sense, this is actually a brand new language that just looks a lot like Python 2.x.
Is this a problem? Superficially, it doesn't "feel" like, say, trying to migrate from Java to C#; after all, it's just Python with a few cleanups and new features, right? Unfortunately, any non-trivial codebase is going to be riddled with code that is affected by the migration, and so the effort involved is still quite substantial. The problem runs deeper than that, however; code doesn't exist in a vacuum, it exists in a community of other code. All of your Python code has to run in the same runtime, so if you port your code to Python 3.x, you need all of the libraries / frameworks that you depend on to be available for Python 3.x. Of course, if you port your library to Python 3.x, people still using Python 2.x can't use any newer versions of your library anymore, so you either leave those users completely stranded, or you have to maintain two versions of your library simultaneously; that's really not much fun.
The Python developers have a solution to this problem: a tool called 2to3. 2to3 attempts to automatically convert Python 2.x code to Python 3.x code. Obviously, it can't do the right thing in every case, so the idea is that you modify your Python 2.x code so that it remains valid and correct Python 2.x code, but is also in a form that 2to3 can automatically convert to valid and correct Python 3.x code. Thus, you can simply maintain your Python 2.x codebase, generating the Python 3.x version, until one day you decide you no longer need Python 2.x code; at this point, you would run 2to3 over your codebase one last time and commit it, thus dropping Python 2.x forever.
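To make that workflow concrete, here's a hypothetical snippet (the module name and contents are invented for illustration) showing the kind of mechanical rewrites 2to3 performs; the Python 2 original appears in the comment, and the code below is what the tool's fixers would produce:

```python
# Hypothetical Python 2 original (example.py):
#
#     scores = {"alice": 3, "bob": 5}
#     print "scores:"
#     for name, score in scores.iteritems():
#         print name, score
#
# After running `2to3 -w example.py`, the fixers produce valid Python 3:

scores = {"alice": 3, "bob": 5}

print("scores:")
for name, score in scores.items():  # iteritems() -> items()
    print(name, score)              # print statement -> print() function
```

The idea is that you'd adjust the 2.x source until this translation round-trips cleanly; for instance, writing scores.items() on the 2.x side as well (valid, if slightly less efficient, under Python 2) keeps the code in a form 2to3 can handle without surprises.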
Sounds great, right? Unfortunately, 2to3 is still in a fairly immature state, with lots of bugs and issues. In addition, there are many situations where 2to3 simply can't do the right thing. For example, in code that handles strings correctly, any use of the str type would simply be converted to use of the bytes type, and unicode would become str. However, a lot of code incorrectly uses the str type for strings of text, either because it was written incorrectly, or because it uses APIs that were; in these cases, use of the str type might need to be converted to the Python 3.x str type instead, and 2to3 is never going to be able to figure this out on its own.
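To see why this requires human judgment, consider a minimal Python 3 sketch (the values here are invented for illustration): the two roles that Python 2's str played, raw bytes and text, are now separate, incompatible types:

```python
# In Python 3, Python 2's single str type has split in two:
raw = b"caf\xc3\xa9"        # bytes: binary data (what a mechanical
                            # str -> bytes conversion would yield)
text = raw.decode("utf-8")  # str: Unicode text (what text-handling
                            # code actually needs)

assert isinstance(raw, bytes)
assert isinstance(text, str)
assert text == "café"

# Mixing the two is now an error, so every Python 2 use of str must be
# classified as "bytes" or "text"; that is a question of intent, not
# syntax, which is why a tool like 2to3 can't decide it automatically.
try:
    raw + text
except TypeError:
    pass  # expected: can't concatenate bytes and str in Python 3
```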
In the end, it's not yet clear that 2to3 will be a viable migration strategy, which opens the door to the possibility of a schism in the Python ecosystem, with some people clinging to Python 2.x while others work in Python 3.x, isolated from their 2.x counterparts. Fortunately, the community has a lot of time to think about this: the 2.x series will probably continue until at least 2.7, if not beyond that. I'm personally still on Python 2.4 (my production servers run Debian stable), so I've got at least another 3 versions to go, which is quite a lot of time. If a viable migration strategy hasn't emerged by then, I'm hoping that I'll either be able to switch from CPython to PyPy, or I'll already have made the jump to greener pastures, and thus not have to worry about Python at all.
Mandela moment? Nope, sorry to disappoint you. The 2008 US Elections may well be the most significant elections yet… but the fact remains that it just doesn't matter who warms the President's chair in the White House. Bush's 8 years in office have certainly not been America's finest hour; but would things really have been different if the Democrats had won? Well, in fact, they might have been; but the problem here is that the framing is all wrong. The President is just the tail of the dog, and the dog does the wagging, not the tail. The problems currently facing the USA come from the ground up, not the top down.
The "Mandela moment" in South Africa was not important because of Nelson Mandela. This does not diminish Mandela's contribution to the country's renaissance in any way, but the fact remains that Mandela is largely symbolic of the turning of an era, a widespread socio-political change that, once again, came from the ground up. Mandela was merely the acrobat at the top of the pyramid.
The Obama campaign may have focused on the "real issues" far more than McCain's platform; unfortunately, in many cases, there was just lip service and handwaving as to how these issues are to be addressed, but really, that's just how the political game is played. Either way, it just doesn't matter in the end; regardless of how badly Obama wants things to change, he just doesn't have that kind of power. He's going wherever the base of the pyramid takes him, and that's all there is to it. The irony is that he will most likely be remembered for how well or how badly the USA weathers the coming economic storm, but it's unlikely his efforts will have much influence on the situation one way or another.
Still, couldn't Obama be representative of the changing situation on the ground? In order for that to be true, his victory would have to stem somehow from that change, but the reality is that Obama won the election primarily through votes from those who would not have otherwise voted. There's no sudden change of heart here, just the relentless progression of time, as the newer generation replaces the older generation; a process that has been well underway for a long time now. The fact that he managed to rope in new voters is, perhaps, interesting in and of itself, but isn't likely to have any influence beyond the sphere of electioneering.
Let us assume, for argument's sake, that had Hitler not committed suicide during World War II, he would have been successfully prosecuted and sentenced to death or life imprisonment, and that we find such an outcome acceptable. Then, let us assume that we have access to time travel technology that would allow us to go back in time and exact the same sentence on Hitler before his rise to power; in other words, go back in time and assassinate or kidnap him. Would this be an acceptable course of action?
For bonus points, compare the situation of having time travel technology at the point in time that the Nuremberg trials were conducted, with the situation where time travel is only developed many years later.
On two occasions I have been asked, – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
– Charles Babbage, FRS (1864)
Nearly 150 years later, I continue to be just as confused as the esteemed Mr Babbage.
At some point in my youth, I discovered one of the sad truths of history: mankind has discovered a great many things, and then lost that information again, repeating this tragedy over and over throughout the course of history. In some cases we can only speculate about what may have been lost (consider the construction of the pyramids, for example); in other cases we can confirm with relative certainty that the lost information or technology has been rediscovered, but nevertheless the historical periods during which mankind was without it are depressingly long. Of course, to my naïve thinking, computers clearly provided the solution to this problem; store all of the information digitally, and never discard it; hard drives are cheap, Moore's law blah blah etc.
Unfortunately, that's only the beginning of the problem, not the end. For starters, we still don't have the storage facilities to store all of the information produced, although we are in a much better position to deal with it; storing books is no problem, for example, but storing all of the audio/visual information produced is a much more difficult task. However, even as our storage capabilities increase, another problem looms; it's no use storing information if you can't retrieve it again later, and retrieval by location is a severely limited mechanism. What we really need is retrieval by description; in other words, searching and "filtering". In the early days of the internet, even searching this "massive" distributed network was still a manageable problem; and so various search engines sprang up, indexing all of the content on the internet — or at least, all of the content reachable via the hyperlink graph. As the Internet continued to grow, the difficulty of indexing it grew along with it, and today search engines like Google rely on truly staggering farms of storage and indexing servers to keep up with it all.
Here we meet the real problem; they *don't* keep up with it all, anymore. Despite the massive infrastructure being applied to the problem, Google (and the other search engines) still only manage to index a small fraction of the Internet today. They offset this by trying to make sure they index the "important" stuff, but the net result is that we're still losing information every day. Most pages will eventually fall off the internet if left undiscovered, but even if the information remains on the network, it's of no use to anybody if they can't actually retrieve it; and to retrieve it, you first need to find it.
There are parallels to this problem in other areas; for example, the information storage mechanisms of the human brain: our "memory". People often have the experience of struggling to retrieve a particular piece of information from their memory; it's still there, but they have to wait until something "jogs" their memory before they can finally retrieve it. Research into the functioning of the brain is only beginning to give us an inkling of how memory storage and retrieval actually works on a neurological level, but certainly the high-level process seems to have many of the same problems that our external information storage systems have. In passing, it's interesting to note some of the scientific theory associated with my intuitive feelings about some of these issues; without departing into complete mysticism, you may find it interesting to look at Holonomic brain theory, as well as the Holographic principle, and maybe even take a look at some of the crackpots trying to unify all of this. I'm intuitively expecting something scientifically sound to emerge in this area, but we'll have to wait and see how that turns out.
That digression aside, I'm not sure where this leaves us. The closest biological model we have seems to suffer from the same problems, so that doesn't help us; and I'm not sure where else to look. Is there some research in this area that I've missed? Some potential new technology that might solve the problem? Let me know if I'm missing out on something.
What have I been up to recently?
- Helping Jonathan design Procyon.
- Started hacking on an IFulltextIndexer implementation using SQLite FTS3.
- Fixed an annoying bug in Axiom that would prevent upgrades from happening in some cases.
- Nearly finished packaging Quod Libet 2.0 for Debian.
- Wrote a simple Haskell program to display a filesystem tree annotated with file sizes, mostly just to practice my Haskell coding skills.
- Did some work on an Athena / IE bug; hopefully it'll be fixed by the time you read this.
I'm a highly asynchronous individual. What the heck does that mean, you say? One of the ways this shows up is in my approach to mental activities (programming, writing, etc.): as soon as I reach a point where I take a mental "pause", I switch to another activity to fill that pause, pushing the other task onto the back burner for as little as a few seconds, or as long as several hours, depending on circumstances. To other people, it looks like I'm being continually distracted; but actually, it's by far the most effective way for me to marshal my concentration. When it comes to things like my day job, where deadlines and other such considerations are inescapable, there are limits to how long something can "pause", waiting for the return of my attention, but when it comes to other activities like reading that interesting URL someone mentioned to me, it may lie open in a browser tab for a year or more, especially if it's something really long. This isn't necessarily procrastination, and unlike some people, I don't just eventually give up saying "oh, I'll never get around to it now"; I really will get around to it, even if I only manage to do it in another year's time. The same thing happens with my feeds (I use Google Reader via Feedly); I lightly skim the surface from day to day, reading a handful of entries every day, and skipping some of the non-interesting stuff, but I only dip down into the "meat" every few weeks, going through all or most of the unread entries; as a result, I have a rather unwieldy number of unread items at most points in time. Again, this isn't a problem; the time-sensitive items are usually included in my daily reading, so the rest can wait until I get around to it… whenever.
Unfortunately, when it comes to writing about time-sensitive issues on my blog, this asynchronicity doesn't work so well; if I wait until next year to blog about an event that happened this weekend, I'll probably have forgotten everything I wanted to say, and nobody will care anymore anyway.
I've given up on writing about my brief holiday in Cape Town at the beginning of this month; I took a bunch of photos, and I'll probably upload them at some point, but otherwise, whatever.
In other news, this weekend's Geekdinner was great, although the afterparty (held in the parking lot outside Piaceri) was better. I remembered to take some pictures (to be uploaded as above), made an attempt at real-time coverage via FriendFeed, and I got a chance to expand on my snarky and petulant comments in some high-bandwidth conversation with Dom. In the end, I think we decided we were mostly on common ground; my point was that privacy is a socially defined convention, related to concepts such as intimacy (as Dom pointed out), and as such is an ever-changing standard. More specifically, the trend in recent times has been moving towards much greater levels of openness; some of my older relatives wouldn't even think of uploading their holiday photos to a site like Flickr without keeping them completely private, whereas thousands (millions?) of Flickr users have everything they've ever uploaded freely available for public consumption. The important thing at the end of the day is that the individual (the "user") should be the one to decide what is private and what is not, and the "defaults" shouldn't be such that you are pressured into changing your standards of privacy for the convenience of Google's advertising, or whatever — the latter is where Google gets demerits, and they probably need to learn a lesson or two from Facebook's handling of their users.
I was inspired by Dom's post to finally get around to retaking this test, with the following results:
Economic Left/Right: 5.62
Social Libertarian/Authoritarian: -6.36
My previous results (care of Spinach):
<+Spinach> mithrandi.political_compass is 7.25/-6.31 (2006/01/10), 5.88/-5.95 (2004/06/22)
Interesting that my views seem to have swung in a similar pattern to his, over time; perhaps it was the test that changed, not our views? I'm somewhat confused by the leftward swing in my Economic score, since I don't think that's an accurate representation of my actual viewpoint. As before, I was left with a fair amount of dissatisfaction with many of the questions, as I was not able to adequately represent my views on a particular subject through the limited options available.