
Code Reuse and Technological Advancement

One of the (supposed) holy grails of software development is code reuse. This is not the kind of code reuse where you copy code from an old program into a new one because you have solved the same problem before. This is the kind of code reuse where you write code once, and every time you need that functionality thereafter, you use the code that has already been written, automatically, without duplicating it.

Object-oriented programming is a set of techniques for code reuse that has become incredibly popular. By building a class, a prototype method, or some other kind of categorical definition of a particular type of code segment, you essentially create a button you can press to generate a new interface to the same code, with different data attached to it, allowing effectively infinite reuse of the same code for many different circumstances.
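
As a minimal sketch of that "button" (the class and names here are invented purely for illustration, in Python), one definition can be instantiated any number of times, each instance attaching different data to the same code:

    class Invoice:
        """One definition; arbitrarily many instances reusing the same code."""

        def __init__(self, customer, amount):
            self.customer = customer
            self.amount = amount

        def total_with_tax(self, rate=0.08):
            # The same logic runs for every instance, whatever its data.
            return self.amount * (1 + rate)

    # Two "button presses": new interfaces to the same code, different data.
    a = Invoice("Alice", 100.00)
    b = Invoice("Bob", 250.00)
    print(a.total_with_tax(), b.total_with_tax())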

Libraries serve another kind of code reuse. Write the code for a library once, and forever after you can get that library's functionality by simply accessing it from a new program. This means that, even if the same object class (to draw on the previous example) is needed in many different programs, you only have to write it once. Of course, this depends on compatibility between the library on one hand and the environment and language constraints of the new program on the other. Portable libraries with APIs accessible from multiple languages can help ensure such reusability across widely divergent use cases, but achieving that portability requires a bit more work than most people put into their libraries.
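
For a small, concrete illustration of an API crossing a language boundary: on a typical Linux system (the library path below is an assumption about the platform), Python's ctypes module can call directly into the C math library, reusing the same compiled code that C programs link against:

    import ctypes

    # Load the C math library (this path assumes a typical Linux system).
    libm = ctypes.CDLL("libm.so.6")

    # Declare the C signature of cos(): double cos(double).
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    # The same compiled code C programs use, reused from Python.
    print(libm.cos(0.0))  # 1.0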

An operating system environment designed specifically to allow programs to interoperate can avoid the problems of library incompatibility: as long as a program runs on that platform, and is capable of interacting with the platform's interprocess communication model, code can be written once and used many times. An excellent example is Unix pipes, which allow the output of one program to be fed directly into the input of another, chaining them together with a deceptively simple OS facility that can be used to construct complex processions of operations producing sometimes astounding results.
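
The same chaining is available programmatically. As a sketch, Python's subprocess module can reproduce the pipeline ls | sort -r | head -n 3, feeding each program's output directly into the next one's input:

    import subprocess

    # Equivalent of the shell pipeline: ls | sort -r | head -n 3
    ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
    sort = subprocess.Popen(["sort", "-r"], stdin=ls.stdout,
                            stdout=subprocess.PIPE)
    ls.stdout.close()  # let ls receive SIGPIPE if sort exits early
    head = subprocess.Popen(["head", "-n", "3"], stdin=sort.stdout,
                            stdout=subprocess.PIPE)
    sort.stdout.close()
    print(head.communicate()[0].decode())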

Code reuse is, in fact, merely a particularly modern example of something that has existed since the first time some prehistoric biped picked up a stick, bone, rock, or other object in its environment and used the object to gain leverage in the attempt to affect its environment, multiplying its effectiveness in that task. Give Archimedes a long enough lever and a place to stand, and he can move the world.

Every single substantive advancement in technology -- substantive, in that it both increases the leverage we can apply to tasks of intentionally affecting the world around us and comes with a greater understanding of that world or of the effects of technologies as productivity multipliers -- ultimately serves as the foundation for further technological advancement (at least in theory). Occasionally, some edge case may stand in the way of using an advancement to apply leverage to the task of achieving the next advancement. When benighted religious zealots executed a forward thinker and burned all his books, for instance, an advancement might be lost entirely, doomed to be rediscovered from first principles at some future time, the same painstaking way it was discovered in the first place. Such cases are comparatively rare; usually, the best such obstacles can do is slow us down a little.

As the rate of advancement increases, however, each delay has a disproportionately greater effect. A delay of around twenty years thanks to an inconvenient patent a century ago could multiply the time between advances by a factor of two or three. The same delay at the turn of the current millennium could multiply the time between advances by a factor of twenty or thirty. The same delay ten years from now could multiply it by two or three hundred.
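
One way to see the arithmetic (the gap lengths below are hypothetical round numbers, chosen only to illustrate the trend): a fixed twenty-year delay gets added to whatever gap would otherwise separate one advance from the next, so the multiplier grows as the gap shrinks:

    DELAY = 20  # years lost to, say, an inconvenient patent

    # Hypothetical gaps between successive advances: a century ago,
    # around the turn of the millennium, and ten years from now.
    for gap in (10, 1, 0.1):
        factor = (gap + DELAY) / gap
        print(f"{gap:>4} years between advances -> "
              f"interval multiplied by {factor:.0f}x")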

It's time to take a step back and explain the changing effects of such obstacles over time.

A Singular Trend

That incredibly strong, almost irresistible tendency for technology to inspire and enable the development of more advanced technology contributes to the exponential growth of technological advancement over time. Ray Kurzweil [0], author of The Singularity Is Near -- technologist, visionary, futurist, and potential crackpot -- predicts the arrival of the technological singularity in or about 2045. This event, in vague, hand-wavy terms, is the point where technological advancement accelerates to such a rate that "it represents a rupture in the fabric of human history." In more precise terms, it is the point where all the rules of thumb we have adopted over the millennia for understanding social and technological change go flying out the window, because the development of disruptive technology will become so commonplace and frequent an occurrence that every time we blink, the world will have changed in ways we could not have imagined moments before.

Chances are good we'll all die. Chances are about equally good we'll all become immortal. Ultimately, the defining fact of the technological singularity is that all the rules are broken; the world as we know it will end, by metamorphosing into something previously incomprehensible to us. More to the point, the metamorphosis will be ongoing, and will likely continue to accelerate at much the same rate. It is, in short, the moment when Everything Changes.

Whatever you may think of the seemingly outlandish descriptions of the technological singularity that follow from Kurzweil's calculations about the date of the prophesied event, the underlying calculation is simple, clean, and inescapable. All he did was take thousands of years of technological advancement at an exponential rate, beginning with agriculture, carry the trend forward, and calculate the approximate date when the equivalent of today's price of the laptop I used to write this will buy artificial intelligence with computing power greater than the sum total of computing power represented by the brains of the entire human race [1]. I suppose I should have saved my money.
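
For the shape of that calculation, here is a back-of-the-envelope sketch; every constant in it is a round-number assumption of mine, not a figure taken from The Singularity Is Near:

    import math

    # All of these constants are illustrative assumptions.
    BRAIN_OPS = 1e16      # rough operations/second of one human brain
    POPULATION = 1e10     # generous round figure for the human race
    LAPTOP_OPS = 1e11     # rough operations/second of a circa-2010 laptop
    DOUBLING_YEARS = 0.7  # aggressive doubling time of price-performance,
                          # since Kurzweil argues the doubling time itself
                          # keeps shrinking

    doublings = math.log2(BRAIN_OPS * POPULATION / LAPTOP_OPS)
    print(f"{doublings:.0f} doublings, "
          f"roughly {doublings * DOUBLING_YEARS:.0f} years out")

With those particular guesses, the arithmetic lands about thirty-five years out from the time of writing, which is the neighborhood of Kurzweil's 2045.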

Given both the effectively unlimited potential benefit and the effectively unlimited potential danger immanent to the technological singularity (and, more to the point, imminent -- given an ETA approximately thirty-five years from now), the question of whether we should pursue or fight this onrushing event becomes incredibly important. Should we fear it as an existential threat, or pursue it as the coming of an anticipated Utopian future where we achieve immortality and godlike power? Regardless of what it represents, what it is right now is a shockingly real and relevant possibility, or even probability. Computing technology today advances more in an hour than it did in the entire ninety-year period following Babbage's design of the Analytical Engine. With that kind of headlong rush into the future, every second we waste erects new roadblocks to any attempt to alter or direct the course of the future yet to come. We had better make up our minds in a hurry. You have ten seconds to make a difference.

Are you done yet? Too bad. Too late.

Seriously, though . . .

We are faced with increasing tension between the conservative forces of modern sociopolitical authority and the progressive forces of pure, unfettered, headlong technological progress at breakneck speed. The efforts of humans to dictate the course of history by controlling the structure of the dominant social order often run afoul of technological progress. These conservative forces include the Catholic Church of the Middle Ages, Luddites and other labor movements of the industrial revolution, the Keynesian and Chicago schools of economics, copyright and patent lobbies, environmental custodianship lobbies, stem cell research prohibition lobbies, the FCC, Net Neutrality advocates, and, perhaps most visibly, the dominant presence of market-manipulating, governmentally chartered legal entities known as "corporations" that attempt to ensure their financial solvency by crushing any competition that gets too innovative (and, incidentally, oppose the Net Neutrality advocates as an equally conservative force for stagnation).

These corporations, like their mercantilist ancestors of centuries past, are only too happy to use the machinery of copyright and patent law to prohibit the development of new technologies that might compete against their own stodgy, outdated products. Just as steam engine patents stifled the continued development of engine technology for more than a decade, so too do software patents today have a chilling effect on the advancement of the software state of the art -- and software is the necessary twin to hardware in the advancement of computing power that brings the future rushing toward us at such an alarming rate.

If you are Bill Joy, former free-wheeling Berkeley hacker and producer of such productivity enhancers as vi and the Berkeley Software Distribution of Unix, you might be a neo-Luddite, fully cognizant of many of the implications of the accelerating advancement of technology and fearful of what it portends for the future in the form of the technological singularity. If you are Ray Kurzweil, you may be a reverent Singularitarian disciple eagerly awaiting the Rapture of the Nerds [2], almost radiant with faith in the ultimate beneficence of the Second Coming of AI [3]. If you are more like me, you're a cynic: the glass is half empty, but it's probably better that way. Sure, billions might die, but maybe the rest of us will become immortal demigods.

Yes, the potential for the ultimate end of the human race with the coming of the technological singularity is substantial. No, we should not avoid it; we should, in fact, embrace it and pursue it with all due haste -- not at any cost, but at the cost of the "safety" we find in stagnation. It is better to try to grab the brass ring, to make the leap toward apotheosis and risk everything in the attempt, to gamble on the chance that we will avoid going out with a bang, than to accept the alternative: the absolute certainty that we will instead go out with a whimper a few years further down the line. Technology, at this point, is our only chance of salvation for the human species. Any attempt to interfere with that by banning or centrally controlling technology will, at best, slow things down, throw them off balance, and make the ride rougher. At worst, it may actually grind the bullet train to a halt, leaving it to rust and crumble away to nothing.

Let us then build upon what came before, stand on the shoulders of giants to reach for the stars. Let us claw at the vault of heaven until we find purchase, tear a rent in the sky, and climb through to see what lies beyond. Let us brave the final frontier, our own human limitations, and explore the unknown regions beyond.

Let us look for ways to brush aside the obstacles set in the way of advancement by the petty machinations of small minds.

Code Reuse, Redux

If you want to hasten the advancement of technology, one thing should be clear: the reusability of technology is of paramount importance. The single most dire threat to the reusability of technology today is government. Its restrictions on the use, development, and distribution of technology affect almost every aspect of modern life, retarding such things as education about fertility modification technologies in schools, research using embryonic stem cells scraped from a blob of largely undifferentiated protein soup that happens to carry human-compatible DNA, home construction of cellphones from kits, deployment of mesh networking repeaters in urban areas, and -- perhaps most harmfully -- the sharing of software and source code that could enhance a developer's productivity in the pursuit of ways to do things that are more advanced by orders of magnitude [4].

The religion of Intellectual Property has become so ingrained in the public consciousness that even the basic premises of singularity-oriented fiction, such as the roleplaying game Sufficiently Advanced, are to a significant degree reliant on blind faith in the mythic creativity-inspiring powers of copyright and patent law [5]. Those who dare not only to question such dogma, but to demonstrate their principles by violating it on a daily basis -- the founders of The Pirate Bay and WikiLeaks, for instance -- are figuratively burned at the stake for their heresy [6]. Security researchers are pilloried by corporate behemoths for the crime of sharing their research with other researchers and with the very users of those corporations' products who are most susceptible to the ill effects of the vulnerabilities they have discovered.

People are called "terrorists" for daring to point out that Microsoft often covers up the existence of critical vulnerabilities in its software rather than fixing them to protect its customers.

. . . and strict enforcement of End User License Agreements and mutually incompatible copyleft "free software" licenses force developers to reinvent the wheel daily, wasting uncounted millions of hours inventing what already exists when they could instead advance the state of the art. Every time someone distributes code under a license that is more restrictive than it needs to be, one that does not approximate the conditions of a copyright-free world as closely as it reasonably can, someone else will end up having to reinvent that wheel -- assuming the code in question is innovative and useful, and thus worth the effort of reusing at all.

Programming paradigms come and go; open source licenses come and go; programming languages come and go [7]. A key benefit toward which all these things strive, and which encourages the ever-more rapid advancement of technology by building on existing technology, is code reuse. All these technical solutions to the problem of code reuse pale beside the vibrant truth of the single most effective step that could be taken toward easier, more effective code reuse:

Tear down the walls of the Church of Intellectual Property. Let my source code go.

Notes

0: Ray Kurzweil is probably the most famous singularitarian in the world today; in the '60s, he appeared on national TV to play, on a piano, a song composed by a computer he had built. Science fiction author Vernor Vinge coined the usage of the term "singularity" in the early '80s to refer to the future-historical critical acceleration of technological advancement. British mathematician I. J. Good used the phrase "intelligence explosion" to refer to the same concept -- also in the '60s -- and may be the first person on record to theorize about what would later come to be known as the technological singularity, based on the very real factors in human action that compel us to work toward that event.

1: The reason a single computer exceeding the sum total of the computing power in the brains of the entire human race matters so much is simply that this is the point at which a computer, without interference or help from a human, possesses the resources (if not, at first, the direction, though it seems likely someone will have come up with an answer to that problem by then) to advance the state of the technological art faster than the entire human race can. The one type of advancement likely to receive the most attention from such a computer will, of course, be the improvement of computing power itself, thus magnifying the essentially incomprehensible (by today's standards) effect of computers on the accelerating rate of technological advancement. Ponder that for a moment, and your life may never be the same. Are you a singularitarian yet?

2: Ken MacLeod referred to the technological singularity, through the agency of a character in his novel The Cassini Division, as "the Rapture for nerds". Many singularitarians have adopted variations on that phrase as a tongue-in-cheek bit of self-deprecating humor, wearing it proudly like a badge and -- in the case of the most self-aware among them -- as a reminder to remain humble.

3: Maybe the First Coming of AI was the original 1958 specification of the (then theoretical) programming language LISP, which went on to become the go-to language for generations of artificial intelligence researchers. Maybe it was the design of the Analytical Engine. Maybe it was something else. Suggestions are welcome. If you send me a good one via my contact page, maybe the idea will end up in a novel some day, and I'll credit you by name -- or maybe the approach of the singularity will disrupt my life enough that the novel never gets written.

4: This, by the way, is one of the reasons I prefer open source Unix-like systems over closed source MS Windows systems as my development, writing, and even entertainment computing platforms. I like the fact that open source Unix-like systems, compared to more restrictive environments such as those offered by Microsoft, allow -- nay, encourage -- me to build tools that make the task of building more tools easier, faster, and more successful.

5: . . . though at least Sufficiently Advanced has been released to the world under the terms of a Creative Commons license. Sure, it's a noncommercial license, which immediately burdens it with a terrible, reuse-discouraging set of restrictions, but at least it's not as bad as "all rights reserved".

6: No, the original Napster does not count. The Pirate Bay and WikiLeaks took principled stands against copyright and other forms of censorship. Napster, on one hand, provided a network that facilitated copyright violation, and on the other hand rabidly defended its own copyright claims, proving itself hypocritical and perhaps a touch sociopathic rather than principled and perhaps a little quixotic.

7: That is, excepting perhaps LISP, which just keeps coming. I'll leave the crudely humorous analogy to the reader's imagination.