The “Gig” Economy?

This article talks about the rise of people working part-time, doing various ‘gigs’:

Perhaps her intended audience is not interested in “dull” things like economics, but even someone like me with just a smattering of reading in economics can’t help but think of Coase and his “The Nature of the Firm” when reading it.

The question is: why do firms exist? Why isn’t the economy composed of a huge network of independent contractors? If markets and prices and such work so well, why do these big, monolithic companies, which internally are not ‘free markets’, exist?

His answer is, greatly simplified, “transaction costs”. From the article:

Every time the boss turns around asking for a key member of staff to join today’s frantically convened cost-cutting strategy meeting the reply comes back, “It’s not Sam’s day to come in and he’s the one working on it. Julia can come, though.” “Julia? What she got to do with it?” “Yeah, well, we’ll have to bring her up to speed.”

In other words, while there are savings from not having Sam present every day, there are also costs: having to look for and interview Sam in the first place, bargain over a price for his services, and then bring him up to speed. You have to do those things with regular employees too, but far less often with people who stick around for years.

So if one wanted to look, in a less anecdotal and slightly more scientific way, at whether the US, or other countries, are turning into “gig economies”, one could do worse than look at the transaction costs associated with utilizing contractors as opposed to permanent employees. If those costs have fallen (it may be easier to find people now, thanks to the internet, for example), perhaps it is more efficient to have more contractors and fewer full-time employees, and the equilibrium will tilt in that direction. Of course, another explanation might simply be that the economy is bad, and companies don’t have the budget to take on more people, so they get by with what they can in the short term.
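To make the equilibrium argument concrete, here is a toy model with entirely hypothetical numbers (the rates, hours, and transaction costs are my own invented figures, not data): staffing a fixed amount of work costs the wages for the hours plus a one-off transaction cost per engagement (searching, interviewing, bargaining, bringing someone up to speed). When that per-engagement cost falls, the balance can tip from one permanent hire toward several contractor gigs.

```python
# Toy model (all numbers hypothetical): total cost of covering a fixed
# amount of work, where each engagement incurs a one-off transaction cost.

def staffing_cost(hours_needed, hourly_rate, engagements, transaction_cost):
    """Wages for the hours plus a fixed transaction cost per engagement."""
    return hours_needed * hourly_rate + engagements * transaction_cost

HOURS = 1000
EMPLOYEE_RATE, CONTRACTOR_RATE = 50, 40  # assume contractors bill a bit less

# One employee hired once, vs. five separate contractor gigs.
employee = staffing_cost(HOURS, EMPLOYEE_RATE, engagements=1, transaction_cost=5000)
contractors_costly = staffing_cost(HOURS, CONTRACTOR_RATE, engagements=5, transaction_cost=5000)
contractors_cheap = staffing_cost(HOURS, CONTRACTOR_RATE, engagements=5, transaction_cost=1000)

print(employee)            # 55000
print(contractors_costly)  # 65000 - high search/bargaining costs favor the firm
print(contractors_cheap)   # 45000 - cheaper search (the internet?) tilts toward gigs
```

Nothing in the model is empirical; the point is only that the direction of the tilt depends on the per-engagement transaction cost, which is the variable one would want to measure.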

In terms of the social costs and benefits to a “gig economy”… well, that discussion is best left for other sites, as it’s a big, long, complex one with lots of politics and economics and eventually boils down to everyone’s own view of how the society they live in ought to look, which is very much outside the scope of this journal.

Startups and the role of capital and investments

One of the most exciting things about the computer industry these days is the ease with which one can get started. Decent computers can be had for well south of $1000, hosting is cheap, services like Amazon EC2 make it ever easier to scale rapidly should the need arise, and the only other things you need are an internet connection and a place to sit. This is leading some, like 37 Signals, to question the need for investment entirely, and others, like YCombinator, to successfully make very small investments (thousands of dollars, rather than millions).

Sometimes, however, I wonder – is it just a passing moment in time, a window of opportunity, or is it a long-term trend? Historically, setting up something like a factory required a great deal of money, putting it beyond the reach of anyone unable to obtain financing. Even in this day and age, there are plenty of endeavors that require large amounts of capital, and a lot of time, before seeing returns: that’s how things work in my wife’s field, biotech. Some fields have even become more expensive with time. High-end computer games are very expensive propositions these days, compared to the low-budget titles typical of, say, the Commodore 64 era, although it’s also true that the market has grown a lot, and that there is still space for smaller-budget operations.

How does all this look historically? Have there been industries in the past where it was so easy to get started? Anything that was able to scale? By which I mean: it might have been relatively easy to start some kind of small business, but it would most likely always stay small, whereas things like Craigslist or 37Signals have the means to grow a great deal without adding lots of people. Will things change in the future so that one or a few programmers can’t compete with a big team? Or perhaps things will go in the other direction, and more industries will become like computing is today, making it possible to run a biotech startup in your home office?

No next big language?

Ola Bini, one of the JRuby hackers, and a very bright guy, posits that there “won’t be a next big language”:

There might be some that are more popular than others, but the way development will happen will be much more divided into using different languages in the same project, where the different languages are suited for different things. This is the whole Polyglot idea.

I’m dubious, and wonder what he would consider to be the underlying sociological and economic factors driving this change. Programming languages are, in the end, about people in all their weirdness, so to understand where languages are going to go, you have to consider those human factors, as I’ve attempted to do here.

One trend that points to a slow proliferation of languages is of course the lock-in cited in my article. Today’s big languages (Java, and on the web, PHP) won’t just go away from one day to the next, just as C, COBOL and Fortran have not disappeared with the advent of Java. That process will continue, making it likely that new languages will carve out new territory for themselves rather than exclusively cannibalizing existing installations from older languages. This naturally leads (slowly) to more languages, even if the next generation has a Next Big Language.

And why shouldn’t there be one? Ola talks about a situation where various languages run and interact on top of a runtime (JVM). Isn’t that similar to what we’ve had with C, though? Perl, Tcl, Python, etc… all run on top of C. Sure, the JVM is a step up from that in some ways, most notably GC, being a bit more consistently cross-platform, and having a wider array of libraries, but in the end, it still comes down to the network effects of being able to read and write a common language, whatever it runs on. Obviously, the network externalities of programming languages are not so strong that they hit a tipping point after which one language crushes all the others, but they are strong enough to consolidate leadership in one or a few languages. Programmatically, Jython, JRuby, (and Hecl?:-) may even find it easier to interact on the same platform, but the humans writing the code will still push for consistency and the minimum set of common tools in order to aid the sharing and review of code.

Another way to look at it might be from an organizational point of view. Today’s biggest fish in the pond, Google, only allows four languages for their production systems. What would a “no big language future” organization look like? I can’t believe they’d welcome a big hodgepodge of things.

In conclusion, as computers are ever more widely used, it’s certain that more languages will be utilized. However, it’s also likely that from time to time a few languages, with one in the lead, will emerge as the leaders.

Charlie Munger’s criticism of economics

I found this transcript of a speech by Charlie Munger thanks to Greg Mankiw’s blog:

Academic Economics: Strengths and Faults After Considering Interdisciplinary Needs (pdf)

Despite the very dry sounding title, it’s an astute criticism of economics that I greatly enjoyed. My own interest in economics stems from attempting to understand how things like open source software and programming languages fit into the world at large, and what forces govern their rise and fall. Economics is generally a pretty good way of thinking about problems like that – one of the best there is. But it’s certainly a system that is far from perfect, and Munger points out some of its defects in an effective way, without going overboard and trashing the whole discipline, as some do. His words give voice to somewhat vague doubts in my own mind, and back them up with the experience and successful career of Mr. Munger. Like all good criticism, it’s also constructive and suggests improvements rather than simply tearing down.

I don’t agree with his complaints about free trade and China, but they’re not completely without foundation, and that’s probably a discussion best left to someone else’s online journal. Suffice it to say that I think he’s performing a bit of sleight of hand by turning hypothetical numbers where everyone is objectively better off into a relative ranking, where there can only be one “winner”.

Overall, though, it’s a good read, especially if you have read a few things about the subject of economics.

One reason to think Rails is “all that”

The economics of programming languages point to Rails being significantly better than what went before it.

I got to thinking about this when reading a comment on a site I like to read, which said:

Rails in itself is, to me, not that impressive. It does a lot of things right, but it does probably just as many wrong. Not the least of which is scaling.

It seems that these sorts of after-the-fact, “I know better” comments are a dime a dozen in the world of programming discussions. It’s easy to come along after something’s been built and puff yourself up by pointing to defects in existing systems, showing that, by comparison, you’re a clever fellow.

That’s not my point, though – what I wish to explain is that yes, Rails really was that much better than what was around before it came onto the scene:

“Switching costs” between languages are high. Less so for really sharp programmers, but for the masses that use one or two languages, learning a new language, tools, deployment, etc… is a big step to take, with potentially high risks. Even most A-list programmers I know use a few languages at a time – it’s simply easier if you’re not tripping over your own feet by switching to a different system every day. “Flow” is easier to attain when you’re ensconced in the thinking of one language. For companies, this effect is magnified, and switching to something new is not done lightly.

Since companies are beginning to explore Rails, successfully, I might add, you have to conclude that the big step into the unknown was worth it for some reason. Especially considering that a number of other languages rushed to copy various nice aspects of Rails, lessening the need for users of those systems to consider taking the leap.

Of course, that’s not to say it’s a perfect system, beyond reproach, or free of negative aspects, but in the spirit of honesty, and credit where credit is due, Rails really did move things a step forward, and the willingness of people to incur high switching costs to obtain its benefits is strong evidence of that.