Some difficult decisions?

I am, by nature, a tinkerer.  I love to build things to see how they will work or because I have an idea that I think is fun.  I am not so much of a natural business guy.  I go build something "because it's there".  Or rather, it's there in my mind, and I want to make it.

I earn money, however, via consulting work.  There are problems with this line of business though, namely that it doesn't scale very well unless, besides being good at a skill, you've also got the skills to build up a consulting business, which is very much about adding and managing people, something I'm not all that passionate about and would rather avoid.  So what happens is that you go through slumps with less than full capacity, and then things pick up again, and before you know it, you hit your upper limit and can't take on any new work, putting a ceiling over your potential earnings.  Sure, you can raise prices a bit, but that'll only get you so far.  For a while now, I've been considering this problem, and I think some sort of product would be an ideal way out.  Stories like Balsamiq's are inspirational, and along the lines of what I think might be nice.  Don't get me wrong, I like my clients and like to help people solve problems a lot, but it's a stressful job in that you're never very far from having no income.

I'm not much of an entrepreneur, though, in the sense of someone with a nose for making money.  I tend to think about building something cool first, and not really worrying about making money with it.  Predictably, some of the sites I have created are popular, but don't make any money.  Sure, they have some ads, but that doesn't really make much unless you get really huge page views or have some sort of content that attracts users who will click on very lucrative advertisements.  I've started to think that perhaps I should constrain myself in new initiatives to creating things that have a direct source of revenue: you want to use it, you pay.  In my daydream world, I'd probably just hack on open source all day long, but there's no money in that for someone like me who is not the author of some huge, famous piece of software, nor a particularly great hacker in any case (I've met some, like Andrew Tridgell, and it's a really humbling experience).

So that brings up the question of what to do with the aforementioned sites.  One option would be to sell them off via a site like flippa.com; another would be simply to let them sit there and stew, but they do take up some resources on my server, require some maintenance now and then, and they're a bit of a distraction too (I start wondering about things to add or how to fiddle with them).  Maybe there's a way to make money from some of them, but I'm too blind to see it.

The ones I'm thinking about are, in no particular order:

  • http://langpop.com – this one gets decent traffic, but programmers are about the best people in the world at ignoring advertisements.
  • http://leenooks.com – the Linux Incompatibility List attempts to point out hardware that people should avoid if they intend to run Linux on their computers.  Happily, there is less need for this site than when I conceived of it some 10+ years ago, but I think it's still a helpful resource.
  • http://linuxsi.com – (in Italian) this one is a place that highlights stores and consultants who are "Linux friendly".  Want to buy a computer with Linux?  There's probably a store near you.  I have had fun with this site, but, once again, no revenue.
  • http://squeezedbooks.com – "open source" business book summaries and discussion.  There are plenty of competitors in this space who make their money by churning out summaries and charging for access to them.  I had hoped to build a community interested in reviewing and discussing books out in the open.  There is a lot of fluff in business books, mostly because you can't sell a 10-page book that describes your idea, but need to pad it out with lots of examples and repetition and so on to make an actual book out of it; that doesn't mean the idea at the heart of the book in question isn't an interesting or new one.

Erlang: Log to Files

Since I had a heck of a time finding out how to do something that ought to be really easy, I thought I'd post what I found.  If you want Erlang to write errors to a file instead of stdout, add this to the command line:

-kernel error_logger '{file, "/tmp/erl.log"}'

Maybe there's a better solution, in which case feel free to correct me.

Better Software 2010

Florence, Italy, is an extremely popular tourist destination, and for a good reason: it's absolutely full of beautiful architecture and famous art.  It apparently gets more than a million tourists a year.  So it felt kind of odd for me, as a foreigner, to hole up in a conference talking about computers, software and business within a stone's throw of this.

It was well worth it though.  "Better Software" was a very interesting conference, and extremely professional.  One anecdote: since there were so many people signed up, they did the lunch and coffee breaks in another facility nearby, about a block away.  However, the weather wasn't very nice, so the organizers rounded up a bunch of umbrellas, and put them near the door with a sign saying "take one if you need it", so that people could stay dry while going back and forth.  Of course, having grown up in Oregon, I don't trust the weather much, and carry an umbrella, but it was a really nice gesture and attention to detail.

I expected it to be good, but it ended up being a really great event.

Some memorable bits:

  • As always, it's good to catch up with old friends, and meet new people.
  • Giacomo "Peldi" Guilizzoni's talk about Balsamiq was really good: he's an entertaining speaker, and the success of his company is a great story.
  • Andrea Santagata gave an insightful talk about startups in Italy, and managed to cover a number of points that I've seen from people like Paul Graham and other "startup gurus".
  • I had some interesting discussions with people about open source, business and economics, a topic I gave a presentation on.
  • It was quite motivating to meet so many people working on cool things.  Italy absolutely does not have a talent problem.
  • The dinner was great: Bistecca alla Fiorentina for me, accompanied by red wine.

I suppose that, given the huge success of the event this year, they may look for a larger facility for future events, but I hope it doesn't end up in some soulless place in the suburbs; even if I didn't do any sightseeing, I liked being in the city center.

While the organizers would be more than capable of putting on a more international event, I enjoyed the focus on Italy, and the cool things people are doing here, and I feel it also contributed some to the positive atmosphere.  It's one thing to have some guy come in from abroad and tell you something, another to see someone like Peldi talk about how he did what he did.  And what with all the problems Italy has right now, some "positive energy" was more than welcome.  Like I said, Italy has a lot of really talented people; the problems lie elsewhere.

A big thanks to the organizers and everyone who came, and I hope to see you there next year.

Mochiweb proto-deb and hacking

I had some time over the weekend to play a bit with Mochiweb.  It's a very nice little bit of work, but kind of strange for an open source project.  I think both things come from the fact that it was built to do some very specific things in a business, and once those goals were accomplished, that suited the owner just fine.  There is no nice/fancy project home page, no links to documentation, and, in general, none of the stuff you're supposed to do for a successful open source project.  The code, however, is good, fast, and pretty easy to deal with.

Anyway, I wanted to use it for a thing or two I'm working on, so I created some .deb packaging for it.  It's been ages since I've done that (indeed, I officially retired from Debian), so it's probably missing something, but I thought I'd throw it out there for people to have a look.

The git repository is here: http://github.com/davidw/mochiweb – my changes live in the 'deb' branch.

The big thing I've tried to do is make it so that if you have several mochiweb projects on the same machine, they'll all be started/stopped by the init scripts, if you so choose.  What you do is add a file with the path to the application to the /etc/mochiweb/apps directory, and when you do /etc/init.d/mochiweb start, it will start all of those.  At stop time, it will run the stop.sh script, which shuts things down.  It runs things as the user that owns the start.sh script, unless that user is root, in which case it simply refuses to run.

The thing that's missing right now is a way to get the thing to write logs somewhere, and I'm not quite sure how to do that, so here's hoping someone will write in with a suggestion.

The .deb file I generated ought to be attached to this post, although there are no guarantees that I'll keep it up to date.

Erlang vs node.js

I've written about Erlang in the past, and my suspicion that, sooner or later, other languages/systems will come along and "eat its lunch".  Scala is one such potential contender.  Another that has been gaining some visibility lately is node.js, a simple framework for creating networked applications on top of Google's V8 Javascript engine.

I should define what Erlang's lunch is a little bit better before we go on.  Erlang does several things really, really well; better than many existing, mainstream systems:

  1. Concurrency – the Actor model that it uses is much easier and more pleasant than dealing with threads.
  2. Distributed systems – Erlang makes this fairly easy and pleasant as well.
  3. Fault tolerant systems – using its distributed powers, Erlang is good for writing things like telephone switches that can't spend any significant time down.

Of these, I think the big one is concurrency.  Distributed systems are nice, but not a critical point for most people.  The same goes for fault tolerance: those who need it really need it, but the rest of us are willing to make some compromises.  Our web applications are important, but generally not something where people's lives hang in the balance.

How does Erlang do "concurrency"?  The developer creates Erlang "processes" that interact with one another by passing messages.  These are not real, OS-level processes, though, and this is critical to how Erlang operates.  Since these processes must all coexist within one real, system-level process, it is absolutely essential that no operation they perform hangs the entire system!  The Erlang runtime system is built around this concept.  Any Erlang process can do things like read and write to the disk or network, or have a "while (true) { …. }" loop (it doesn't actually look quite like that in Erlang, but that's the idea), and it won't wedge the system.  This knowledge is also critical when you want to interface Erlang to the outside world: if your C library contains a routine that might block for a long time, you can't just call it from Erlang, as it won't be a well-behaved part of Erlang's world (there are ways around this, of course, but they make life a bit more complicated for the developer).  All this is done with Erlang's scheduler: each Erlang process gets a number of operations it can run before some other process gets to run, so even our while loop will only run for a bit before the system moves on to something else.  IO is rigorously done with non-blocking calls internally in order to keep that from becoming an issue.

No other system that I know of has such a focus on being non-blocking, and node.js is no exception: a while(true) loop is perfectly capable of wedging the system.  Node.js works by passing functions (closures, in many cases) around so that work can be performed as need be.  However, the functions that run do block the system while they execute, and thus must be written so as not to run for too long.  Also, and this is important, node.js does its best to make IO non-blocking by default, so that you don't have to worry about IO calls.

Node.js isn't up to the level Erlang is at, because it requires more manual intervention and thinking about how to do things, but it's probably "good enough" for many tasks.  How often do you really write code with so many calculations that it slows things down?  Not often in my case – the real world problem I most often encounter is IO, and node.js does its best to make that non-blocking, so that it can be handled in the "background" or a bit at a time, without wedging the system.  And if you really needed to write a long-running calculation (say you want to stream the digits of PI or something), you can break up your calculation manually, which may not be quite as elegant as Erlang, but is "good enough" for many people.

"Good enough" concurrency, combined with a language that is at least an order of magnitude more popular than Erlang, and a fast runtime, combined with ease of use in general (it's way easier to get started with node.js than with most Erlang web stuff) make for a system that's likely to do fairly well in terms of diffusion and popularity, and is going to "eat some of Erlang's lunch".  Or perhaps, rather than actually taking users away from Erlang, it's likely to attract people that might have otherwise gone to Erlang.

Make Life Easier for European Startups: Simpler/Cheaper Limited Liability Companies

As I've mentioned here before, one of the differences between "Europe" and the US is just how cheap it is to start a company in the US.  Before we go any further, I'll take a moment to add the standard "yes, I know that Europe is not one country" disclaimer, and specify that I'm mostly talking about continental Europe.  Starting a company in the UK or Ireland isn't nearly as bad.

In Oregon, I spent $55 to create DedaSys LLC.  If I'd created it with one or more partners, I would have spent something on a lawyer in order to create a solid agreement, but that is of course a voluntary expenditure that we would pay for because it provided us with some value.  In Italy, it costs thousands of euros just to get started with a company, before you've made any money at all.  And, while there are gradual improvements, it's still a bureaucratic process that pretty much must involve at least an accountant and a notary public.  And you have to put up the rather arbitrary sum of 10,000 euros of capital in the company, supposedly there as a guarantee for the people you're doing business with.  But 10,000 is not nearly enough to cover some kinds of problems you might cause, and way more than, say, a web startup with a couple of guys and their laptops needs.  My friend Salvatore says it's possible to sort of "get around" sinking the full 10K into your company, but in any case, the principle of "caveat emptor" is a more sensible one.  At most, make a transparency requirement so that people dealing with companies can tell exactly how much in reserves they have.

During a recent bout of cold/flu, compliments of our daughter's nursery school, when I had some extra time on my hands, I decided to do something about this, however much it may be pissing into the wind.  I set up a site:

http://www.srlfacile.org (warning: Italian only)

I also set up a Google Group and a petition for people to sign, in an attempt to make a little bit of noise about the problem here in Italy.

While it's likely that the bureaucratic mechanisms are more smoothly oiled in other European countries, I have my doubts as to whether the actual amount of paperwork can compete with the very minimal page or two required by many US states.  And in any case, the costs are still quite high, and while we all have different ideas about the role of government and the ideal level of taxation, I think we can agree that it's sensible to levy taxes only after a company has begun to make money!

So – how about it?  Anyone want to create sister initiatives in other countries in Europe where things aren't as simple and easy as they should be?  Anyone care to tell of how this problem has been fixed elsewhere?  I've heard tell that in Germany, there is now a simpler/cheaper way to create limited liability companies.

US Exports: “The Cloud”?

An Economist special report in last week's print edition talks about how the US will need to focus more on savings and exports:

A special report on America's economy: Time to rebalance

I've been thinking about that for a while too, especially after the dollar's recent weakness, although it has been strengthening some, lately, apparently due to the state of Greece's finances…

I think that the computing industry is, in general, well poised to take advantage of that.  For instance, what could be easier to export than computing power or "Software as a Service"?  All it takes is a few minutes for someone in Europe to sign up to a US-based service with a credit card.

For instance, compare Linode's prices and good service with most of their European competitors (gandi.net for instance, who are good people, and you have to love that they put "no bullshit" right on their front page).  Not that the European providers don't offer good service, but it's very difficult for them to compete on price with the dollar being significantly cheaper.  With the dollar where it is right now, gandi is almost, but not quite, competitive with Linode.  If you don't include taxes.  If the dollar weakens again, though, things could easily tilt far in Linode's favor.

Besides a weak dollar, I think it will be important for companies in a position to do so in the US to focus on "the rest of the world".  The US is a big, populous country where it's very easy to forget about far-off lands.  Compare my home town of Eugene, Oregon to where I live in Padova.  Google Maps says that it takes 7+ hours to drive to Vancouver, Canada (which, to tell the truth, isn't all that foreign in that they speak English with an accent much closer to mine than say, Alabama or Maine).  Going south, Google says it's 15+ hours just to San Diego, although I think that's optimistic myself, given traffic in California.  From Padova, I can be in France in 5 hours, according to Google, 3 hours to Switzerland, 4 to Innsbruck, in Austria, less than 3 hours to the capital of Slovenia, Ljubljana, and around 3 hours to Croatia, too.  And if you wanted to throw in another country, the Republic of San Marino is also less than 3 hours away, according to Google's driving time estimates.   You could probably live your entire life in a place like Eugene and never really deal much with foreigners, whereas here, nearby borders are both a historic and an ever-present fact.

The outcome of this is that, to some degree, people in the US have traditionally focused their businesses "inwards" until they got to a certain size.  Which is, of course, a natural thing to do when you have such a big, homogeneous market to deal with before you even start thinking about foreign languages, different laws, exchange rates and all the hassle those things entail.

However, if exchange rates hold steady or favor the US further, and internal spending remains weaker, it appears as if it may be sensible for companies to invest some time and energy to attract clients in "the rest of the world".

"Cloud" (anyone got a better term? this one's awfully vague, but I want to encompass both "computing power" like Linode or Amazon's EC2, as well as "software as a service") companies likely will have a much easier time of things: for many services, it's easy to just keep running things in the US for a while, and worry about having physical or legal infrastructure abroad later.  Your service might not be quite as snappy as it may be with a local server, but it'll do, if it performs a useful function.  Compare that with a more traditional business where you might have to do something like open a factory abroad, or at the very least figure out the details of how to ship physical products abroad and sell them, and do so in a way that you're somewhat insured against the large array of things that could go wrong between sending your products on their merry way, and someone buying them in Oslo, Lisbon or Prague.

Since this barrier to entry is lower, it makes more sense to climb over it earlier on.  As an example, Linode recently did a deal to provide VPS services from a London data center, to make their service more attractive to European customers. 

However, they still don't appear to have marketing materials translated into various languages, and presumably they don't have support staff capable of speaking languages like Chinese, German or Russian either (well, at least not in an official capacity).  This isn't to pick on them; they may have considered those things and found them too much of an expense/distraction/hassle for the time being – they certainly know their business better than I do – and decided that they are simply content to make do with English.  Other businesses, however, may decide that a local touch is important to attracting clients.

What do you need to look at to make your service more attractive to people in other countries?  In no particular order:

  • Internationalization and localization.  Most computer people can "get by" in English, but perhaps their boss doing the purchasing doesn't.  If research shows that you are in a market where people value being able to obtain information in their own language, or interact with a site or service in it, make an effort to equip your code with the ability to handle languages other than English, and then pay to have your content translated.  Good, professional translations are not easy: for instance, when I translate to English from Italian (you always translate from the foreign language to your native language – anyone who doesn't isn't doing quality work) I read the Italian text, digest it, and then spit out an English version.  This doesn't mean just filling in English words for Italian, but looking at sentence length and structure, as well as translating idioms and cultural references into something that makes sense.  Basically, you read and understand the concepts and then rewrite the text, rather than simply taking each sentence and translating it.  Also, knowledge of the domain in question is important, so that you don't translate "mouse" to "topo", but leave it as "mouse" as is proper in Italian.
  • Part of internationalization is considering problems like time zones, currency, and names, which can vary a great deal from culture to culture.
  • Going a step further, you might consider hiring, or outsourcing, staff that is fluent in other languages to provide first-level support.  Reading English is one thing for many people; they can take the time to work out what one or two unfamiliar words mean.  However, if you have a problem with your server over the weekend, and you don't feel comfortable writing or calling someone to deal with a problem in English, you might consider purchasing a local service even if it's more expensive, because you can deal with people in your own language if the need should arise.  These people might either be local or remote, depending on what their role is.  For instance, Silicon Valley is 9 hours behind Central European Time, so when it's 9 AM here, and the business day is just getting started, everyone but the late-night coders in California is headed for bed, which means that it would be difficult to provide timely support unless you have someone working the late shift.  It may be easier to hire someone in Poland to support your Polish users than finding a Polish speaker in Silicon Valley who is willing to work from midnight to 9 AM.
  • Legal issues are not something I can give much advice on, but things like the privacy of people's data certainly bear considering.  If you don't actually have offices abroad, though, it's less likely that anything untoward will happen to you if users understand that they're using the service in question according to the laws and regulations of the jurisdiction your business resides in.  Once again though: I'm not a lawyer.
  • Even something as basic as your name needs to be carefully thought through.  "Sega" for instance, has a potentially rude meaning in Italian.  These guys are visible from a major road near Treviso: http://www.fartneongroup.com/ – doubtless the company was founded locally and subsequently grew to the point where they then learned of their unfortunate name in English (admittedly though, it does make them potentially more memorable than their competitors).

There's certainly no lack of work there, but on the other hand, it's possible to do almost all of it from wherever you happen to be located, rather than spending lots of money and time flying around to remote corners of the globe, as is still common practice in many industries.

Where Tcl and Tk Went Wrong

I’ve been pondering this subject for a while, and I think I’m finally ready to write about it.
Tcl was, for many years, my go-to language of choice. It’s still near and dear to me in many ways, and I even worked on a portion of a book about it ( https://journal.dedasys.com/2009/09/15/tcl-and-the-tk-toolkit-2nd-edition ).

However, examining what “went wrong” is quite interesting, if one attempts, as much as possible, a dispassionate, analytical approach that aims to gain knowledge, rather than assign blame or paper over real defects with a rose-colored vision of things. It has made me consider, and learn, about a variety of aspects of the software industry, such as economics and marketing, that I had not previously been interested in. Indeed, my thesis is that Tcl and Tk’s problems primarily stem from economic and marketing (human) factors, rather than any serious defects with the technology itself.

Before we go further, I want to say that Tcl is not “dying”. It is still a very widely used language, with a lot of code in production, and, importantly, a healthy, diverse, and highly talented core team that is dedicated to maintaining and improving the code. That said, since its “heyday” in the late '90s, it has not … “thrived”, I guess we can say. I would also like to state that “hindsight is 20-20” – it's easy to criticize after the fact, and not nearly so easy to do the right thing in the right moment. This was one reason why I was reluctant to write this article. Let me repeat that I am writing it not out of malice or frustration (I went through a “frustrated” phase, but that's in the past), but because at this point I think it's a genuinely interesting “case history” of the rise and gentle decline of a widely used software system, and that there is a lot to be learned.

At the height of its popularity, Tcl was up there with Perl, which was the scripting language in those days. Perl, Tcl, and Python were often mentioned together. Ruby existed, but was virtually unknown outside of Japan. PHP was on the rise, but still hadn’t really come into its own. Lua hadn’t really carved out a niche for itself yet, either. Tcl is no longer one of the popular languages, these days, so to say it hasn’t had problems is to bury one’s head in the sand: it has fallen in terms of popularity.

To examine what went wrong, we should probably start off with what went right:

  • Tk. This was probably the biggest draw. Easy, cross-platform GUIs were, and are, a huge reason for the interest in Tcl. Tk is actually a separate bit of code, but since many of the widgets are scripted in Tcl, the two are joined at the hip. Still, though, Tk is compelling enough that it's utilized as the default GUI library for half a dozen other languages.
  • A simple, powerful language. Tcl is easy to understand and get started with. It borrows from many languages, but is not an esoteric creation from the CS department that is inaccessible to average programmers.
  • Easily embeddable/extendable. Remember that Tcl was created in the late '80s, when computers were orders of magnitude less powerful than today. This meant that fewer tasks could be accomplished via scripting languages, but a scripting language that let you write routines in C, or, conversely, let the main C program execute bits of script code from time to time, was a very sensible idea. Tcl still has one of the best and most extensive C APIs in the game.
  • An event loop. Lately, systems like Python's “twisted” and node.js have made event-driven programming popular again, but Tcl has had it for years (see the sketch just after this list).
  • BSD license. This meant that you could integrate Tcl in your proprietary code without worrying about the GPL or any other legal issues.
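
To give a flavor of the event loop mentioned above, here is a minimal sketch using the built-in after and vwait commands (the "tick" name and the one-second interval are just examples of mine, not taken from any real project):

# Schedule a callback with 'after', then enter the event loop with
# 'vwait', which processes events until the variable 'forever' is set.
# It never is set here, so this prints "tick" once a second until
# interrupted.
proc tick {} {
    puts "tick"
    after 1000 tick
}
after 1000 tick
vwait forever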

These features led to Tcl being widely used, from Cisco routers to advanced television graphics generation programs to the AOLserver web server, which was busy serving out large quantities of dynamic web pages when many of us were still fiddling around with comparatively slow and clunky CGI programs in Perl. Note also that a lot of cool things have gone into Tcl in the meantime. It has a lot of impressive features, many more than most people realize, and has had many of them for a while. Check out http://www.tcl.tk to learn more about the “good stuff”. But that's not the point of this article…

There was a healthy, active community of developers producing lots of interesting add-ons for the language, and working on the language itself. This culminated in its adoption by Sun Microsystems, which hired the language’s creator, Dr. John Ousterhout, and a team of people, who added a lot of great features to the language. Quoting from Ousterhout’s history of Tcl page:

The additional resources provided by Sun allowed us to make major improvements to Tcl and Tk. Scott Stanton and Ray Johnson ported Tcl and Tk to Windows and the Macintosh, so that Tcl became an outstanding cross-platform development environment; Windows quickly came to be the most common platform. Jacob Levy and Scott Stanton overhauled the I/O system and added socket support, so that Tcl could easily be used for a variety of network applications. Brian Lewis built a bytecode compiler for Tcl scripts, which provided speedups of as much as a factor of 10x. Jacob Levy implemented Safe-Tcl, a powerful security model that allows untrusted scripts to be evaluated safely. Jacob Levy and Laurent Demailly built a Tcl plugin, so that Tcl scripts can be evaluated in a Web browser, and we created Jacl and TclBlend, which allow Tcl and Java to work closely together. We added many other smaller improvements, such as dynamic loading, namespaces, time and date support, binary I/O, additional file manipulation commands, and an improved font mechanism.

Unfortunately, after several years, Sun decided that they wanted to promote one and only one language. And that language was Java. So Ousterhout and many people from his team decamped to a startup that Ousterhout founded, called Scriptics, where the Tcl and Tk innovations continued:

In 1998, Scriptics made several patch releases for Tcl 8.0 to fix bugs and add small new features, such as a better support for the [incr Tcl] extension. In April 1999, Scriptics made its first major open source release, Tcl/Tk 8.1. This release added Unicode support (for internationalization), thread safety (for multi-threaded server applications), and an all-new regular expression package by Henry Spencer that included many new features as well as Unicode support.

However, as many a company based around open source was to find later, it's a tough space to be in. Scriptics changed its name to Ajuba, and was eventually sold (at a healthy profit, apparently, making it a relative dot-com success story, all in all) to Interwoven, for the “B2B” technology that Ajuba had developed. Interwoven was not particularly interested in Tcl, so, to provide for the ongoing development and governance of the language, the “Tcl Core Team” was created.

This was something of a blow to Tcl, but certainly not fatal: Perl, Python, Ruby, PHP, Lua have all had some paid corporate support, but it has by no means been constant, or included large teams.

At the same time, in the late '90s, open source was really starting to take off in general. Programmers were making all kinds of progress: they had begun to make Linux into what is today the world's most widely used server platform, and to lay the groundwork for the KDE and Gnome desktops. While these may still not be widely used, they are for the most part very polished systems, and leaps and bounds better than what passed for the 'Unix desktop' experience in the '90s.

One of the key bits of work added to Tk was making it look pretty good on Microsoft Windows systems. This was at a time when the “enterprisey” folks were turning away from Unix in the form of AIX, Solaris, HP-UX, et al. and taking up NT as the platform of choice, so it was in some ways sensible to make Tk work well there; in any case, as a cross-platform GUI toolkit, it ought to work well on Windows anyway.

And, on the Unix side, Tk emulated the expensive, professional Motif look and feel that serious Unix programmers used. What could go wrong?

As Gnome and KDE continued to mature, though, what would become one of Tk's major (in my opinion) marketing blunders took root. I have it on good authority, from someone who was there in the office, that the Scriptics guys working on Tcl and Tk viewed Gnome and KDE (and the Gtk and Qt toolkits) as not really worth their while. To be fair, since Tk has always been under a liberal BSD-style license, the Qt toolkit has always been “off limits”. Still, though, the attitude was that Tk was a standalone system, and since it ran on pretty much any Unix system, it didn't need to bother with Gnome or KDE. Gradually, though, as more and more people used Gnome and KDE exclusively on Linux, the Tk look and feel began to look more and more antiquated, a relic from the 1990s when Motif (which has since pretty much disappeared) was king. Tk applications started to really stand out by not looking at all like the rest of the operating system. And, while Linux may not be responsible for a vast portion of the world's desktops, it is widely used by developers, who were turned off by the increasingly creaky look of the Tk applications they saw.

Tk is and was actually a fairly flexible system, and it would have been possible to tweak the look and feel to make it look a bit better on Linux, without even doing any major work. Maybe not perfect, but certainly better looking. Nothing happened, though.

Another problem was that Tk and Tcl made it so easy to create GUIs that anyone could, and did, despite, in many cases, a complete lack of design skills. You can’t particularly blame the tools for how they’re used, but there was certainly a cultural problem: if you read most of the early Tcl and Tk books, and even many of the modern ones, there are hundreds of pages dedicated to exactly how to use Tk, but few to none explaining even basic user interface concepts, or even admonitions to the user to seek out that knowledge prior to attempting a serious GUI program.

The end result is that a lot of Tk programs, besides just looking “old fashioned”, had fairly poor user interfaces, because they were made by programmers who did not have a UI/design culture.

Contrast that with Gnome and KDE, which have made a point of focusing on how to make a good GUI for their systems, complete with guidelines about how applications should behave. It may have taken them some time to get things right, but they have done a lot to try and instill a culture of creating high-quality, well-integrated GUIs that are consistent with the system where they run.

Lately, there has been a lot of work to update the Tk look and feel, and it has finally started to bear fruit. However, in terms of marketing, the damage has already been done: the image of “old, crufty Tk” has been firmly planted in countless developers' minds, and no amount of facts is going to displace it in the near future.

Another problem Tcl faced, as it grew, was the tug-of-war between those wishing to see it small, light, and easy to distribute embedded within some other program, and those wishing it to become a “full-fledged” programming language, with lots of tools for solving everyday problems. Unfortunately, that tug of war seems to have left it somewhere in the middle. Lua is probably more popular these days as an embedded language, because it is very small, and very fast, and doesn't have as much “baggage” as Tcl. Meaning, of course, that it doesn't do as much as Tcl either, but for a system where one merely wishes to embed a scripting language, without much 'extra stuff', Tcl's extra functionality is perhaps a burden rather than a bonus. Meanwhile, Perl was happily chugging along with its CPAN system for distributing code, giving users easy access to a huge array of add-on functionality, and Python was building up a “batteries included” distribution that included a lot of very useful software straight out of the box. Tcl, on the other hand, chose to keep the core distribution smallish, and only lately has it got some semblance of a packaging and distribution system, which is, however, run by ActiveState, and is (at least according to a cursory glance at the Tcl'ers wiki) not even fully open source. The lack of a good distribution mechanism, combined with eschewing a larger, batteries-included main distribution, left Tcl users with a language that, out of the box, did significantly less than the competition. Technically, a Python-style “big” distribution would not have been all that difficult, so once again, I think this is a marketing problem: a failure of the Tcl Core Team to observe the “market”, assess what users needed, and act on it in a timely manner.

Somewhat related to the large Tcl vs small Tcl issue was one particular kind of extension that was noticeably absent from the language: a system for writing “object oriented” code. Tcl, at heart, will never be an OO language through and through, like Ruby or Smalltalk, but that doesn't mean that an OO system for it is not a useful way of organizing larger Tcl systems. Indeed, Tcl's syntax is flexible enough that it's possible to write an OO system in Tcl itself, or, optimizing for speed, to utilize the extensive C API in order to create new commands. Over the years, a number of such systems have arisen, the most well-known being “Incr Tcl” (a play on the incr command, which is akin to += 1 in languages like C). However, none of these extensions was ever included with the standard Tcl distribution or somehow “blessed” as the official OO system for Tcl. This meant that a newcomer to Tcl wishing to organize their code according to OO principles had to pick a system from several competing options. And of course, newcomers are the least able to judge a complex feature like that in a language, making it a doubly stressful choice. Furthermore, even experienced Tcl programmers who wanted to share their code could not utilize an OO system if they wanted their code to work with just standard Tcl. Also, if their code had a dependency on some OO system, it would require the user to download not only the extension in question, but the OO system it was built on, which, naturally, might conflict with whatever OO system the user had already selected! As of Tcl 8.6, thanks to the work of Donal Fellows, Tcl is finally integrating the building blocks of an OO system into the core itself, but this effort is probably something like 10 years too late.

Some other more or less minor things that have taken too long to get integrated into the Tcl or Tk distributions include the PNG format (still not there in a production release of Tcl) and a readline-based command line (Tkcon is nice, but not a replacement for simply being able to type “tclsh” and get a nice, functional shell like Python, Ruby and most other languages have; this could easily lead to a bad first experience for someone trying out Tcl). Tcl also took too long to integrate a first-class hash type (accessed with the 'dict' command), which only appeared in 8.5. Its “arrays” aren't bad, but don't quite have the full power of the hash table that dict implements. Once again, the code to do these things was/is out there; it has just been a matter of integrating it into Tcl and Tk, which has been a slow process.
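
For instance, here is a rough sketch of what makes dict more powerful than arrays (the keys and values are made-up examples of mine): dicts are ordinary first-class values, so they can be nested, copied, and passed around, while arrays are collections of variables and cannot be.

# A dict is just a value: it can contain another dict, be copied, or be
# passed to and returned from procs; arrays can't be used that way.
set user [dict create name "Ada" langs [dict create main "Tcl"]]
dict set user langs other "C"         ;# set a key inside the nested dict
puts [dict get $user langs other]     ;# prints: C
puts [dict size $user]                ;# prints: 2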

One actual technical problem that Tcl faces is the concept that all values must be representable as a string. This is more or less ok for things like lists, hash tables or numbers, but is problematic when the user wishes to represent a value that simply isn't a string. A basic example is a file handle, which at the C API level is a FILE*. How does Tcl get around this? It keeps an internal hash table that maps a Tcl-script accessible string, such as “file5”, to the FILE * value used internally by the file commands. This works pretty well, but there is a big “but”: since a string like “file5” can be composed at any time, and must always be able to reach the actual file pointer, you can't do any sort of “garbage collection” to determine when to clean things up automatically. Other languages have explicit references to resources, so the program “knows” when a resource is no longer referenced by the rest of the program, and can clean it up. Therefore, the programmer must explicitly free any resources referenced this way. This explanation is simplifying things somewhat, but it is something I view as a technical problem with Tcl.
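
To make that concrete, here is a small sketch (the file name is just an example of mine): the handle that open returns is nothing but a string, so nothing can track references to it, and the channel has to be closed by hand.

# 'open' hands back a plain string such as "file5"; since any code could
# construct an equal string at any time, Tcl cannot garbage collect the
# underlying channel, so it must be closed explicitly.
set f [open /tmp/example.txt w]
puts "the handle is just a string: $f"
puts $f "hello"
close $f    ;# forget this, and the channel stays open until the process exits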

If you’ve been following along, you’ve noticed a lot of “these days” and “recently”. That’s because Tcl is still very actively developed, with a lot of new ideas going into it. However, if you look at the release dates, it seems that after Ajuba was sold off, and Ousterhout pretty much abandoned an active role in the language for good, placing it in the hands of the Tcl Core Team, there was a lull in the momentum, with Tcl 8.5 taking 5 years to be released.

This is actually, in my opinion, an interesting phenomenon in languages: you risk hitting some kind of local maximum when your language is popular enough to have a lot of users who will be angry if things are changed or accidentally broken in the course of big upheavals. So you have to slow down, go carefully, and not rock the boat too much. On the other hand, there is an opportunity cost in that newer languages with less to lose can race ahead of you, adding all kinds of cool and handy new things, or simply fix and remove “broken” features. Erlang is another system that has, in my opinion, suffered from this problem to some degree, but this article is long enough already! Once again, though, not really a technical issue, but a problem with how the code was managed (and not an easy one to solve, at that).

A Tcl failure that I was personally involved with was the web infrastructure. What went on to become Apache Rivet was one of the first open source Apache/Tcl projects, and was actually quite a nice system: it was significantly faster than PHP, and of course made use of Tcl, which at the time had a large library of existing code, and could be easily repurposed for projects outside the web (or the other way around: non-web code could be brought in to a web-based project). One thing I ought to have done differently with the Apache Rivet project was listen to the wise advice of Damon Courtney, my “partner in crime” on the project, who wanted to see Apache Rivet have a fairly “fat” distribution with lots of useful goodies. Rails and Django, these days, have shown that that’s a sensible approach, rather than relying on lots of little extensions that the user has to go around and collect. The code was out there, I should have helped make Rivet do more “out of the box”.

A problem that is and isn't one: the syntax. Tcl's syntax is amazingly flexible. Since everything is a command, you can write new commands in Tcl itself – and that goes for control structures, too! For instance, Tcl has a “while” command, but no “do … while”. It's very easy to implement that in Tcl itself. You simply can't do that in most “everyday” languages. However, this comes at something of a “cost”. The syntax, for your average programmer who doesn't want to go too far out of their comfort zone, is perhaps a little bit further afield from the C family of languages than they would prefer. Still, though, a “human” problem, rather than a technical one. Perhaps, sadly, the message is that you'd better not “scare” people when introducing a new language by showing them something that doesn't look at least a little bit familiar.
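
As an illustration, here is a minimal sketch of a do … while written in plain Tcl (this is not part of any standard distribution, just a few lines built on proc and uplevel; the usage at the bottom is an arbitrary example):

# A do ... while control structure defined in Tcl itself: run the body
# once, then hand off to the regular 'while' command, evaluating both the
# body and the condition in the caller's scope via 'uplevel'.
proc do {body keyword cond} {
    if {$keyword ne "while"} {
        error "usage: do body while cond"
    }
    uplevel 1 $body
    uplevel 1 [list while $cond $body]
}

# Prints 0, 1 and 2.
set i 0
do { puts $i; incr i } while {$i < 3}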

Conclusions? First and foremost, that, gradually, Tcl and Tk continue to be improved. However, if one visits developer forums, there are also a lot of negatives associated with the Tcl and Tk “brands”, and I am not sure if it will be possible to rectify that. So what can we learn from the rise and subsequent “stagnation” of Tcl?

  • Success in the first place came from doing one particular thing very well, and making it a lot easier than other existing systems at the time. That’s a good strategy.
  • Not staying up to date can be deadly. Of course it can be tricky to know what's a genuine trend and what's just a fad in this industry, and picking the right things to stay up to date with is not easy. That said, there are a number of areas where Tcl and Tk failed to follow what became very clear directions until way too late.
  • Do your best not to get trapped between existing users who don’t want to rock the boat, and don’t lose your agility and ability to iterate with the system you’re developing. Long delays between releases can be deadly.
  • Don't lose touch with your “roots”. In this case, the open source community that is a “breeding ground” for new developers and projects. Tcl and Tk became passé in that environment, which has led to their lack of adoption for new projects not only in the world of open source, but in businesses as well.
  • Don’t isolate yourself: Tcl and Tk stopped appearing at a lot of the open source conferences and events and in magazines/books/articles online, either because with no figurehead/leader to invite there was less interest in speakers and authors, or because the rest of the Tcl Core Team wasn’t particularly engaged, or for whatever other reason. This created something of a negative feedback loop where Tcl and Tk were things associated with the past, rather than something currently talked about and discussed.

Books vs “e-books”?

I've been thinking about something for a while, and to be honest, still haven't reached any firm conclusions: what to think about self-published "e-books"?  I'm curious to hear your opinions.

For instance:

These are all electronic, in that they aren't distributed as real, paper books; they have no ISBN; and they are generally only available via the author's web site (you won't find them on Amazon.com).  They aren't simply on-line, PDF versions of real books.

They're certainly a departure from the traditional model of having a publisher, editor, and real, physical books that could be purchased from book stores.  They don't appear to have been through any kind of formal editing or quality control process.  The prices seem to differ quite a bit; the first one is $19, the second one is $12, and the last one is $30.77.

For the authors, the appeal is obvious: they get to keep all of the money, and don't have to fool around with a lot of "process".

Consumers, on the other hand, have to consider different aspects: with a "real book", the bureaucracy and process exist to guarantee some minimum standards of quality.  If you buy an O'Reilly book, you know that it's probably like many of the other books they sell: perhaps it won't stand the test of time like something written by Knuth, but it'll be a pretty good guide to the technology it describes, most likely by someone who is indeed an expert in that area.  If I buy some random PDF on the internet, it may come from someone who really knows their stuff, or it may be junk.  On the other hand, were this market to grow, theoretically prices could come down.  Since the people who are authoring the book don't have to fool around with editing, printing, and so on, and get to keep all the money themselves, they could in theory keep their prices significantly lower than someone creating a more 'traditional' book with a lot of overhead.  That is, of course, if the book is one where there is competition in its niche.  Right now a lot of these books that pop up on my radar are written by domain experts.  However, what's to prevent a lot of people from jumping in and attempting to make a quick buck with a flashy looking web site?  Buying books based only on reputation?  That might lead to people who are really good authors, but perhaps not well known as "doers" (they didn't invent the technology in question), being left out in the cold.  Also, there is something of an unknown quantity about "pdf books".  For instance, after raking in a bunch of cash with theirs, 37signals put it on their web site, completely for free.  That had to leave the guy who bought it the day before it went free feeling like a bit of a chump.  At least with a 'real book', even if the contents are posted on the internet, you have a physical object that belongs to you.  I wonder how bad piracy is, and how bad it might be were these to become more popular?  Another thing worth noting is that, via services like Lulu.com, it *is* possible to print these out.

In any case, I think things are likely to change with time, as we aren't dealing with a static situation, but rather one where a changing landscape may lead to different outcomes, as the key variables… vary.

I am honestly unsure of what to make of this development.  How do you see the market for "home brewed" pdf ebooks evolving?