2012-10-25

Lisp Hackers: Slava Akhmechet

Slava Akhmechet published several enlightening essays at defmacro.org, one of which I often recommend to people interested in learning about Lisp: The Nature of Lisp. He also created a continuation-based Lisp web framework, Weblocks, backed by a delimited continuations library, cl-cont. Other than that, he is a co-founder of the startup RethinkDB, about which he tells a bit in the interview.
Tell us something interesting about yourself.

For a long time I thought that human achievement is all about science and technology. In the past few years I realized how misled I was. Hamlet is as important an achievement as discovering penicillin. I wish I'd figured out earlier that science, for all its usefulness, is very limiting if one adopts it as an article of faith.

What's your job? Tell us about your company.

I'm a founder at RethinkDB. We spent three years building a distributed database system that we're about to open source and release in the next two weeks. The system allows people to easily create clusters of machines, partition data with the click of a button, and run advanced, massively parallelized, distributed queries using a very comfortable query language we've designed. The product is really delightful to use — we were just using it today to analyze census data for the upcoming presidential election in the U.S., and playing with the data in it is a real joy. I'm very proud of what we've done here — I hope it will make lots of people's jobs easier and let them do things they couldn't have done before.

My job here is to do the most important thing at any given time. Sometimes it means fixing bugs, sometimes it means demoing the product to customers, and sometimes it means driving to buy supplies so our developers can get their jobs done.

Do you use Lisp at work? If yes, how have you made it happen? If not, why?

We don't use Lisp, but much of our software is built on ideas borrowed from Lisp. We don't use it because we needed low-level control — most of the code is written in C++, even with some bits of assembly. But we've borrowed an enormous number of ideas from Lisp. In fact, if we weren't Lispers, we would have built a very different (and I think significantly inferior) product.

What brought you to Lisp? What holds you?

A guy named bishop_pass on the gamedev.net forums about fifteen years ago. He was a really good advocate, and I respected his opinions on other subjects, so I decided to check Lisp out. I enjoyed it immensely and spent years hacking in it. Today the only Lisp I still use is Emacs Lisp. I honestly don't know if I'll program in Lisp again (other than for fun, of course), but the ideas behind it will be with me forever.

What's the most exciting use of Lisp you had?

I built cl-cont — a macro that converts Lisp code to continuation passing style. I honestly think I learned more about programming from that experience than from anything else I've done before or after.
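To give a flavor of what that means, here is a minimal sketch (my example, not Slava's code) using cl-cont's with-call/cc and call/cc: the continuation of the body is captured, stashed away, and resumed later, which is essentially the trick Weblocks builds on.

    ;; assumes cl-cont is loaded, e.g. via (ql:quickload "cl-cont")
    (defvar *resume* nil)

    (cl-cont:with-call/cc
      (format t "before the pause~%")
      (cl-cont:call/cc
       (lambda (k)
         ;; stash the rest of the computation instead of running it now
         (setf *resume* k)))
      (format t "after the pause~%"))
    ;; prints "before the pause" and returns

    (funcall *resume* nil)
    ;; prints "after the pause": the suspended rest of the body runs now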

What do you dislike the most about Lisp?

Probably the arrogance of the community that surrounds it. Knowing Lisp certainly doesn't make one a better person, nor even necessarily a better programmer.

Among the software projects you've participated in, what's your favorite?

Definitely RethinkDB. We took a really complex subject (real-time distributed systems) and made it extremely accessible and super-easy to use. I love the product both because we made the user experience a joy, and because of the really advanced technology that goes on inside to make that happen (from low-level assembly hacks all the way up to abstract mathematics).

If you had all the time in the world for a Lisp project, what would it be?

I'd want to build my own Lisp dialect. I know, I know, it's been done to death, there is no need to do it, and it only hurts the community, but in the presence of infinite time, it's just too much fun not to do.

Describe your workflow, give some productivity tips to fellow programmers.

The most important thing I learned about productivity is this Alan Kay quote: "Perspective is worth 80 IQ points." You could be the most productive person in the world, but it won't make the slightest bit of difference if you're pointing your talents in a direction that isn't useful to other people. If you're talented, your gift is precious and your time is limited. Learn how to direct your talents; it will be the most important thing you do.

You're currently a co-founder of the startup RethinkDB, which went through Y Combinator. As an insider of the startup ecosystem, in your opinion, what are the areas for Lisp use in startups nowadays with the biggest potential upside, and why?

This isn't a popular stance in the Lisp community, but I think that today Lisp is mostly valuable as an education tool, as a means of thinking, and as an engine of ideas. It's very important for that. But as far as practical use goes, there are better options today.

2012-10-19

Lisp Hackers: François-René (Faré) Rideau

François-René Rideau works at ITA Software, one of the largest employers of Lispers, which was acquired by Google a year ago. While at ITA he stepped up to support and improve ASDF, the system definition facility that is at the core of Lisp package distribution. He's also the co-author of the recently published Google Common Lisp Style Guide, which also originated at ITA.


He's also an active writer, both of code and of prose. His thoughts and articles can be found on Twitter, Facebook, Google+, LiveJournal, and his site.
Tell us something interesting about yourself.

I like introducing myself as a cybernetician: someone interested in the dynamic structure of human activities in general.

Programming languages and their semantics, operating systems and reflection, persistence of data and evolution of code, the relation between how programmers are organized and what code they produce — these are my topics of immediate professional interest. For what that means, see for instance my slides (improved) from ILC'09: "Better Stories, Better Languages" or my essay "From Creationism to Evolutionism in Computer Programming".

However, I'm also interested in cybernetics as it applies to Civilization in general: past, present and future. See for instance my essay "Identity, Immunity, Law and Aggression on the Rapacious Hardscrapple Frontier" or my writings about Individual Liberty and the basic principles of Economics.

Last but not least, I was recently married to my love Rebecca Kellogg, with whom I have since had a daughter, Guinevere Lý "Véra" Kellogg Rideau (born last May). This gives me less free time, yet has somehow made me more productive.

What's your job? Tell us about your company.

For the last 7 years or so, I have been working at ITA Software, now part of Google Travel. I have been working on two servers written in Lisp: at first briefly on QPX, the low (air)fare search engine behind Orbitz and Google Flights, then mostly on QRes, a reservation system now launched with Cape Air. These projects nowadays each count about half a million lines of Common Lisp code (though written in very different styles), and each keeps growing, with tens of active developers.

I suspect that my login "fare" (at itasoftware) was a pun that played in favor of recruiting me at ITA; however, it wasn't available after the Google acquisition, so now I'm "tunes" (at google), to remind myself of my TUNES project.

At ITA, I have been working mostly on infrastructure:
  • how to use better compilers (moving from CMUCL to SBCL, CCL),
  • how to build, run and test our software,
  • how to maintain the free software libraries we use and sometimes write,
  • how to connect QRes to QPX and follow the evolution of its service,
  • how to persist objects to a robust database,
  • how to migrate data from legacy systems,
  • how to upgrade our software while it's running, etc.
And debugging all of the above and more, touching many parts of the application itself along the way.

I think of my job at ITA so far as that of a plumber: On good days, I design better piping systems. On bad days, I don gloves and put my hands down the pipes to scrub.

Since you're mentioning me as working at ITA and on ASDF, I suppose it is appropriate for me to tell that story in full.

In building our code at ITA, we had grown weary of ASDF as we had accumulated plenty of overrides and workarounds to its unsatisfactory behavior. Don't get me wrong: ASDF was a massive improvement over what existed before (i.e. mk-defsystem), making it possible to build and share Common Lisp software without massive headaches in configuring each and every library. We have to be grateful to Dan Barlow indeed for creating ASDF. But the Common Lisp ecosystem was dysfunctional in a way that prevented much needed further improvements to ASDF. And so I started working on a replacement, XCVB.

Now, at some point in late 2009, I wrote a rant explaining why ASDF could not be saved: "Software Irresponsibility". The point was that even though newer versions of ASDF were written that slowly addressed some issues, every implementation stuck to its own version with its own compatibility fixes; no vendor was interested in upgrading until their users would demand upgrades, and users wouldn't rely on new features and bug fixes until all vendors upgraded, instead caring a lot about bug-compatibility, in a vicious circle of what I call "Software Irresponsibility", with no one in charge, consensus required for any change, no possible way to reach consensus, and everyone discouraged.

However, I found a small flaw in my condemnation of ASDF as unsalvageable: if, which was not the case then, it were possible to upgrade ASDF from whichever version a vendor had installed to whichever newer version you cared for, then ASDF could be saved. Users would be able to rely on new features and bug fixes even when vendors didn't upgrade, and vendors would have an incentive to upgrade, not to stay behind, even if their users didn't directly demand it. The incentive structure would be reversed. Shortly after I wrote this rant, the current ASDF maintainer stepped down. After what I wrote, I felt like the honest thing to do was to step forward. Thus, I started making ASDF self-upgradable, then massively improved it, notably making it more robust, portable, and easy to configure — yet fully backwards compatible. I published it as ASDF 2 in 2010, with the help of many hackers, most notably Robert Goldman, and it has quickly been adopted by all active Common Lisp vendors.

You can read about ASDF and ASDF 2 in the article I wrote with Robert Goldman for ILC 2010: "Evolving ASDF: More Cooperation, Less Coordination". I'm also preparing a talk at ILC 2012 where I'll discuss recent enhancements. I have to admit I didn't actually understand the fine design of ASDF until I had to explain it in that paper, thanks to the systematic prodding of Robert Goldman. Clearly explaining what you're doing is something I heartily recommend to anyone who's writing software, possibly as a required step before you declare your software complete; it really forces you to get the concepts straight, the API clean, and the tests passing. That also did it for me with my more recent lisp-interface-library, on which I'm presenting a paper at ILC 2012: "LIL: CLOS reaches higher-order, sheds identity, and has a transformative experience".

ASDF 2 had a double downside for XCVB: it took a lot of resources that I didn't put into XCVB, and it made ASDF a much better system for XCVB to try to disrupt. It isn't as easy anymore to be ten times better than ASDF. I still hope to complete XCVB some day and make it good enough to fully replace ASDF on all Common Lisp platforms, but the goal has been pushed back significantly.

Now one important point that I want to explicitly stress is that the problem with ASDF was not a strictly technical issue (though there were many technical issues to fix), nor was it strictly a social issue; it was an issue at the interface between the social and the technical spheres, one of how our infrastructures and our incentives shape each other, and what kind of change can bring improvement. That's the kind of issues that interest me. That's why I call myself a cybernetician.

Do you use Lisp at work? If yes, how have you made it happen? If not, why?

I've made it happen by selection. I applied at ITA Software precisely because I knew (thanks to the Carl de Marcken article published by Paul Graham) that the company was using Lisp to create real-world software. And that's what I wanted to do: create real-world software with a language I could use without wanting to kill myself every night because it was turning me into a pattern-expanding machine rather than a human involved in thinking and using macros as appropriate.

"I object to doing things that computers can do." — Olin Shivers

Yet, in my tasks as a plumber, I have still spent way too much time writing shell scripts or Makefiles; though these languages possess some reflection including eval, their glaring misdesign only lets you go so far and scale so much until programs become totally unmanageable. That's what pushed me over the years to develop various bits of infrastructure to do as much of these things as possible in Lisp instead: cl-launch, command-line-arguments, philip-jose, xcvb, asdf, inferior-shell.

Interestingly, the first and the last, cl-launch and inferior-shell, are kind of dual: cl-launch abstracts over the many Lisp and shell implementations so you can invoke Lisp code from the Unix shell; it is a polyglot lisp and shell program that can manipulate itself and combine parts of itself with user-specified Lisp code to produce an executable shell script or a dumped binary image; I sometimes think of it as an exercise in "useful quining". inferior-shell abstracts over the many Lisp and shell implementations so you can invoke Unix shell utilities from any Lisp implementation, remotely if needs be (through ssh), and with much nicer string interpolation than any shell can ever provide; it is a classic Lisp library notably available through Quicklisp. With the two of them, I have enough Unix integration that I don't need to write shell scripts anymore. Instead, I interactively develop Lisp code at the SLIME REPL, and have a shell-runnable program in the end. That tremendously improved my quality of life in many situations involving system administration and server maintenance.
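As a small illustration of the inferior-shell side (my sketch, based on its documented run/ss and run/lines entry points), shell pipelines become plain Lisp data at the REPL:

    ;; assumes inferior-shell is loaded, e.g. (ql:quickload "inferior-shell")
    (inferior-shell:run/ss
     '(pipe (echo "hello, world") (tr "a-z" "A-Z")))
    ;; => "HELLO, WORLD"

    ;; output as a list of lines, handy for further Lisp processing
    (inferior-shell:run/lines '(ls "-l" "/tmp"))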

What brought you to Lisp? What holds you?

My first introduction to Lisp was in high school, using the HP RPL on my trusty old HP 28C (eventually upgraded to an HP28S, with 32KB of free RAM instead of 4KB!). When I became a student at Ecole Normale Supérieure, I was taught Caml-light by xleroy himself, I learned to use Emacs, and I met Juliusz Chroboczek, who introduced me to Scheme and Common Lisp, continuations and SICP. Finally, during my vain efforts to gather a team to develop an operating system based on a higher-level language as part of the TUNES project, I was introduced to Lisp machines and plenty of other interesting concepts.

I use Lisp because I couldn't bear to program without higher-order functions, syntactic abstraction and runtime reflection. Of all Lisp dialects, I use Common Lisp mainly because that's what we use at work; of course, a large reason why we use it at work is that it's a good language for practical work. However, frankly, if I were to leave ITA (now Google), I'd probably stop using Common Lisp and instead use Racket or Maru, or maybe Factor or Slate, and try to bootstrap something to my taste from there.

What's the most exciting use of Lisp you had?

I remember being quite exhilarated when I first ran the philip-jose farmer: it was a server quickly thrown together by building green-threads on top of arnesi's (delimited) continuation library for CL. With it, I could farm out computations over a hundred servers, bringing our data migration process from "way too slow" (weeks) to "within spec" (a few hours). It's impressive how much you can do in Lisp and with how little code!

While I released the code in philip-jose, it was never well-documented or made user-friendly, and I suspect no one ever used it for real. This unhappily includes ITA, for my code never made it to production: I was moved to another team, our customer went bankrupt, and the new team used simpler tools in the end, as our actual launch customer was 1/50 the size of our first prospect.

What do you most dislike about Lisp?

For Lisp in general, I would say the lack of good ways to express restrictions on code and data. Racket has been doing great work with Typed Racket and Contracts; but I'm still hoping for some dialect with good resource management based on Linear Logic, and some user-extensible mechanism to define types and take advantage of them.

For Common Lisp in particular, though I do miss delimited continuations, I would say that its main issue is its lack of modularity. The package system is at the same time low-level and inexpressive; its defsystem facilities are also lacking, ASDF 2 notwithstanding; up until the recent success of Zach Beane's Quicklisp, there wasn't a good story to find and distribute software, and even now it's still behind what other languages have. This is part of a vicious circle where the language attracts and keeps a community of developers who live happily in a context where sharing and reusing code is relatively expensive (compared to other languages). But things are getting better, and I have to congratulate Zach Beane once again for Quicklisp. I believe I'm doing my small part.

Among the software projects you've participated in, what's your favorite?

I unhappily do not have a great history of success in software projects that I have actively participated in.

However, I have been impressed by many vastly successful projects in which I had but a modest participation. In the Linux kernel, the Caml community, the Racket community (QPX and QRes at work might also qualify, but only to lesser degrees), there were bright people unified by a common language, by which I mean not merely the underlying computer programming language, but a vision of things to come and a common approach to concepts: not just architecture but architectonics. Another important point in these successful projects was Software Responsibility (as contrasted with the previously discussed Software Irresponsibility): there is always someone in charge of accepting or rejecting patches to any part of the system. Patches don't linger forever unapplied yet unrejected, so the software goes forward and the rewarded contributors come back with more and/or better patches. Finally, tests. Lots of them. Automatically run. All the time. Proofs can do, too, though they are usually more expensive (and if you are going to do testing at the impressive scale of sqlite, maybe you should do proofs instead; see CPDT). I discovered, the hard way, that tests (or proofs) are the essential complement to programs, without which your programs WILL break as you modify them.

If you had all the time in the world for a Lisp project, what would it be?

I would resurrect TUNES based on a Linear Lisp, itself bootstrapped from Racket and/or Maru.

Describe your workflow, give some productivity tips to fellow programmers.

First, think hard and build an abstract model of what you're doing. Guided by this understanding of where you're going, code bottom up, write tests as you do, and run them interactively at the SLIME REPL; make sure what you write is working and passing all tests at all times. Update your abstract model as it gets pummeled into shape by experience. Once you've got the code manually written once or twice and detect a pattern, refactor it using macros to automate away the drudge so the third time is a piece of cake. Don't try to write the macro until you've written the code manually and fully debugged it. Never bother with low-level optimization until the very end; but bother about high-level optimization early enough, by making sure you choose proper data structures.
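As a toy illustration of the "manually once or twice, then macro" advice (my example, not Faré's), suppose several functions repeat the same timing boilerplate; once the pattern is clear, a short macro automates the drudge:

    ;; the pattern, written out by hand the first couple of times:
    (defun load-config (path)
      (let ((start (get-internal-real-time)))
        (prog1 (with-open-file (in path) (read in))
          (format t "load-config took ~a ticks~%"
                  (- (get-internal-real-time) start)))))

    ;; once it repeats, capture it in a macro:
    (defmacro with-timing ((label) &body body)
      "Run BODY, reporting how long it took under LABEL."
      (let ((start (gensym "START")))
        `(let ((,start (get-internal-real-time)))
           (prog1 (progn ,@body)
             (format t "~a took ~a ticks~%" ,label
                     (- (get-internal-real-time) ,start))))))

    ;; the third time is a piece of cake (load-config rewritten with the macro):
    (defun load-config (path)
      (with-timing ("load-config")
        (with-open-file (in path) (read in))))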

Unhappily, I have to admit I am a serial under-achiever. I enjoy thinking about the big picture, and I like to believe I often see it better and further than most people; but I have the greatest trouble staying on track to bring about solutions: I have so many projects, and only one life to maybe complete a few of them! The only way I can actually get a few things done, is to decompose solutions into small enough steps such that I can keep focused on the next one and get it done before the focus goes away.

A year ago Google bought ITA, which was probably the largest Lisp company of recent times. What were the biggest upsides and drawbacks of using Lisp at the scale of ITA? Does Lisp have a future inside Google?

On the upside, we certainly have been able to write quite advanced software that we might not have otherwise managed. A million lines of Lisp code, including its fair share of macros and DSLs, would be so many more million lines of code without the syntactic abstraction made possible by Lisp. However hard and expensive it was with Lisp, I can only imagine how many times worse it would have been with anything else.

At the top of the tech bubble in 2008, we had over fifty Lisp programmers working just on QRes, gathered at an exponential rate over 3 years. That's a lot. We didn't yet have good common standards (Jeremy Brown started one, later edited by Dan Weinreb; I recently took it over, expanded it, merged it into the existing beginning of a Google Common Lisp Style Guide, and published it), and it was sometimes hard to follow what another hacker wrote, particularly if the author was a recently hired three-comma programmer. But with or without standards, our real, major problem was the lack of appropriate management.

We were too many hackers to run without management, and none of our main programmers were interested in becoming managers; instead, managers were parachuted in from above, and some of them were pretty bad: the worst amongst them immediately behaved like empire-building bullies. These bad managers were trying to control us with impossibly short, arbitrary deadlines; not only did this cause overall bad code quality and morale burnout, the renewal of such deadlines quarter after quarter was an impediment to any long-term architectural consideration for years. What is even worse, the organization as set up had a lot of inherently bad incentives and created a lot of conflicts, so that even passable managers would create damage, and otherwise good engineers were pitted against each other on two sides of absurd interfaces, each team developing a lot of scar tissue around these interfaces to isolate itself from the other teams. Finally, I could witness how disruptive a single bad apple can be when empowered by bad management rather than promptly fired.

I have had a lot of losing fights with QRes management at a time when, hanging on an H1B visa, I was too much of a coward to quit. Eventually, the bad people left, one by one, leaving behind a dysfunctional organization; and great as the people that manned it may have been, none was able or willing to fix the organization. Then finally, Google acquired us. There's a reason why, of two companies founded at about the same time, one buys the other and not the other way around: one grew faster because it got some essential things right that the other didn't. Google, imperfect as it necessarily is, gets those essential things right. It cares about the long term. It builds things to scale. It has a sensible organization. It has a bottom-up culture. So far, things have only improved within QRes. Launching was also good in many ways: it makes us and keeps us real.

Lisp can be something of a magic tool to solve the hardest technical issues; unhappily it doesn't even start to address the social issues. We wasted a whole lot of talent due to these social issues, and I believe that in an indirect way, this is related to the lack of modularity in Common Lisp, as it fostered a culture of loners unprepared to take on these social issues.

So I'm not telling you this story just to vent my past frustration. There too I have a cybernetic message to pass on: incentives matter, and technical infrastructure as well as social institutions shape those incentives and are shaped by them.

As for the future of Lisp at Google, that million lines of Common Lisp code ain't gonna rewrite itself into C++, Java, Python, Go, or even DART. I don't think the obvious suggestions that we should rewrite it were ever taken seriously. It probably wouldn't help with turnover either. But maybe, if it keeps growing large enough, that pile of code will eventually achieve sentience and rewrite itself indeed. Either that, or it will commit suicide upon realizing the horror.

Anything else I forgot to ask?

Ponies.


2012-10-08

Lisp Hackers: Daniel Barlow

Daniel Barlow was one of the most active contributors to the open-source Lisp ecosystem when its development took off in the early 2000s. Together with Christophe Rhodes he was the first to join SBCL hacking after the project was started by William Newman. He also created a lot of early Lisp web tools, like the Araneida HTTP application server, on top of which he built the first version of cliki.net, which served the Lisp community for almost 10 years (the second version went live earlier in 2012). Studying the Cliki source was a kind of zen experience for me, as it did so much, yet in a very simple way.


But his largest contribution is, probably, ASDF, about which, as with Cliki, opinions among Lisp programmers are divided. Dan explains his attitude to it in the interview.

In the mid 2000s his involvement with open-source Lisp gradually diminished, as he stopped working as a consultant and got a full-time job. Yet he remains fondly remembered in the community.

Tell us something interesting about yourself.

I don't do interesting. Um, improvise. Hacker, skater, cyclist, husband, father to a seven-month-old son as demanding as he is adorable, computing retro-grouch who uses Linux on the desktop.

I have a metal plate and some pins in my right forearm where I broke it a couple of months ago, inline skating in the Le Mans 24 hour relay event. I have now regained more or less complete range of motion in that hand and can advise anyone doing the event next year that the carpet in the pit boxes is unexpectedly treacherous when it's been waterlogged by a sudden thunderstorm.

Still, we came fifth in category, which makes me very happy.

What's your job? Tell us about your company.

I started a new job about three months ago, in fact. I'm now working at Simply Business in London, busily disrupting the business insurance market. Which is to say, writing web apps for the online sale of business insurance.

It's not quite as buzzwordy as it sounds, actually. Business insurance is traditionally sold by brokers, who are humans and therefore, although really good at dealing with complex cases and large contracts where the personal touch is required, tend to be a trifle expensive for straightforward policies that could be much more economically sold online. The industry is ripe for disintermediation.

What brought you to Lisp?

A combination of factors around the time I was at university: the UNIX-Haters Handbook, which I bought to sneer at and ended up agreeing with; Caml Light, which I used in my fourth year project; Perl - specifically my horror to learn that it flattens (1,2,3,(4,5,6)) to (1,2,3,4,5,6) - yes, I know about references, but I think it's a bug not a feature that the sensible syntax is reserved for the silly behaviour - and meeting some people from Harlequin (as was) at a careers fair.

It took me another couple of years or so to find CMUCL - in fact, I think it was another couple of years or so before it was ported to the x86 architecture, so it wouldn't have done me much good if I had known about it earlier - but looking back I suppose that was where the rot set in.

Do you use Lisp at work? If yes, how have you made it happen? If not, why?

No, we're primarily a Ruby shop, with a sideline in a large legacy Java app which we're working to replace. I think there've been a couple of uses of Clojure in hack days, but that's as far as it goes.

Why not? Well, apart from the point that I've only been there since July ... It's The Ecosystem, I suppose. Ruby as a language has pretty good support for OO paradigms and a whole bunch of free software libraries for doing web-related things: Ruby as a community is big on Agile and TDD and maintainable design. And it's at least possible (if not exactly easy in the current bubble) to engage a regular recruitment agent and task him with finding competent programmers who know it. I'm not saying Lisp is exactly bad at any of that, but it's at least questionable whether it's as good along all of those axes, and it's certainly not better enough to make the switch sensible.

What's the most exciting use of Lisp you had?

SBCL was probably one of the most fun projects I've ever worked on. Working with people who were mostly smarter than me or had better taste than me or both, on a project whose goal was to make it possible for people less smart than me to hack on a Lisp system. And context switching between the CL with all its high level features and (e.g.) Alpha assembly was a real kick - it's a bit like I imagine building a Lisp machine would be, except that the goal is achievable and the result is generally useful.

What do you dislike the most about Lisp?

I don't really use it enough any more to react to that with the required levels of venom. I should probably say ASDF, everyone else does :-)

I guess if you force me to an answer, it'd have to be its disdain for the platform it lives on - take, for example, CL pathname case conversion rules. Whoever decided that Unix systems could reasonably be said to have a "customary case" had, in my view, not looked very hard at it.
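To make the complaint concrete (my example, assuming a Unix filesystem and a conforming implementation), the standard's "customary case" rules mean an all-lowercase file name comes back uppercased when you ask for it in :common case:

    ;; the file on disk is /tmp/foo.lisp, all lowercase
    (pathname-name #p"/tmp/foo.lisp" :case :local)   ; => "foo"
    (pathname-name #p"/tmp/foo.lisp" :case :common)  ; => "FOO"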

As far as I can tell, you're currently mostly doing work in Ruby. What are the pros and cons of Ruby development compared to Lisp?

The transpose-sexps function in Emacs does nothing useful in Ruby mode. rails console is a poor substitute for a proper toplevel. Backtraces don't show the values of parameters and local variables. (Yes, pry helps a lot). And the garbage collector (in MRI, anyway) is sucky to the point that even 1990s Java GC could probably beat it in a fair fight.

On the other hand, libraries.

Here's an interesting thought experiment, though: there's a clear difference between the Lisp workflow where you change the state of your image interactively to get the code into working shape very quickly (and then later try to remember what it was you did) and the more scripted approach of test-driven development in Ruby where you put everything (code, test setup, assertions) in files that you reload from disk on each run. How would you meld the two to get repeatable and fast iterations? A lot of people are doing things like Spork (which forks your application for each test it runs, throwing the child state away after the test has run) but they never seem to me to be more than 80% solutions. My intuition is that you'd want to stick to a much more functional design and just make state a non-problem.

Among the software projects you've participated in, what's your favorite?

SBCL was a lot of fun, as I said earlier. ASDF is a candidate too, just because it must so obviously fill a need if people are still cursing it - as they seem to be - ten years later :-)

Describe your workflow, give some productivity tips to fellow programmers.

It's taken me the best part of three months to get this interview back to Vsevolod, I'm the last person anyone should be asking about workflow or productivity. :-)

Um. I've been doing a lot of TDD lately. Given that as recently as two years ago I was castigating it as a religion, this might be seen as a capitulation, a conversion, or a change of mind. What can I say? Actually, pretty much now what I said then: the value of TDD is in the forces it exerts on your design — towards modularity, functional purity, decoupling, all those good things - not so much in the actual test suite you end up with. Process, not product. These days everyone thinks that's obvious, but back then it was either less widely known or less explicitly stated, or else I was just reading the wrong blogs.

(Of course, the tendency in Lisp to write code interactively that can be tested ad hoc at the repl probably has a very similar effect on coupling and functional style. My personal experience is that TDD doesn't seem to be nearly as valuable in repl-oriented languages, but YMMV)

More generally: go home, do some exercise, get some sleep. Sleep is way underrated.

Anything else I forgot to ask?

Some day I will write an apology for ASDF.

Pedants will note that the word "apology" not only means "an expression of remorse or regret" but also "a formal justification or defence", and may infer from that and my general unwillingness to ever admit I was wrong that I'm not about to actually say I did a bad thing in writing it. Seriously, go find a copy of MK-DEFSYSTEM and try porting it to a Lisp implementation it doesn't support.

In 2002 I presented a paper at the ILC (about CLiki, not ASDF) that said essentially "worse is better than nothing", and - unless the "worse" has the effect of stifling a potentially better solution from coming along later - I still stand by that.

2012-08-20

How to Combine Functional Languages with the Mainstream

About a month ago I was invited to a Kyiv meetup of functional programmers to talk about practical experience with Clojure. In the end I criticized Clojure a little, but mostly I wanted to talk about something else: where the niche of functional and other non-mainstream languages (for example, Lisp) lies, and how they can be used in combination with mainstream ones. As usual, it came out rather rambling, so I'll try to lay it out here more clearly and in a more structured way.

Here's the video:


What I wanted to say boils down to just 3 simple things. There's nothing particularly new in them, but to my mind that doesn't make them any less valuable.

1. The Unix Philosophy for the Web


When it comes to server-side development, the most effective approach to building scalable systems (both in terms of load and of the effort needed to evolve them) was and remains the Unix way:
small pieces, loosely joined, that do one thing, but do it well

Only, unlike classic Unix, these pieces now live on separate network nodes and communicate not through pipes but through network text interfaces. To organize such communication there is a set of simple, reliable, well-scaling and, very importantly, de-facto standard and language-independent tools. Different tasks call for different tools, and they include:
  • low-level communication mechanisms: sockets and ZeroMQ

  • high-level communication protocols: HTTP, SMTP, etc.

  • serialization formats: JSON and a dozen others

  • data exchange points: Redis, various MQs
This is neither REST nor SOA in their pure form; rather, those schemes are somewhat specialized, and sometimes far-diverged, examples of this approach. It is no more and no less than a toolkit on top of which you can build any network architecture, from centralized to P2P.
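As a toy illustration (mine, not from the talk), here is what one such "small piece" might look like in Common Lisp, assuming the hunchentoot and cl-json libraries; any other component, in any language, can then consume it over plain HTTP:

    ;; (ql:quickload '("hunchentoot" "cl-json"))
    (hunchentoot:define-easy-handler (status :uri "/status") ()
      (setf (hunchentoot:content-type*) "application/json")
      (json:encode-json-to-string
       '((:service . "inventory") (:status . "ok") (:items . 42))))

    (defvar *server*
      (hunchentoot:start
       (make-instance 'hunchentoot:easy-acceptor :port 8080)))

    ;; from any other node or language:
    ;;   curl http://localhost:8080/status
    ;;   => {"service":"inventory","status":"ok","items":42}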

2. Requirements for Languages


When programmers compare different languages, they usually approach the task from the wrong end: from features rather than from requirements, even though working in the industry should have taught them otherwise. :) If you do look at requirements, they can be split into several groups: the requirements for languages used to solve unknown (research) problems differ substantially from the requirements for languages used to implement long-understood, well-worn tasks. This is the classic R&D dichotomy: research vs development, exploration vs engineering. Most of the problems we face are engineering ones, but the most interesting problems, both professionally and economically, are research ones. I also single out a separate group of requirements for scripting languages.

Requirements for research languages

  • Interactivity (minimal iteration cycle time)

  • Malleability and flexibility (solving the problem rather than fighting the system)

  • Good support for the research domain (if it's math, then at least a Numeric Tower; if statistics, then good matrix support; if trees, then tools for working with trees and first-class recursion; and so on)

  • The ability to solve the problem in the language of the domain (note that specialized research languages are always DSLs)

Requirements for languages for standard tasks

  • Maintainability (the ability to easily hand code over from one developer to another)

  • A developed ecosystem of tools and good platform support

  • Stability and predictability (as a rule, few people like bleeding out on the bleeding edge, so they pick something proven that works simply and without major problems)

  • And, sometimes most important, the ability to get a fast-running result (at times execution speed is what separates a solution that goes to production from one that doesn't)

Requirements for scripting languages

  • First and foremost, good integration with the host system (the language is optimized for the operations most frequently performed in the host system)

  • Simplicity and flexibility (I have yet to hear of a single statically typed scripting language :)

  • Minimal footprint (on the one hand, all the heavy lifting can be done on the host system's side; on the other hand, resources are usually limited and there is very little sense in wasting them on the unnecessary)

One could go on at length about which requirements were at the forefront when particular languages were created, how languages evolve, and so on. I'll limit myself to singling out the group of languages that were unambiguously created purely for research work: Matlab, Octave, Mathematica, R, Prolog and various variations on the theme. The weakness of such languages is usually that general engineering mechanisms weren't built into them (above all, first-class support for interacting with other systems that live outside their "inner" world).

One of the classic approaches to research problems is the two-stage method: develop a prototype of the system in a research language, and then implement the full-fledged system in an engineering language, optimizing speed and other characteristics of the solution without changing its essence. Its most significant drawback, to my mind, is that while this can work reasonably well for a one-off solution, if the system has to evolve over time everything becomes considerably harder. And besides, why do the work twice: it's good when a language can support both the research and the engineering paradigm.

If you represent these requirements as Cartesian coordinates, place the languages on them, and circle the most interesting region, only a handful of languages fall into it: you can count them on one hand. First of all Lisp, also Python, and maybe a couple of others.



3. Choosing a Language for the Task, Not for the Platform


Coming back to our Unix in the clouds, the individual components of such a system can be written in any language (just as on ordinary Unix) and can even run on any platform. This makes it possible to solve specific tasks with whatever tool fits best. And among the whole spectrum of tasks, the prevailing ones are those for which, by and large, it doesn't matter which language you use. For such tasks the main selection criteria are usually the tool ecosystem and, in half of the cases, the execution speed of the result (at least in the long run).

But not all tasks are like that: there are many different areas in which mainstream languages work poorly or practically don't work at all. The well-known aphorism on this point is Greenspun's tenth rule. Accordingly, if you want to use Lisp, Haskell or Factor, use them to solve the tasks where they give an obvious advantage, and make them full citizens of the cloud ecosystem. That way you can prove their usefulness to skeptics in practice rather than in theory, and at the same time both other programmers' knowledge of them and these languages' tooling (where they often lose to their mainstream counterparts) will keep developing. Thus, in the future they will get the chance to be considered as candidates for other tasks as well, the ones where their advantage is not as obvious (2-3 times, rather than an order of magnitude).

P.S. A Few Words about Clojure


In the talk I compared Erlang and Clojure a lot. Both languages emphasize the concurrent programming paradigm. But Erlang stands apart from other functional languages in this respect, because it competes with imperative languages not so much on the level of language quality for solving particular tasks as in its role of a language-platform, one that creates a new paradigm for solving problems beyond the bounds of a single process and a single machine. Thus its main value is its role as a systems language for distributed systems.

As for Clojure, I jokingly divided all languages into fundamental and hipster ones. Fundamental languages (such as C, Lisp, Erlang, Haskell, Smalltalk) appear when groups of smart people work for a long time on hard problems and in the process create not just a language but a whole paradigm. Hipster languages, on the other hand, are usually the product of one person who wants to get the best of several languages at once, here and now: a hybrid. I listed examples such as C++ (C + classes + a pile of other things pulled in from all sides), which is perhaps the archetypal hipster language, Ruby (Perl + Smalltalk + Lisp), and JavaScript (C + Scheme), which since the appearance of V8 and node.js has moved from the scripting-language category to the general-purpose one. Incidentally, in most cases languages of the second kind are more successful in the short run and conquer the world. More precisely, their world domination alternates with the domination of languages that sit somewhere in the middle of this spectrum: when people tire of such hybrids, saner alternatives eventually "defeat" them. Java instead of C++, Python instead of Ruby (for the web); we'll see what comes instead of JS. Unfortunately, Clojure is a striking example of such a hybrid: an attempt to cross Lisp with Haskell, and on a Java foundation at that...

2012-07-25

Lisp Hackers: Luke Gorrie [+ his ECLM video]

Luke Gorrie is a proverbial hacker, following his passion for programming to various places in the physical world, like Sweden or Nepal, and in the programming world, like Erlang, Forth, Lisp, Lua, or some other fringe language. He enjoys the process of exploration and meeting different people while fiddling with computers, from OLPC to telecom equipment, working with world-famous technologists, including Joe Armstrong and Alan Kay, and, generally, doing whatever he likes in the programming world.

He was one of the main authors of SLIME in its early days. And recently he founded a Lisp networking startup, Teclo Networks, whose story (as of fall 2011) he told at ECLM 2011. Here's the video, which he asked to hold off on for some time due to business reasons; only now is it posted online. Enjoy!



Luke's twitter account is @lukego

Tell us something interesting about yourself.

I enjoy exploring the world. I was born and raised in Australia (Brisbane), I've lived for many years in Sweden and become a Swedish citizen, and these days I'm extremely happy to be settling myself into Switzerland. I've spent a couple of years traveling continuously with just my backpack and unicycle and no home anywhere to go back to. I've found this interesting. You can find links to my exploits on my homepage.

I like to feel a bit out of my depth and to be "the new guy" who has a lot of catching up to do. This is why I enjoy learning new programming languages and visiting new programming communities so much: the feeling of having to think really hard about how to formulate even the most basic programs, just like when I was a kid, while other people do it so naturally.


What's your job? Tell us about your company.

I'm currently starting a new project called Snabb. It's too early to say very much about this yet, so I'll talk about the past.

I've had a few major jobs that have shaped my thinking. I worked in each one for about 3-5 years.

The first was Bluetail AB, the first Erlang startup company. I was hired by Joe Armstrong because I had enthusiasm pouring out of my ears and nostrils. I moved immediately to Stockholm in the middle of winter for this job -- I was 21 years old and I'd never left Australia or seen snow before. I learned a lot of things at Bluetail, besides practical matters like how to walk on ice. The programmers there were all way above my level and I was routinely stunned at the stuff they had done, like when Tobbe Törnqvist mentioned in passing that he'd written his own TCP/IP stack from scratch as a hobby project and that his dial-up internet connection uses his own home-brew PPP implementation. I used to roam the corridors at night borrowing tech books from people's shelves, there must have been a thousand books in the office.

Bluetail was bought by Alteon, who were bought by Nortel at the same time, and became a productive little product unit in a big networking company.

Next was Synapse Mobile Networks. This was an amazing experience for me: to switch from a big company to a small company and take care of everything from design to development to deployment to support by ourselves. Really getting to know customers and internalizing their needs and building the right solutions. The product is a device management system, which is a realtime database in a mobile phone network keeping track of which phone everybody is using and making sure all their services work. The whole thing was written in Erlang and BerkeleyDB, even with a from-scratch SS7 telecom networking stack in the end. I would routinely fly to a really interesting country -- Kazakhstan, Jordan, Russia, Algeria, you name it -- and spend a few weeks deploying the system and developing customizations on-site. Synapse was the first company in this market and they are still the market leader today. The system has by now configured approximately 1 billion actual mobile phones by sending them coded SMSes.

Synapse was also instructive in seeing how much a strong personality can shape a company. I'm still in awe of our fearless leader Per Bergqvist. What a guy, as they say on Red Dwarf.

The most recent is Teclo Networks. This is a company that I co-founded with friends from the SBCL community -- mostly drinking buddies from ECLM -- and friends from the telecom world. I served as CTO during the phase of finding the right problem to solve, building our series of prototypes and our real product, and finding our very first customers. I'm a Teclo-alumnus now, not actively working in the company, and: Wow, this was a really intense few years.

Teclo builds network appliances that optimize TCP traffic for cellular internet at about 20Gbps per server. The product speeds up the network, improves consistency, and globally eliminates buffer-bloat. The company is currently moving forward with a lot of momentum: we have Lispers like Juho Snellman, Tobias Rittweiler, and Ties Stuij currently deploying systems in real live networks all over the world. Go Teclo :-)


Do you use Lisp at work? If yes, how have you made it happen? If not, why?

I've mostly done Lisp hacking for my own pleasure on the side, but I've also used it quite a bit at work.

The first time was when I used Scheme at Bluetail. Specifically, I used Kawa to extend a Java application. I'm sure this raised some eyebrows amongst my colleagues but frankly I was having too much fun to really notice. Kawa is a great piece of software.

I wrote a bunch of Common Lisp networking code while reading books like TCP/IP Illustrated. This was a hobby project called Slitch and Netkit. I used this slightly in my work at Nortel to replicate a DoS against the Linux kernel that was crashing some of our appliances.

Teclo is very much a Lisp-hacker shop. In the first year or so we used Lisp for absolutely everything. We've written and deployed a prototype TCP/IP implementation written in Common Lisp (SBCL), wrote network analysis tools for cross-referencing and visualizing traffic captures (often working on traces that fill up a whole disk i.e. hundreds of gigabytes), and developed all of the operation-and-maintenance infrastructure in CL. These days Teclo uses C/C++ for the main traffic handling, R for statistical analysis, and Common Lisp for the rest -- mostly operation and maintenance and data visualization.


Among the software projects you've participated in, what's your favorite?

SLIME! This was wild fun. The project started when Eric Marsden, a CMUCL hacker, posted to #lisp his weekend hack to annotate individual S-expressions in an Emacs buffer with CMUCL compiler messages. His source files were slim.el and skank.lisp, after the Fatboy Slim song he was listening to at the time. I loved the idea so I started toying with it, renaming the files to slime.el and swank.lisp to make them easy to diff. I quickly posted my version to the cmucl-dev mailing list where Helmut Eller jumped right into it too. We created a Sourceforge project and quickly snowballed from there to over 100 contributors.

SLIME's mission was ultimately to replace ILISP as the standard Emacs mode for interacting with Common Lisp. ILISP worked by sending S-expressions between Emacs and Lisp via standard I/O and it had the unfortunate habit of getting stuck a lot. ILISP was also about 15 years old at that time (SLIME is catching up now at 8 years!) and really hard to work on due to the heavy use of reader conditionals in the source code, like #+cmucl #-lucid #+sbcl #-lispworks #+acl #-acl4.2 and so on.

I really enjoyed the feeling of working on a growing and thriving open-source project. Seeing the steady stream of new names appearing on our mailing list, getting patches from dozens of people, feeling good about positive feedback we received, working hard on negative feedback right away, and so on. Twas a really productive flow.

I think that writing development tools is also really satisfying because your users are your peer group: people you really look up to and respect and sometimes drink beer with. It's a great feeling to build stuff that they like to use.

Helmut Eller and I also had a really great working style. Very often I'd do some late-night hacking and check in a bunch of new functionality, which Helmut would then read through and think about and then thoughtfully rewrite in a simpler way. Then I'd see what he'd done, think about it, and rewrite it to be simpler again. And so on. This was a really pleasant and productive way of working. He is also an absolute magician when it comes to suddenly checking in what seems like a month worth of work that he did over the weekend. (Juho Snellman has this uncanny ability too.)

I hacked on SLIME from the beginning and up to version 1.0. Then I was engulfed by an Erlang startup. Here're some posts that I fished out of the mailing list archives to give a sense of the early days:


What brought you to Lisp? What holds you?

My friend Darius Bacon brought me to Lisp (Scheme) by gently and persistently singing the praises of Abelson & Sussman back in the days when I was a teenager hacking Java. This book was a revelation for me: particularly the Digital Circuit Simulator. So I was a Scheme-lover for several years, but Darius also gently and persistently sang the praises of Norvig's Paradigms of AI Programming, which was another revelation to me, and made a Common Lisp convert of me.

What holds me to Lisp is the people. I love hanging out with Lisp hackers: I find that we're an unusually diverse community. How often do you attend a small conference where attendees are building nuclear defense systems, running intensive care wards, designing aeroplane engines, analysing Lute tablature, developing cancer drugs, writing FIFA's legal contracts, and designing their own microchips? Surely this describes few tech events other than Arthur & Edi's European Common Lisp Meeting :-).


What do you dislike the most about Lisp?

I have a few stock rants that I'm tempted to rattle off -- fragmentation, threads, GC-phobia -- but honestly I doubt they are applicable these days. There have been so many improvements to SBCL and great developments like Quicklisp and so on.

I can say I'm personally disappointed about the missed opportunity for us to write Teclo's production traffic path in SBCL. Ultimately, the software has just a few microseconds' budget to process each packet, and can never spike latency by more than a millisecond. I don't know how to deliver that kind of performance in a high-level language like Lisp. So we fell back to C.

I'd also like to have embedded SBCL in the C program to take care of high-level work like slurping in configurations and spitting out statistics. But the SBCL runtime system is a bit heavyweight to make that practical. It gets in the way when you want to debug with strace, gdb, etc. So we wrote this stuff in C++ instead.

I'd have written a lot more Lisp code in recent years if I'd found good solutions to those problems. But at the end of the day it is C's niche to write lots of tiny state machines with extremely predictable performance characteristics, so I'm not especially shocked or disheartened. A language can't occupy every niche at once :-).


Describe your workflow, give some productivity tips to fellow programmers.

I'm an incrementalist. I like to start from minimal running code, like (defun program () (print "Program 0.1")), and move forward in a long series of very small steps. I tend to choose designs that I know are too simple or too slow, because I enjoy the feeling of hitting their limits and knowing that I didn't prematurely generalize or optimize them. Then I fix them. I'd say that I'm much influenced by watching the development of Extreme Programming on WardsWiki in the 90s.

This isn't a hard and fast rule though. In Teclo I once spent a whole month writing a complex program without even trying to compile it once. This was a rewrite of the main traffic engine in C after having written a prototype in Lisp previously. This was a really fun way to work actually. I produced a tremendous amount of bugs in this style though and I wasn't smart enough to fix them. Christophe Rhodes did that part -- don't ask me how :-).

I do have a tip for getting into "the flow". It's a simple one: make a little TODO list of some features you want to hack, and take the laptop into the park away from the internet for an hour or two until they're done. This works every time for me.

Oh, and I highly recommend printing out and reading your programs. This is the best way that I know of for improving their quality. This is why I'm a bit picky about things like the 80-column rule and the layout of functions within a file. I want to be able to print programs out and read them on paper from top to bottom. I wrote a program called pbook to help with this -- it's not a very good implementation though, with all those regexps, so I'd love it if someone would make a much simpler one.


You have played around with so many languages, like Erlang, Lisp, Smalltalk & Lua. If you were to design your own, what would it look like?

That's really hard to imagine. I'd have to find a reason that I needed a new programming language, and the details would probably follow from the problem I needed to solve.

I learn new languages mostly because I enjoy meeting new people and learning new ways of thinking. It's very seldom from any sense of dissatisfaction with previous languages I've used. My favourite languages are Common Lisp, Emacs Lisp, Forth, Erlang, Smalltalk, and C. So the best I can say is: those are the languages that I'd like to have designed.


P.S. Luke asked to give my warm thanks to John Q. Splittist for reading over these answers before I sent them to you (and for many other things!).

2012-07-17

Book Review: Beautiful Code



It took me probably more than a year to read Beautiful Code (almost from cover to cover). Because of its volume, list of authors, and coverage of languages and topics, this book can absolutely be considered one of the bibles of modern programming. I've learned a lot from it and can recommend it to any programmer: both the novice - to get the general picture of what programming spans and what is important in reality - and the experienced one - to study previously uncharted corners and maybe rethink some of the underpinnings of a personal programming philosophy.

Surely, not all the essays were of the same quality - on the contrary, it ranged from rather poor and dull, although still mostly informative, to insightful and even fascinating. The book features renowned authors - programming language designers and writers of famous books - as well as less well-known but at times even more interesting people from various scientific and engineering communities. Languages ranging from FORTRAN to Lisp and Ruby are covered, and most of the topics are viewed, so to say, from two angles: most of the essays are paired either by language, or by topic, or both. Not surprisingly, almost half of the essays feature at least some code in C, as it was and remains the lingua franca of computing.

Distribution of languages

Each author was given a rather vague task: to describe the most beautiful code they had ever written or seen. Some interpreted it literally and simply presented and analysed an algorithm or a hack. Others told the story of a whole system, how it evolved and became beautiful - these stories were surely the most interesting part of the book for me, with two favorites. The first is the story of Cryptonite, a PGP-enabled, Perl-based mail service, which could almost feature in a collection of cyberpunk novels; the second is "The Most Beautiful Code I Never Wrote" by Jon Bentley, the author of Programming Pearls, in which he presents an analysis of the binary search algorithm that helps to understand many aspects of this fundamental programming tool. Yet some people, surprisingly including such famous programmers as Doug Crockford and Tim Bray, just used the opportunity to describe an idea they had found interesting, sometimes even without explaining why...

Personally for me the highlights of the book were:
  • how extensibility and unification played the ultimate role in most of the stories about system beauty
  • the proliferation of callback techniques on the one hand, and their rather poor support in most languages on the other
  • and the articles emphasizing different aspects of the programming craft, like Beautiful Debugging and Code in Motion (about code formatting)

I also used the chance to dive deeper into Perl and get a glimpse of FORTRAN, as well as to learn about a lot of low-level optimization techniques and considerations.

So what's beautiful in code? I've tried to summarize the views of all the authors (33 articles don't represent a statistically meaningful sample, but still :)

Unsurprisingly, simplicity & elegance are emphasized a lot (6 times), and their backers include Brian Kernighan, Yukihiro (Matz) Matsumoto and Jon Bentley. They are matched, or even slightly overtaken in the number of proponents, by clear interface/API abstraction and code structure/organization (i.e. proper OO or functional design). Flexibility comes next, with 3 people championing it. Only 2 people root for modularity, or, as the Unix philosophy goes, "small pieces loosely joined", but their articles are very good and insightful: Simon Peyton Jones on STM in Haskell and Greg Kroah-Hartman on the Linux kernel driver model. Then comes a long tail of qualities, like frugality (not being wasteful), staying close to the low level, balance, and visual readability. Some authors didn't emphasize any single aspect of beauty, but wrote about an area they thought may be beautiful. These include: search, code generation, tests, debugging, top-down operator precedence parsing, hygienic macro support and math.

P.S. And here's an alternative review by Jeff Atwood, based on the same data, but with different conclusions, which boil down to this: code per se can't be beautiful. He quotes another opinion:

The chapter written by Yukihiro Matsumoto, the creator of Ruby, was the most impressive standout. It is three pages in which he simply writes about what he believes beautiful code is. He explains his understanding of beautiful code to you. This is what the book should be!


While provoking interesting thoughts and discussion, those three pages weren't enough: they were quite shallow and didn't bring much useful evidence to the table. The best chapters did both - provoked thought and backed it with evidence - and there are quite a few of them in the book.

2012-07-12

Alan Kay about keeping history in CS

So true, and it explains very well my motives behind "Lisp Hackers":

Binstock: You once referred to computing as pop culture.

Kay: It is. Complete pop culture. I'm not against pop culture. Developed music, for instance, needs a pop culture. There's a tendency to over-develop. Brahms and Dvorak needed gypsy music badly by the end of the 19th century. The big problem with our culture is that it's being dominated, because the electronic media we have is so much better suited for transmitting pop-culture content than it is for high-culture content. I consider jazz to be a developed part of high culture. Anything that's been worked on and developed and you [can] go to the next couple levels.

Binstock: One thing about jazz aficionados is that they take deep pleasure in knowing the history of jazz.

Kay: Yes! Classical music is like that, too. But pop culture holds a disdain for history. Pop culture is all about identity and feeling like you're participating. It has nothing to do with cooperation, the past or the future — it's living in the present. I think the same is true of most people who write code for money. They have no idea where [their culture came from] — and the Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs.

-- Dr. Dobb's, «Interview with Alan Kay»

2012-07-01

Lisp Hackers: Peter Seibel

Peter Seibel has probably helped more people (including me) discover Lisp and become its users than anyone else in the last decade, with his Practical Common Lisp. Dan Weinreb, one of the founders of Symbolics and later Chief Architect at ITA Software, a successful Lisp startup sold to Google for around $1B in 2011, wrote that their method of building a Lisp team was to hire good developers and give them PCL for two weeks, after which they could successfully integrate under the mentorship of their senior Lisp people.

A few years after PCL, Peter went on to write another fantastic programming book, Coders at Work - here's my summary of it with the social network of Coders :)



Aside from being a writer, he was and remains a polyglot programmer, interested in various aspects of our trade, about which he blogs occasionally. His code, presented in PCL, laid the foundation for the widely used CL-FAD library, which deals with filenames and directories (as the name implies), and more recently he created a Lisp documentation browser, Manifest. Before Lisp, Peter had worked a lot on the WebLogic Java application server.

He's also active on twitter: @peterseibel

Tell us something interesting about yourself.

I’m a second generation Lisp programmer. My dad discovered Lisp when he was working at Merck in the 80s and ended up doing a big project to simulate a chemical plant in Lisp, taking over from some folks who had already been trying for quite a while using Fortran, and saving the day. Later he went to Bolt Beranek and Newman where he did more Lisp. So I grew up hearing about how great Lisp was and even getting to play around with some graphics programs on a Symbolics Lisp Machine.

I was also a childhood shareholder in Symbolics—I had a little money from some savings account that we had to close when we moved so my parents decided I should try investing. I bought Symbolics because my parents just had. Never saw that money again. As a result, for most of my life I thought my parents were these naive, clueless investors. Later I discovered that around that time they had also invested in Microsoft which, needless to say, they did okay with.

Oh, and something I learned recently: not only was Donald Knuth one of the subjects in my book Coders at Work, but he has read the whole thing himself and liked it. That makes me happy.


What's your job? Tell us about your organization.

A few months ago I started working half-time at Etsy. Etsy is a giant online marketplace for people selling handmade and vintage items and also craft supplies. I’m in the data group where we try to find clever ways to use data to improve the web site and the rest of the business.


Do you use Lisp at work? If yes, how you've made it happen? If not, why?

I always have a SLIME session going in Emacs for quick computations and sometimes I prototype things in Lisp or write code to experiment with different ideas. However, these days I’m as likely to do those things in Python because I can show my co-workers a sketch written in Python and expect them to understand it and I’m not sure I could do that with Lisp. But it makes me sad how slow CPython is compared to a native-compiling CL like SBCL. Usually that doesn’t matter but it is annoying sometimes, mostly because Python has no real excuse. The rest of my work is in some unholy mishmash of Scala, Ruby, Javascript, and PHP.


What brought you to Lisp? What holds you?

As I mentioned, I grew up hearing from my dad about this great language. I actually spent a lot of my early career trying to understand why Lisp wasn’t used more and exploring other languages pretty deeply to see how they were like and unlike Lisp. I played around with Lisp off and on until finally in 2003 I quit the startup I had been at for three years, which wasn’t going anywhere, with a plan to take a year off and really learn Common Lisp. Instead I ended up taking two years off and writing Practical Common Lisp.

At this point I use it for things when it makes sense to do so because I know it pretty well and most of my other language chops are kind of rusty. Though I’m sure my CL chops are rusty too, compared to when I had just finished PCL.


Did you ever develop a theory why Lisp isn’t used more?

Not one that is useful in the sense of helping it to be used more today. Mostly it seems to me to be the result of a series of historical accidents. You could argue that Lisp was too powerful too early and then got disrupted, in the Innovator’s Dilemma sense, by various Worse is Better languages, running on systems that eventually became dominant for perhaps unrelated reasons.

Every Lisper should read The UNIX-HATERS Handbook to better understand the relation between the Lisp and Unix cultures—Lisp is the older culture and back when the UNIX-HATERS Handbook was written, Unix machines were flaky and underpowered and held in the same contempt by Lisp geeks as Windows NT machines would be held by Unix geeks a few decades later. But for a variety of reasons people kept working on Unix and it got better.

And then it was in a better position than the Lisp culture to influence the way personal computing developed once micro computers arrived—while it would be a while before PCs were powerful enough to run a Unix-like OS, early on C was around to be adopted by PC programmers (including at Microsoft) once micros got powerful enough to not have to program everything in assembly. And from there, making things more Unix-like seemed like a good goal. Of course it would have been entirely possible to write a Lisp for even the earliest PCs that probably would have been as performant as the earliest Lisps running on IBM 704s and PDP-1s. My dad, back from his Lisp course at Symbolics, wrote a Lisp in BASIC on our original IBM PC. But by that point Lispers’ idea of Lisp was what ran on powerful Lisp machines, not something that could have run on a PDP-1.

The AI boom and bust played its role as well because after the bust Lisp’s reputation was so tainted by its failure to deliver on the over-promises of the Lisp/AI companies that even many AI researchers disassociated themselves from it. And throughout the 90s various languages adopted some of Lisp’s dynamic features, so folks who gravitated to that style of programming had somewhere else to go and then when the web sprang into prominence, those languages were well positioned to become the glue of the Internet.

That all said, I’m heartened that Lisp continues to not only be used but to attract new programmers. I don’t know if there will ever be a big Lisp revival that brings Lisp back into the mainstream. But even if there were, I’m pretty sure that there would be plenty of old-school Lispers who’d still be dissatisfied with how the revival turned out.


What's the most exciting use of Lisp you had?

I’m pretty proud of the tool chain I’ve built over the years while writing my two books and editing the magazine I tried to start, Code Quarterly. When I first started working on Practical Common Lisp I had some Perl scripts that I used to convert an ad-hoc light-weight text markup language into HTML. But after a little while of that I realized both that Jamie Zawinski was right about regexps and that of course I should be using Lisp if I was writing a book called Practical Common Lisp.

So I implemented a proper parser for a mostly-plain-text language that I uncreatively call Markup and backends that could generate HTML and PDF using cl-typesetting. When I was done writing and Apress wanted me to turn in Word files, I wrote an RTF backend so I could generate RTF files with all the Apress styles applied correctly. An Apress project manager later exclaimed over how “clean” the Word files I had turned in had been. For editing Code Quarterly I continued to use Markup and wrote a prose diff tool that is pretty smart about when chunks of text get moved and edited a little bit.


What you dislike the most about Lisp?

I don’t know if “dislike” is the right term because the alternative has its own drawbacks. But I do sometimes miss the security of refactoring with more static checks. For instance, when I programmed in Java, there was nothing better than the feeling of knowing a method was private and therefore I didn’t have to look anywhere but in the one file where the method lived to see everywhere it could possibly be used. And in Common Lisp the possibilities for action at a distance are even worse than in some other dynamic languages because of the loose relation between symbols and the things they name. In practice that’s not actually a huge problem and some implementations provide package locks and so on, but it always makes me feel a bit uneasy to know that if I :use a package and then DEFUN a function with the name of an inherited symbol I’ve changed some code I really didn’t mean to.
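
A minimal sketch of that action-at-a-distance problem (hypothetical package names, not Peter's code): APP uses LIB, so the symbol FROB read in APP is the very same symbol LIB:FROB, and a DEFUN in APP silently redefines the library's function.

  (defpackage :lib
    (:use :cl)
    (:export #:frob))
  (in-package :lib)
  (defun frob (x) (* x 2))    ; the library's definition

  (defpackage :app
    (:use :cl :lib))          ; APP inherits LIB:FROB
  (in-package :app)
  (defun frob (x) (+ x 1))    ; redefines LIB:FROB for every other user of LIB

Package locks, where an implementation provides them, can turn this into an error instead of a silent redefinition.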

From time to time I imagine a language that lets you write constraints on your code in the language yourself—kind of like macros but instead of extending the syntax your compiler understands, they would allow you to extend the set of things you could say about your code that the compiler would then understand. So you could say things like, “this function can only be called from other functions in this file” but also anything else about the static structure of your code. I’m not sure exactly what the API for saying those things would look like but I can imagine it being pretty useful, especially in larger projects with lots of programmers: you could establish certain rules about the overall structure of the system and have the compiler enforce them for you. But then if you want to do a big refactoring you could comment out various rules and move code around just like in a fully dynamic language. That’s just a crazy idea; anyone who’s crazy in the same way should feel free to take it and run with it and see if they get anywhere.


Among software projects you've participated in what's your favorite?

Probably my favorite software I ever wrote was a genetic algorithm I wrote in the two weeks before I started at Weblogic in 1998, in order to build up my Java chops. It played Go and eventually got to the point where it could beat a random player on a 5x5 board pretty much 100% of the time. One of these days I need to rewrite that system in Common Lisp and see if I can work up to a full-size board and tougher opponents than random. (During evolution the critters played against each other to get a Red Queen effect—I just played them against a random player to see how they were doing.)


Describe your workflow, give some productivity tips to fellow programmers

I’m not sure I’m so productive I should be giving anybody tips. When I’m writing new code I tend to work bottom up, building little bits that I can be confident in and then combining them. This is obviously easy to do in a pretty informal way in Common Lisp. In other languages unit tests can be useful if you’re writing a bigger system, though I’m often working on things for myself that are small enough that I can get away with testing less formally. (I’m hopeful that something like Light Table will combine the ease of informal testing with the assurances of more strict testing—I’d love to have a development environment that keeps track of what tests go with what production code and shows them together and runs the appropriate tests automatically when I change the code.)

When I’m trying to understand someone else’s code I tend to find the best way is to refactor or even rewrite it. I start by just formatting it to be the way I like. Then I start changing names that seem unclear or poorly chosen. And then I start mucking with the structure. There’s nothing I like better than discovering a big chunk of dead code I can delete and not have to worry about understanding. Usually when I’m done with that I not only have a piece of code that I think is much better but I also can understand the original. That actually happened recently when I took Edi Weitz’s Hunchentoot web server and started stripping it down to create Toot (a basic web server) and Whistle (a more user friendly server built on top of Toot). In that case I also discarded the need for backward compatibility which allowed me to throw out lots of code. In that case I wasn’t going for a “better” piece of code so much as one that met my specific needs better.


If you had all the time in the world for a Lisp project, what would it be?

I should really get back to hacking on Toot and Whistle. I tried to structure things so that all the Hunchentoot functionality could be put back in a layer built on top of Toot—perhaps I should do that just to test whether my theory was right. On the other hand, I went down this path because the whole Hunchentoot API was too hard for me to understand. So maybe I should be getting Toot and Whistle stable and well-documented enough that someone else can take on the task of providing a Hunchentoot compatibility layer.

I’d also like to play around with my Go playing critters, reimplementing them in Lisp where I could take advantage of having a to-machine-code compiler available at run time.


PCL was the book that opened the world of Lisp to me. I’ve also greatly enjoyed Coders at Work. So I’m looking forward to the next book you’d like to write. What would it be? :)

My current theory is that I’m going to write a book about statistics for programmers. Whenever I’ve tried to learn about statistics (which I’ve had to do, in earnest, for my new job) I find an impedance mismatch between the way I think and the way statisticians like to explain stuff. But I think if I was writing for programmers, then there are ways I could explain statistics that would be very clear to them at least. And I think there are lots of programmers who’d like to understand statistics better and may have had difficulties similar to mine.


Discussion on Hacker News

2012-06-25

Lisp Hackers: Juan José García Ripoll

Juan José García Ripoll is a physicist and the principal developer of Embeddable Common Lisp (ECL), an implementation that compiles through C instead of directly to native code and packages its runtime as a shared library, so that Lisp programs can be shipped as ordinary executables or libraries and embedded in other programs. He also actively contributes to the improvement of ASDF, the foundational system definition facility (see one of his posts on the topic).




Juanjo is a representative of a large group of Lisp users: researchers and scientists who are not professional programmers. Although Lisp is not generally considered on par with specialized research environments like Matlab or R, it is quite useful in this field because of its solid mathematical foundation, coupled with a truly interactive and robust environment.

Tell us something interesting about yourself.

I do not consider myself to be an interesting character or have interesting hobbies, but my job is quite interesting and unconventional: I work on Quantum Computation, developing new ways of computing using, for instance, superconducting circuits, or arrays of atoms. Actually I find that the fact that human technology has progressed to the point that we can trap and harness individual atoms is pretty cool.


What's your job? Tell us about your organization.

I am a physicist and work as a scientist for CSIC, the Spanish Research Council. It is the largest research institution in Spain and my group works on all kinds of exotic things related to the implementation of quantum computers.


Do you use Lisp at work? If yes, how you've made it happen? If not, why?

From my description above it might seem that it would be hard to use Lisp for anything, but quite the contrary: 50% of our time is spent doing simulations and programming. Quite interestingly, the programs that we develop are very short-lived. We have a question, write a simulation, get the data, move on. This is the reason why most people in our field use Matlab. I reached Lisp looking for an interactive programming language that was more powerful than Matlab in terms of expressiveness, extensibility and optimization. That is how I started working on ECL, with the aim of empowering it for numerical simulations. Along the way I got distracted and became more interested in the field of language development.

Despite this I have found room for Lisp in my daily routine, from scripting all the way up to solving actual problems. For instance, in one of our research topics we had to solve huge instances of 3-SAT problems using a very basic algorithm. The fact that Common Lisp comes with bignums reduced the development time to one afternoon and the program we produced was faster than the C++ prototypes we had.
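
For illustration only - this is my sketch of why built-in bignums are convenient for that kind of brute-force search, not necessarily the algorithm used at CSIC - a truth assignment over N variables can simply be the bits of one integer, and nothing changes when N grows past a machine word:

  (defun clause-satisfied-p (clause assignment)
    "CLAUSE is a list of literals, e.g. (1 -3 7); ASSIGNMENT is an integer
  whose bit K gives the value of variable K+1."
    (some (lambda (literal)
            (let ((value (logbitp (1- (abs literal)) assignment)))
              (if (plusp literal) value (not value))))
          clause))

  (defun brute-force-sat (clauses n-vars)
    "Return a satisfying assignment encoded as an integer, or NIL."
    (loop for assignment below (expt 2 n-vars)  ; a bignum bound for large N-VARS
          when (every (lambda (clause) (clause-satisfied-p clause assignment))
                      clauses)
            return assignment))

  ;; (brute-force-sat '((1 2) (-1 3) (-2 -3)) 3) => 2, i.e. only variable 2 is true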


Among software projects you've participated in what's your favorite?

My favorite is ECL, but my second favorite was developing for an extinct operating system, OS/2. Back when I was at the university, OS/2 was still alive and I got some interesting projects finished, such as an X-Windows server and the port of Doom to it. It was pretty cool to learn multithreading and GUI development back then and OS/2 was pretty well thought out.


What brought you to Lisp? What holds you?

My job brought me to it, and what keeps me is the possibility of learning a lot about language implementations, compilers and making a tool that is useful for a larger community, including other projects such as Sage. It is a weak link though, and sometimes personal interactions make me question whether this is useful at all.


What's the most exciting use of Lisp you had?

ECL. Really. After one year of development it was awesome to see ECL being able to build and compile itself from scratch for the first time. Also, every time I finish yet another feature, it feels really good. And finally, developing a Common Lisp implementation is a great excuse to learn a lot of things, from the subtleties of the POSIX specification, to the Unicode collation algorithms.


What you dislike the most about Lisp?

The unspecified corners. Multithreading was not in the specification, and implementations have evolved in somewhat random ways, with some common denominator and features (process-kill) which are not really well suited to existing operating systems and are a nightmare to implement. I also miss a better specification of physical pathnames.


Describe your workflow, give some productivity tips to fellow programmers

Sorry, I am a terribly disorganized developer and ECL users suffer from it. No advice here, hehe.


You are a physicist by main occupation, yet you manage to almost single-handedly support ECL. Why? And how?

Everybody needs some projects in life other than work, and this one is a lifelong companion that I would like to see through to the end. I had a lot of help from freelance programmers at the beginning, and users help with patches, bug reports, and a lot of patience with my mistakes and absences. That helps.

That does not mean I consider the current status optimal. I consider it a personal failure that ECL does not have a larger community and a better support base. The wrong impression still persists that ECL is a lesser implementation, covering a niche (embeddability) and inferior to other implementations which are more lispy in nature. That is not the case: we have sufficiently proven that the underlying core, C, does not prevent any feature from being implemented, performance is improving, and stability, well, it is not always optimal, but that has to be blamed on the small size of the developer team, which has scarce resources for testing the code.

2012-04-16

Lisp Hackers: Pascal Costanza

Pascal Costanza is a researcher, and an active Common Lisp programmer and community enthusiast:


  • he's the maintainer of the Closer to MOP library, which provides a common facade to the MOP implementations in different Lisps and is the basis of some of his more advanced libraries, like ContextL and Filtered Functions;

  • the originator of the Common Lisp Document Repository (CDR) project, which collects proposals for improving the language (a la JCP for Java or PEP for Python);

  • and the author of a Highly Opinionated Guide to Lisp, which can serve as an introductory text for those who come from other languages. (It was quite a useful text for me when I started studying Lisp.)


In the interview Pascal shares a lot of insight into his main topic of interest — programming language design — grounded in his experience with Lisp, Java, C++ and other languages.

Tell us something interesting about yourself.

I share a birthday with Sylvester Stallone and George W. Bush. I have been a DJ for goth and industrial music in the Bonn/Cologne area in Germany in the past. I once played a gay Roman emperor in a comedic theatre play. I played a few live shows with a band called "Donner über Bonn" ("Thunder over Bonn"). The first rock concert I ever attended was Propaganda in Cologne in 1985. The first programming language I really liked was Oberon. I often try to hide pop culture references in my scientific work, and I wonder if anybody ever notices. My first 7" single was "Major Tom" by Peter Schilling, my first 12" single was "IOU" by Freeez, my first vinyl album was "Die Mensch-Maschine" by Kraftwerk, and my first CD album was "Slave to the Rhythm" by Grace Jones. I don't remember what my first CD single was.


What's your job? Tell us about your company.

I currently work for Intel, a company whose primary focus is on producing CPUs, but that also does business in a lot of other hardware and software areas. (Unfortunately, Intel's legal department requires me to mention that the views expressed in this interview are my own, and not those of my employer.)

I work on a project that focuses on exascale computing, that is, high-performance computers with millions of cores that will be on the market by the end of the decade, if everything goes well. I am particularly involved in developing a scheduler for parallel programs that can survive hardware failures, which, due to the enormous scale of such machines, cannot be handled by hardware alone anymore but also need to be dealt with at the software level. The scheduler is based on Charlotte Herzeel's PhD thesis, and you can find more information about it in a paper about her work and at http://www.exascience.com/cobra/.


Do you use Lisp at work? If yes, how you've made it happen? If not, why?

At Intel, I do all software prototyping in Lisp. The scheduler I mentioned above is completely developed and tested in Lisp, before we port it to C++, so that other people in the same project and outside can use it as well. It didn't require a major effort to convince anybody to do this in Lisp. It is actually quite common in the high-performance computing world that solutions are first prototyped in a more dynamic and flexible language, before they are ported to what is considered a "production" language. Other languages that are used in our project are, for example, MATLAB, Python and Lua. (Convincing people to use Lisp beyond prototyping would probably be much harder, though.)

The implementation we use for prototyping is LispWorks, which is really excellent. It provides a really complete, well-designed and efficient API for parallel programming, which turns LispWorks into one of the best systems for parallel programming of any language, not just in the Lisp world. The only other system that is more complete that I am aware of is Intel's Threading Building Blocks for C++.


What brought you to Lisp? What holds you?

I participated in one of the first Feyerabend workshops, organized by Richard Gabriel, one of the main drivers behind the original Common Lisp effort. I had also read his book Patterns of Software around that time. Later we had a small discussion on the patterns discussion mailing list. He tried to promote Lisp as a language that has the "quality without a name", and I made some cursory remarks about Lisp's unnecessarily complicated syntax, just like anybody else who doesn't get it yet.

To me, the most important comment he made in that discussion was: "True, only the creatively intelligent can prosper in the Lisp world." The arrogance I perceived in that comment annoyed me so much that it made me want to learn Lisp seriously, just to prove him wrong and show him that Lisp is not as great as he thought it is. As they say, the rest is history.

I actually dabbled a little bit in Lisp much earlier, trying out a dialect called XLisp on an Atari XL computer at the end of the 80's. Unfortunately, it took too long to start up XLisp, and there was not enough RAM left to do anything interesting beyond toy examples, plus I was probably not smart enough yet to really get it. I was just generally curious about programming languages. For example, I also remember trying out some Prolog dialect on my Atari XL.

In the end, Lisp won me over because it turns out that it is the mother of all languages. You can bend it and turn it into whatever language you want, from the most flexible and reflective interpreted scripting language to the most efficient and static compiled production system. For example, the scheduler mentioned above easily gets in the range of equivalent C/C++-based schedulers (like Cilk+, TBB, or OpenMP, for example), typically only a factor of 1.5 away for typical benchmarks, sometimes even better. On the other hand, ContextL uses the reflective features of the CLOS Metaobject Protocol to bend the generic function dispatch in really extreme ways. I am not aware of any other programming language that covers such a broad spectrum of potential uses.


What's the most exciting use of Lisp you had?

When I decided to make a serious attempt at learning Common Lisp, I was looking for a project that would be large enough to prove to myself that it is actually possible to use it for serious projects, but that would also be manageable in a reasonable amount of time. At that time, I was intimately familiar with the Java Virtual Machine architecture, because I had developed compilers for Java language extensions as part of my Diploma and PhD theses. So I decided to implement a Java Virtual Machine in Common Lisp - under normal circumstances, I wouldn't have dared to do this, because this is quite a complex undertaking, but I had read in several places that Lisp would be suitable for projects that you would normally not dare to do otherwise, so I thought I would give it a try. Over the course of 8 weeks, with something like 2 hours per day, or so (because I was still doing other stuff during the day), I was able to get a first prototype that would execute a simple "Hello, World!" program. On top of that, it was a portable (!) just-in-time compiler: It loaded the bytecode from a classfile, translated it into s-expressions that resemble the bytecodes, and then just called Common Lisp's compile function to compile those s-expressions, relying on macro and function definitions for realizing these "bytecodes as s-expressions." I was really impressed that this was all so easy to do.
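
A minimal sketch of the "bytecodes as s-expressions" idea (my reconstruction, not Pascal's actual code): each bytecode is a macro over an explicit operand stack, and the translated method body is handed straight to COMPILE, which is what makes the result a portable just-in-time compiler.

  ;; Three toy "bytecodes" as macros over a lexical operand stack.
  (defmacro iconst (n) `(push ,n stack))
  (defmacro iadd () `(push (+ (pop stack) (pop stack)) stack))
  (defmacro ireturn () `(return-from method (pop stack)))

  (defun jit-compile-method (bytecode-sexps)
    "Turn a list of bytecode s-expressions into a compiled Lisp function."
    (compile nil `(lambda ()
                    (let ((stack '()))
                      (block method ,@bytecode-sexps)))))

  ;; (funcall (jit-compile-method '((iconst 1) (iconst 2) (iadd) (ireturn)))) => 3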

The real moment of revelation was this: to make sure to reuse as many of the built-in Common Lisp features as possible, I actually translated Java classes into CLOS classes, and Java methods into CLOS methods. Java's super calls posed a problem, because it was not straightforward to handle super calls with plain call-next-method calls. Then I discovered user-defined method combinations, which seemed like the right way to solve this issue, but I was still stuck for a while. Until I discovered that moving a backquote and a corresponding unquote around actually finally fixed everything. That was a true Eureka moment: In every other programming language that I am aware of, all the problems I encountered until that stage would have required a dozen redesigns, and several attempts to start the implementation completely from scratch, until I would have found the right way to get everything in the right places. But Common Lisp is so flexible that at every stage in your development, you can tweak and twist things left and right, but in the end you still get a convincing, clean, and efficient design. As far as I can tell, this is not possible in any other language (not even Scheme).


What you dislike the most about Lisp?

There is not much to dislike about Lisp itself. There are some technical details here and there, some minor inconsistencies, but nothing that cannot be fixed in easy and straightforward ways. From a purely conceptual point of view, Common Lisp is one of the most complete and best-integrated programming languages, if not the most, and it covers a lot of ground. Some rough edges are just to be expected, because nothing is ever perfect.

What concerns me a lot more is that there is too much unwarranted arrogance in the Lisp community. I don't know exactly where this comes from, but some Lispers seem to believe, just because they understand some Lisp concepts, that they are much smarter than anybody else in the universe. What they are forgetting is that computer science as a discipline is very, very young. In one or two hundred years from now, Lisp concepts will be common knowledge, just like elementary algebra. There is no reason to be arrogant just because you know that the earth is round, even if most other people still believe that it is flat.


Describe your workflow, give some productivity tips to fellow programmers.

I strongly believe that the one thing that made me most productive as a programmer is my interest in doing some form of art. I used to spend a lot of time making my own music, both by myself with synthesizers and computers, as well as in bands. I also was an actor in an amateur theater group. Art gives you a sense of making parts (notes, chords, melodies, rhythms, or acts, characters, plot lines) relate to each other and form a coherent whole. It also makes you aware that there is an inner view on a piece of music or a play, as seen by the artist, but also an outer view, as seen or heard by an audience, and if you want to make good art, you need to be able to build a bridge between those two parts.

Programming is exactly the same: you need to make parts (functions, data structures, algorithms) relate to each other, and you need to bridge the inner view (as seen by the designer and implementer) and the outer view (as seen by the user of a library or the end user of the final software).

The important aspect here is that you need to be able to change perspectives a lot, and shift between the local, detailed view, the global, architectural view, and the many different levels of a layered design. This is especially important when it comes to designs that incorporate meta-programming techniques and reflective approaches, but also already for simpler designs.

Like in art, the concrete workflow and the concrete tools that work best vary a lot for different people. It's also a good idea to just play around with ideas, expecting that most of them will turn out useless and need to be thrown away. Artists do this all the time. Artificial, seemingly nonsensical rules and restrictions can be especially enlightening (use only effect-free functions; use only functions that receive exactly one argument, not more, not less; make every function pass around an additional environment; use only classes with exactly two slots; etc., etc.), and then try to build larger programs strictly following such rules - this will make your mind a lot more flexible and train you to see new potential solutions that you wouldn't see otherwise.

(I actually believe that this is what makes fans of static typing so excited: Static type systems always impose some artificial restrictions on your programs, and enforce them to the extent that programs that violate these rules are rejected. If you then program in such a statically typed programming language, you will indeed have some interesting insights and see new solutions that you would otherwise miss. However, the category error that fans of static typing often seem to make is that they ascribe the results to the static type system, and therefore usually get stuck with one particular set or kinds of rules.)

Apart from that, I believe that LispWorks is a really good development environment.


Among software projects you've participated in what's your favorite?

I don't know how to answer that. At any point in time, I'm always most excited by the one I'm currently working on. So far, I am quite proud of ContextL and the ClassFilters project for Java, because they are or were both used in one way or the other in "real" applications. Closer to MOP is a favorite project of mine, because it makes me feel like I can give something back to the Lisp community from which I otherwise benefit so much. But this doesn't mean I dislike anything else I have done in the past.


One of your papers, "Reflection for the Masses", researches the ideas behind 3-Lisp, "a procedurally reflective dialect of LISP which uses an infinite tower of interpreters". Can you summarize them here?

That paper was actually mostly the work of Charlotte Herzeel, and I was only her sounding board for detailing the ideas in that paper. Reflection is one of the essential concepts that was born out of Lisp. Every program is about something, for example a financial application is about bank accounts, money and interest rates, and a shopping application is about shopping items, shopping carts and payment methods. A program can also be about programs, which turns it into a meta-program. Lisp macros are meta-programs, because they transform pieces of code in the form of s-expressions into other pieces of code. C++ templates are also meta-programs in the same sense. Some meta-programs are about "themselves," and thus become reflective programs.
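
To make the "programs about programs" idea concrete, here is a tiny example of my own (not from the interview): a macro is an ordinary function from code, represented as s-expressions, to other code, which the compiler then uses at compile time.

  (defmacro with-timing (&body body)
    "Wrap BODY in code that reports how long it took (ignoring variable capture for brevity)."
    `(let ((start (get-internal-real-time)))
       (multiple-value-prog1 (progn ,@body)
         (format t "~&Took ~,3F seconds~%"
                 (/ (- (get-internal-real-time) start)
                    internal-time-units-per-second)))))

  ;; (macroexpand-1 '(with-timing (heavy-computation))) shows the generated code.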

3-Lisp is "procedurally reflective" in two senses: On the one hand, it allows you to inspect and change the body of procedures (functions). Common Lisp, for example, also gives you that, in the form of function-lambda-expression and compile/eval, among others. On the other hand, 3-Lisp also allows you to inspect and alter the control flow. Scheme, for example, gives you that in the form of call/cc, but in 3-Lisp this is actually part of the eval interface. 3-Lisp goes further than Common Lisp and Scheme combined, in that it provides not only first-class access to function bodies and continuations, but also to lexical environments, always both with facilities to inspect and modify them, and provides a clean and integrated interface to all of these features. Unfortunately, because 3-Lisp goes that far, it cannot be fully compiled but always needs to be able to resort to interpretation, if necessary.

The reason for the requirement to always have an interpreter around is because the eval interface in 3-Lisp is so flexible that you can run programs inside an eval invocation that in turn can inspect and change (!) the environment in which eval runs, and can then invoke further evals in those changed environments, to arbitrary levels of recursive invocations of eval. These recursive invocations of eval build what is called a reflective tower, where every level in the tower is conceptually an interpreter being executed by an interpreter one level up in the tower of interpreters. The amazing thing about 3-Lisp is that an implementation of 3-Lisp can actually collapse the tower into one level of interpretation, and arrange that one interpreter in such a way that the several different levels of interpretations are only simulated, so the tower is actually just an "illusion" created by the 3-Lisp implementation.

This may all sound quite esoteric, but is actually practically relevant, because there is strong evidence that all meta-programming approaches eventually need such towers, and some ad-hoc way to collapse the towers. For example, you can find the tower in Racket's and R6RS's macro systems, where they are explicitly mentioned; in Common Lisp's macros, where eval-when is used to control which part of the tower sees which definitions; in the CLOS Metaobject Protocol, where the generic function dispatch can be influenced by other generic functions, which can in turn be modified by some meta-classes at a higher level; in the template metaprogramming system of C++, where "concepts" were devised for C++11 (and rejected) to introduce a type system for the template interpreter; and so on, and so on. If you understand the concept of a reflective tower better, you can also better understand what is behind these other meta-programming approaches, and how some of their sometimes confusing semantic difficulties can be resolved.
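
A small, everyday Common Lisp place where this tower becomes visible (my example, not Pascal's): a helper function that a macro needs at macro-expansion time lives one level "up" from ordinary runtime code, and eval-when is how you make the definition visible at that level.

  (eval-when (:compile-toplevel :load-toplevel :execute)
    (defun build-accessor-name (struct slot)
      ;; Used by the macro below while the file is being compiled.
      (intern (format nil "~A-~A" struct slot))))

  (defmacro define-getter (struct slot)
    `(defun ,(build-accessor-name struct slot) (object)
       (getf object ,(intern (symbol-name slot) :keyword))))

  ;; (define-getter point x) defines POINT-X, so that (point-x '(:x 1 :y 2)) => 1.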


If you had all the time in the world for a Lisp project, what would it be?

I have some ideas how to design reflection differently for a Lisp dialect, which I believe has not been tried before. If I had all the time in the world, I would try to do that. I also have many other ideas, so I'm not sure if I would be able to stick to a single one.


Anything else I forgot to ask?

I think one of the most underrated and most underused Lisp dialects is ISLISP. I think people should take a much closer look at it and should consider using it more often. In particular, it would be an excellent basis for a good teaching language.


Discussion on Hacker News