
> I also don't see the point of optimizing on the number of line of code if the complexity per character is increased (did I just invented a kpi here?)

It doesn't increase linearly, though. #:'=: is seven characters that mean as much as 68 separate lexemes in Lisp:

    ;; Named COUNT-PAIRS rather than COUNT, to avoid redefining the
    ;; standard CL:COUNT function.
    (defun count-pairs (list)
      (let ((hash (make-hash-table)))
        (dolist (el list)
          (incf (gethash (cadr el) hash 0) (car el)))
        (let (result)
          (maphash (lambda (key val)
                     (push (list val key) result))
                   hash)
          result)))
So you don't need all that; you don't even bother creating that "helper" function in the first place.

The choice of operators (and their definitions) was made with great care in an effort to maximise this effect.



That's using nothing but the features of ANSI Lisp, a standard which at the time of its standardization was already criticized for being large, and which faced pressure to stay small.

The hashing stuff feels a little "bolted on" in ANSI CL. Add-on libraries can round out the functionality.

In another dialect, TXR Lisp, the whole (let ...) part above reduces to this:

   (hash-pairs hash) ;; get key-value pairs as two-element lists
Such a function could easily be written in ANSI Lisp.
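For instance, a plain ANSI CL definition might look like this (this is my sketch of what a hash-pairs could be, not the actual TXR implementation):

    (defun hash-pairs (hash)
      "Collect the (key value) pairs of HASH into a list."
      (let (result)
        (maphash (lambda (key val)
                   (push (list key val) result))
                 hash)
        result))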

To reduce a sequence into a histogram returned as a hash in TXR Lisp, we would do a group-reduce:

   [group-reduce (hash) identity (op succ @1) INPUT-LIST-HERE 0]
The idea here is that the input items are projected through a function (here specified as identity, to take the items themselves) and grouped into buckets by the projected value.

Within each of these buckets, an independent reduce (i.e. left fold) is going on, whereby a hash table holds the reduce accumulators.

For maximum flexibility, group-reduce asks the caller to specify the hash table rather than making one implicitly; hence the (hash) argument.

Likewise, the initial value for the reduce (shared by all the buckets) is specified explicitly. reduce is too general a concept to justify making it a defaulted optional argument that goes to zero.

Here, we somewhat abuse reduce. We start the accumulator at 0, and at each reduction step we ignore the incoming value, producing the successor of the accumulator instead. All we do, then, is count the reduction steps in each bucket, which yields a frequency histogram.
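In ANSI CL terms, the group-reduce described above could be sketched roughly like this (a hypothetical definition following the call shown earlier, not the actual TXR implementation):

    (defun group-reduce (hash keyfun redfun seq init)
      "Fold each item of SEQ into the bucket chosen by KEYFUN,
    using REDFUN as the reducing function and INIT as the
    per-bucket initial accumulator."
      (map nil (lambda (item)
                 (let ((key (funcall keyfun item)))
                   (setf (gethash key hash)
                         (funcall redfun (gethash key hash init) item))))
           seq)
      hash)

With that, the histogram would be (group-reduce (make-hash-table) #'identity (lambda (acc x) (declare (ignore x)) (1+ acc)) list 0): each item bumps its bucket's counter by one.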

(op succ @1) is needed rather than plain succ because the reducing function takes two or more arguments (it gets called with two), whereas succ requires exactly one. If succ ignored arguments after the first one, the call could look like this:

  [group-reduce (hash) identity succ INPUT-LIST-HERE 0]
But relaxing the checks on the number of arguments in library functions for the sake of code golfing isn't a good idea. Further, if the function constructed the hash implicitly and, being geared toward numeric processing, defaulted the accumulator to zero, the call could look like:

  [group-reduce identity succ INPUT-LIST-HERE]
and if we cripple group-reduce by taking out the projection we can get it down to:

  [group-reduce succ INPUT-LIST-HERE]
Basically, Lisp can be succinct to the point of diminishing returns, if you have the right set of functions for the task at hand.

The traditional, mainstream Lisps assume that the developers will make these for themselves, especially for tasks outside of list processing.
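For example, this is the kind of helper a CL shop would write once and keep around (the name frequencies is hypothetical):

    (defun frequencies (list)
      "Return a hash table mapping each element of LIST to its count."
      (let ((h (make-hash-table :test #'equal)))
        (dolist (x list h)
          (incf (gethash x h 0)))))

After that, the whole histogram task is a single call.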



