There is a saying about the band Velvet Underground - that not many people bought their records, but everyone who bought one went out and started their own band. I think Lisp is kind of similar - its influence is much larger than its fame, but every language designer is inspired by it to some extent.
I do think Lisp kind of missed the boat when a wave of dynamic languages (Perl, Python, Ruby) went mainstream. Today the most exciting developments in language design seem to be around static type systems, type inference etc.
I think there is a lot of exciting stuff happening in Racket and Clojure around contract systems right now with things like Racket Contracts and Clojure Spec. This approach provides an appealing alternative to static typing because it focuses on semantic correctness while static typing focuses primarily on self consistency.
In my view the former is much more valuable because ultimately we want to know that the code is doing what was intended, and static typing doesn't appear to be an effective tool for encoding intent.
> static typing doesn't appear to be an effective tool for encoding intent
Static typing is about as effective at encoding intent as intention revealing names. That is to say, if you are diligent and clever in how you structure and name things, a system can be very intention revealing.
HN user jerf has a method of revealing intent through types in golang, which he calls something like microtyping. Basically, everything has its own type, based on its domain semantics. So if you're using a float64 to store time deltas, you don't just use the float64. Instead, you base a TimeDelta type on float64.
type TimeDelta float64
Then golang allows you to attach methods to that type, which can then provide more opportunities for intention revealing. Also, this prevents a programmer from mistakenly using a float64 representing width and shoving it into a time calculation. Is this as ironclad as Eiffel Design by Contract or Haskell typing? No. However, I've been using it myself, and I can affirm that yes, the compiler will warn you about some semantic errors if you do it this way.
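For what it's worth, the same idea can be sketched outside Go. Here's a rough Python analogue using typing.NewType, where a checker like mypy (not the runtime) does the enforcing - the names are made up purely for illustration:

    from typing import NewType

    # Distinct domain types layered over plain floats (illustrative names only).
    TimeDelta = NewType("TimeDelta", float)   # seconds between two events
    Width = NewType("Width", float)           # metres

    def slow_down(delta: TimeDelta) -> TimeDelta:
        # Doubling a time delta still yields a time delta.
        return TimeDelta(delta * 2)

    w = Width(3.5)
    slow_down(TimeDelta(0.25))   # fine
    slow_down(w)                 # mypy rejects this: a Width is not a TimeDelta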
The big problem with the Intention Revealing Names pattern is that there is no enforcement and that it is subjective. For this reason, it's likely to fall apart in a large enough project that lives long enough.
>HN user jerf has a method of revealing intent through types in golang, which he calls something like microtyping. Basically, everything has its own type, based on its domain semantics. So if you're using a float64 to store time deltas, you don't just use the float64. Instead, you base a TimeDelta type on float64.
>type TimeDelta float64
>Then golang allows you to attach methods to that type, which can then provide more opportunities for intention revealing. Also, this prevents a programmer from mistakenly using a float64 representing width and shoving it into a time calculation.
IIRC, this technique, or something similar, is shown in The Go Programming Language book.
F# definitely allows this with single case discriminated unions. And in F#, you can add members to any type, which includes discriminated unions, records, classes, etc.
In fact, you can add member functions elsewhere, after the definition of the type. So you can define the type, write functional helper functions against that type, and then later add member functions to the type that are defined in terms of the helper functions.
Didn't know that stuff, cool, thanks. (I'm in the early stages of learning F#.) That feature you mention sounds a bit like Ruby's open classes, where, in file B (or even in file A), you can add a method to a class C (where C is defined in file A), outside of the definition of C. I was just reading somewhere else recently that some other language also has this feature - maybe it was Lisp, in the recent thread about it.
The kinds of errors where you have a type mismatch are wholly uninteresting in my experience, and get caught very early on in the development process. What I really care about is semantic correctness.
For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they’re in order. This is difficult to express using most type systems out there. Even when the type system is powerful enough to encode such semantics, it's both difficult to figure out how to express and understand the resulting specification.
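To make that concrete, here is a minimal sketch of such a postcondition as a plain runtime check - in Python rather than Spec, with names chosen only for illustration:

    from collections import Counter

    def sort_postcondition(inp, out):
        # Same elements with the same multiplicities, and in non-decreasing order.
        return Counter(inp) == Counter(out) and all(
            out[i] <= out[i + 1] for i in range(len(out) - 1)
        )

    def checked_sort(lst):
        result = sorted(lst)
        assert sort_postcondition(lst, result), "sort contract violated"
        return result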
I was curious about your example of expressing and validating that a sort function returns a sorted list using the type system. I found this paper [0] by McBride, McKinna, and Altenkirch about using dependent types in the language Epigram to do exactly that. I have only scanned it, and it remains to be seen if it truly is too "difficult to figure out how to express or understand the resulting specification." Hopefully with more research it will only become easier!
The fact that it's a research topic really says all you need to know in my opinion. It's not a question of whether you can do it in principle, but rather whether it's an effective alternative to other approaches.
I agree, I don’t think dependent typing and validating the correctness of your program using formal proofs is ready for mainstream adoption. I think it might be ready for some domains, like aeronautics, where the cost of errors is very high.
It's not a question of whether you can do it in principle or not. My point was that I don't think it's an effective approach for creating such specifications compared to the alternatives such as runtime contracts.
Why is that? In fact, I would argue you want both. You want to use a contract to check the incoming data, but once it's been checked, you'd want to wrap it in a type that guarantees the property is set.
Otherwise, you have to iterate over a potentially very large set of data to check if it's sorted every time you want to use a function that has that contract.
This is what strong typing is about - not having to specify the exact arbitrary machine types for a particular implementation, but using compiler checked types to enforce invariants of a design.
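A minimal sketch of that "check once, then carry the guarantee in a type" idea, in Python purely for illustration (a static type checker would enforce the boundary at compile time; here it's a convention plus one upfront check):

    class SortedList:
        # The invariant is checked once, at construction time.
        def __init__(self, items):
            items = list(items)
            assert all(items[i] <= items[i + 1] for i in range(len(items) - 1))
            self.items = items

    def binary_search(sorted_lst: SortedList, target) -> bool:
        # Can assume the invariant without re-scanning the whole list.
        lo, hi = 0, len(sorted_lst.items)
        while lo < hi:
            mid = (lo + hi) // 2
            if sorted_lst.items[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo < len(sorted_lst.items) and sorted_lst.items[lo] == target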
I can't swear to this but I think they've gone basically hand in hand. I'm pretty sure I read a few Racket posts about how they use contracts to keep a safe interface between typed and untyped code and just in general between languages.
Because static typing primarily focuses on showing that your types align. This is completely tangential to knowing that the code is doing what you intended.
In fact, it's often at odds with understanding the intent of the code because you're limited to a set of statements that can be verified by the type checker. This can make it harder for the human reader to understand what the code is meant to be doing.
A human reader has to be able to read that and understand that the types are encoding the intent correctly. I don't know about you, but I have a much easier time knowing whether the dynamically typed version in Python is correct or not:
    def insertionSort(lst):
        for index in range(1, len(lst)):
            currentvalue = lst[index]
            position = index
            while position > 0 and lst[position - 1] > currentvalue:
                lst[position] = lst[position - 1]
                position = position - 1
            lst[position] = currentvalue
Complex type definitions effectively become their own meta-programs that the compiler uses to verify your program, but now you've just pushed the problem back a step since somebody still has to read and understand the meta-program to know that it's verifying the correct thing.
Is Elixir/Erlang's Dialyzer/Dialyxir similar to Clojure's spec?
I'd very much like to read more on this topic, but because my knowledge of type systems pretty much started with, and is limited to TypeScript, I don't know what to ask specifically. Have you written anything about this, or could you point me to any articles that discuss this difference?
Basically, I've become convinced that TypeScript, at least for anything nontrivial, is worth using over just JavaScript. There's nothing like it in the Elixir world, but I've been meaning to check out Dialyzer because I do sometimes miss what TypeScript offers.
Dialyzer is more of a linter from my understanding. Instead of aiming for formal correctness, it looks for cases that are obviously incorrect. Meanwhile, the goal of Spec is to provide the ability to specify runtime contracts for what the code is meant to be doing.
I think the two approaches complement each other in practice. I don't know of any articles comparing them though.
The idea is that, if you can describe the problem (here, sorting) with a few lines, you don't have to prove that the Idris code is correct, because the machine does it for you. You can drop that sort function into any critical space mission you want without even looking at the code, and, if the premises are true, it will always sort correctly.
You can imagine three kinds of code:
- code that describes the problem (both human readable and machine readable)
- code implementing the solution (here, your Python script)
- code proving the implementation correctly answers the problem (which would be math if you had to prove your Python code on a whiteboard)
The idea is more about reusing and sharing trusted code, re-running the proof on your own system to make sure it's legit, and moving on. You don't need to look at the proof to use the sort function; you would only look at the code that describes the problem and make sure it's the right one for your problem at hand.
>The idea is that, if you can describe the problem (here, sorting) with a few lines, you don't have to prove that the Idris code is correct, because the machine does it for you
The machine can't show that your specification is correct, only that it's self consistent. What you're doing is writing a proof that the compiler will use to verify your code. However, if you make a mistake in your proof, then the compiler will happily verify the wrong thing.
The argument is that it's much harder to spot a mistake in a 300 line Idris proof than it is in a 5 line Python implementation.
> However, if you make a mistake in your proof, then the compiler will happily verify the wrong thing.
I don't use theorem provers in my day-to-day tasks, but I took the 101 course at university. Back then I wrote basic proofs with the tools, and if your problem is correctly stated and entered in the system, you simply CAN'T arrive at an invalid proof. The only way to fool the compiler is to use logical shortcuts, which are clearly defined in terms of language keywords, so you know exactly where the weaknesses in the proof are and can look for them.
Edit: I don't know every theorem prover out there, so to give a bit more context about my experience, it was with the Coq theorem prover
> if your problem is correctly stated and entered in the system, you simply CAN'T arrive at an invalid proof.
How is that different from: if your problem is correctly stated, and correctly coded into the system using x86 machine language, then you can't arrive at a bug!
At least in Coq, there is no "bug" when you write a proof; it uses mathematical notation and logical rules to conclude a fact. Because of that, you cannot state all the problems you would be able to state with x86. What I meant is that such theorem provers don't give you false positives: if it tells you your proof is correct, then there can't be a "bug" in your reasoning.
In Coq your proof is your program, and if you end up proving the wrong thing that's a bug in the program. There's no magic here. Only a human can tell whether the code does what was intended. To do that you have to understand the code.
If you are proving the wrong statement, then you simply didn't state the initial problem correctly, it has nothing to do with a bug in your proof. It's like saying there is a bug in your implementation of a sort function, while we initially asked you to implement a max function. Even if your sort function is correctly implemented, it's still a "bug" to the person who asked you to implement a max function.
That's my whole point. You're really just pushing the problem a step back instead of solving it. Instead of worrying about bugs in your implementation, you're now worrying about bugs in your specification.
Problem is that these proofs are not the specification. They are very detailed. The specification says "write me a sort function", and the proof is some gobbledygook that deals with irrelevant minutiae of the sorting implementation. Where is the proof which proves that that proof matches the specification?
Because only a human can decide whether the specification matches the intent. This is not a hard concept. The machine can only tell you that your specification is self-consistent, not that it's specifying what you wanted.
The machine could go a level above and help the human reflect on their intent. That would basically be a machine convincing the human that they are already happy right now, that they don't need to earn more money, and therefore don't need that "new e-commerce website". An artificial psychologist conversational agent would help you figure out what you really want in life :)
I think you missed the point I was making which is that you still have to know that you're proving the right thing.
It's entirely possible to have a bug in the proof itself, at which point your program is going to be incorrect.
Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.
> I think you missed the point I was making which is that you still have to know that you're proving the right thing.
Right now, it's a missing part in many theorem prover systems (but I didn't do exhaustive research, so it's more my point of view): code to succinctly describe, or state, the problem you are trying to solve. For example, for a sorting problem, you would state the problem in English: after sorting, for any element E, the next element in the sequence should be greater than or equal to E.
> It's entirely possible to have a bug in the proof itself, at which point your program is going to be incorrect
The premise of theorem provers is that, if the problem is correctly stated, then a proof of a solution that passes the prover's compiler and a few human reviews is even more unlikely to have a bug.
> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.
I would be curious to see a Python proof of the Python sort function. I mean an actual logical and mathematical proof, not a unit test or fuzz test. It would require creating a library with a DSL around math and logic.
“Saying it’s correct in the semantic sense” = proving the code.
They found a bug in Java’s binary search after 10 years of it going unnoticed, so I think you’re overestimating your own ability to prove code correct in your head.
Generative testing/spec is absolutely useful. It doesn’t subsume static typing and much less theorem proving though, and you can QuickCheck your pre and postconditions in static languages as well.
I don't find that to be a convincing argument in the slightest. The type system in Java also failed to catch the bug, and there's no guarantee that you wouldn't end up with an error in a specification using a more advanced type system that could actually encode that constraint.
The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works well in a reasonable amount of time. Sure, you might end up with a binary search bug in your code that goes unnoticed for 10 years, but clearly the world didn't stop because of that, and people have successfully delivered many projects in Java despite that bug that work perfectly fine.
> The type system in Java also failed to catch the bug
Java is not a theorem prover, and your comment was about theorem provers.
> you wouldn't end up with an error in a specification
Again, completely orthogonal to your argument. If you have an error in specification tests won't help you because you'll test the wrong thing.
> The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works
Again, your comment was about "saying things are correct in the semantic sense". You're saying now "I don't care about correctness in the semantic sense, I care about delivering software that "works" (whatever this means; clearly it doesn't mean software without bugs) in a reasonable amount of time" and that's fine. I do too, but again, nothing to do with your original argument.
>Java is not a theorem prover, and your comment was about theorem provers.
No, my comment was about formal methods in general. Every type system is a form of machine-assisted proof in practice. It's just that most type systems don't allow you to prove interesting things about your code.
The main problem is that of basic cost/benefit analysis. If it takes significantly more effort to write a full formal proof, and you end up sometimes catching minor errors, the effort is not justified in most scenarios.
>Again, completely orthogonal to your argument. If you have an error in specification tests won't help you because you'll test the wrong thing.
This is completely central to my argument. I'm saying that encoding a meaningful specification using a type system is a lot more difficult than doing that using runtime contracts, or even simply reasoning about the code unassisted.
I can read through the 5 lines of Python code implementing insertion sort and be reasonably sure that it's correct. I would have a much harder time verifying that 300 lines of Idris are specifying what was intended.
> You're saying now "I don't care about correctness in the semantic sense, I care about delivering software that "works" (whatever this means; clearly it doesn't mean software without bugs) in a reasonable amount of time" and that's fine. I do too, but again, nothing to do with your original argument.
I'm saying that you have diminishing returns. What you want to show is semantic correctness, and type systems are a poor tool for doing that. So, while you can use a type system to do that in principle, the effort is not justified the vast majority of the time.
> I can read through the 5 lines of Python code implementing insertion sort and be reasonably sure that it's correct.
Binary search is possibly one of the simplest and most basic CS algorithms, and yet, it took people who were "reasonably sure" 10 years to find that bug.
> I would have a much harder time verifying that 300 lines of Idris are specifying what was intended.
Then don't use Idris?
> I'm saying that you have diminishing returns.
I agree. However, that was not your argument. Your argument was that tests subsume proofs, and that's obviously wrong.
Also, types vs tests is a false dichotomy. Personally, I find a strongly typed language + QuickCheck to be the most practical way of developing complex software. YMMV, and that's fine.
>Binary search is possibly one of the simplest and most basic CS algorithms, and yet, it took people who were "reasonably sure" 10 years to find that bug.
Again, saying there is a bug is not interesting. The question is how much this bug costs you, and how much time you're willing to invest in order to guarantee that you won't make that type of a bug.
>Then don't use Idris?
I think you entirely missed the point I was making here.
>I agree. However, that was not your argument. Your argument was that tests subsume proofs, and that's obviously wrong.
My point is that tests and runtime contracts provide sufficient guarantees in practice. Nowhere have I argued that they provide the same guarantees as proofs. The argument is that the cost of the proofs is much higher, and the proofs themselves can be harder to reason about, making it harder to tell whether they're proving the right thing.
Consider the case of Fermat's last theorem as an example. It's pretty easy to state: a^n + b^n = c^n has no solutions in positive integers for n > 2. It's also easy to test that this holds for any particular set of inputs you might care about. However, proving it for all possible inputs is a monumental task, and there are only a handful of people in the world who would even be able to follow the proof.
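(To be concrete about the "easy to test" half: checking the statement over some finite range is a few throwaway lines, e.g. the sketch below with arbitrary bounds, whereas the proof covering all inputs took centuries.)

    def fermat_holds_up_to(max_val, max_exp):
        # Check that a^n + b^n != c^n for 1 <= a <= b <= max_val and 3 <= n <= max_exp.
        # Any candidate c must be smaller than 2 * max_val, so that set covers all cases.
        for n in range(3, max_exp + 1):
            nth_powers = {c ** n for c in range(1, 2 * max_val + 1)}
            for a in range(1, max_val + 1):
                for b in range(a, max_val + 1):
                    if a ** n + b ** n in nth_powers:
                        return False
        return True

    print(fermat_holds_up_to(100, 8))   # True for this range; says nothing about all inputs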
>Also, types vs tests is a false dichotomy. Personally, I find a strongly typed language + QuickCheck to be the most practical way of developing complex software. YMMV, and that's fine.
Again, that is not a dichotomy I was suggesting. My point was that I think runtime contracts are a more effective way to provide a semantic specification than a static type system. I also said that static typing restricts how you're able to express yourself, leading to code that's optimized for the benefit of the type checker as opposed to that of the human reader. I'm not sure how you got types vs tests from that.
> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.
That was your original, nonsensical, point (emphasis mine).
> My point is that tests and runtime contracts provide sufficient guarantees in practice.
That's your revised point (which I don't care much about discussing), after moving the goalposts sufficiently.
> The thing to realize is that theorem proving is not a business goal in most cases. What you care about is being able to deliver software that works well in a reasonable amount of time. Sure, you might end up with a binary search bug in your code that goes unnoticed for 10 years, but clearly the world didn't stop because of that, and people have successfully delivered many projects in Java despite that bug that work perfectly fine.
I agree with you on that part. However, if the whole stack of computer technologies were more reliable, we would maybe think about new business use cases that we subconsciously dismissed because of a lack of trust in current technology.
It's basic cost/benefit analysis. At some point you end up with diminishing returns on your effort that are simply not worth the investment.
Also worth noting that nobody has been able to show that static typing leads to more reliable code in practice https://danluu.com/empirical-pl/
Claiming that to be the case is putting the cart before the horse. A scientific way to approach this would be to start by studying real world open source projects written in different languages. If we see empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as a reduction in defects, we can then make a hypothesis as to why that is.
For example, if there was statistical evidence to indicate that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be further tested, and that would tell us whether it's correct or not.
This is pretty much the opposite of what happens in discussions about static typing, however. People state that static typing has benefits and then try to fit the evidence to that claim.
I agree with you, there is not much data about the effectiveness of more rigorous software development tools. It's clearly a research topic.
My intuition is that, with advances in robotics and AI, we may see the need for more robust logical systems. At some point, mathematicians and algorithm designers may use those tools to prove new concepts, which they will be able to share and validate more quickly, which will then percolate into software engineering more quickly.
Beyond software engineering, the experiments with theorem provers may lead to new ways of exchanging information, such as news, mathematics and legal documents. There are inspirations to take from the formalizations created in theorem provers for applying automated reasoning in more domains than just programming (imho).
Edit: Intuitively, it has to do with building trust in the algorithms and information we share at light speed. With more validated building blocks, we may explore more complex systems. Accelerating trust between different entities can only add value, I guess. That's all very abstract tho.
Edit 2: to put it in even fewer words, theorem provers may be more about collaboration than just pure technical engineering
To be honest, I think that once we have advances in AI there will come a point where you won't really be doing the kind of programming that we do today by hand. You'll have an AI assistant whom you'll give queries that are close to natural language, and it will figure out how to implement them for you. I can see that coming within a few decades for many kinds of applications such as your typical CRUD apps.
In such an hypothetical world, the "typical CRUD application" may just not even exist anymore.
I was talking about advances in consumer drones, autonomous cars, and personal robots, such as SpotMini from Boston Dynamics. More autonomous embedded systems evolving around us means different needs in terms of safety in software development.
AI will have to explain the reasoning behind its decisions (prove it did right) in natural language. The humans who do that in our world are scientists, politicians, lawyers and mathematicians. Those people use a specific kind of natural language, with domain specific words, to communicate. Theorem provers in software engineering are a step in that direction imho
Sure, you wouldn't really interact with a computer the way we do now once you have AIs that can understand natural language. It would be more like having a personal secretary.
I don't think theorem provers are actually the way to get there though. AI systems are probabilistic in nature, and neural nets are self-organizing. One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging. It's something that needs to be trained, and doesn't come to most people naturally. Our whole cognition is rooted in heuristics.
> One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging
So far, "neural networks" in AI is a fancy name for what is nothing more than a giant equation system with many parameters. It's not even close to actual, self-organizing biological neural networks. It's closer to a weather prediction model.
The human brain is not based on formalisms, so let's create an AI that compensates for the human brain's weaknesses. Maybe we shouldn't try to replicate the human brain's capacities, but rather create a new "form of life" complementing our biological skills.
So far, theorem provers, with expert systems, are the only works I'm aware of about systematically explaining reasoning and decisions.
Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.
I do think theorem provers can be useful in certain contexts, and I can see AI using these tools to solve problems.
> Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.
This is not correct in the current state of the tech. Neural networks are parametrized equation systems. You train the parameters on a dataset in a training phase, then freeze the result, then distribute the model to devices. Once distributed, the "neural network" can't be modified, and stops "learning" new cases.
Edit: I mean, you are not completely wrong, you described the training phase of the neural network. That's only half of the story tho.
> Reading and understanding the Idris proof is actually more work than understanding the untyped Python version, therefore it's actually harder to say whether it's correct or not in a semantic sense.
But you don't have to write a proof to sort a list in Idris. So you're not being honest with your comparison.
Sure, but then you're not getting the formal guarantees. The whole argument here is regarding whether adding more formalism improves code quality. If you agree that less formalism is better in some cases, then it's just a matter of degrees.
>Because static typing primarily focuses on showing that your types align. This is completely tangential to knowing that the code is doing what you intended.
That's not true. It entirely depends on the power of your type system.
Types do not just encode "alignment" matching -- they can be used to encode all kinds of intent.
To give but one example, a type that is bounded by a range (e.g. 1..12, as Ada allows) doesn't just encode alignment matching (that your "month" variable doesn't get passed where a regular int or a "temperature" is expected) but also that it's not 16.
That of course is just the tip of the iceberg...
In Rust the type system enforces intent regarding the lifetime of a variable as well.
You can encode interesting things using the type system, but I don't find it's nearly as practical as the alternatives such as runtime contracts. For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they’re in order. This is difficult to express using most type systems out there, while trivial to do using a contract system such as Spec.
>For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they’re in order.
That's a test though -- and a contract is basically an assertion for such a test.
Types allow you to express and check things before the code is run.
The question isn't whether you can do something in principle. It's whether the approach is more effective than alternatives. My experience is that it's not.
>It's whether the approach is more effective than alternatives. My experience is that it's not.
It obviously is more effective, since you can prove parts of your program's behavior throughout at compile time -- whereas contracts depend on passing through some particular code.
Besides those are two different things. You can have types to do the heavy lifting AND contracts a la e.g. Eiffel
It's only obviously more effective coming from a very narrow perspective. Static typing can provide stronger guarantees, but if it takes significantly more effort to do that and you already get adequate guarantees with contracts then that effort is likely not justified.
The goal is typically to ship software that works well enough in a reasonable amount of time, and my experience is that contracts provide a much better tool for doing that.
While contracts need to execute the code, generative testing provides you with a sufficient sample to be reasonably sure that the code is doing what you want. The contracts also let you directly specify what the code is intended to be doing, something that's not easy to do using the type system.
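As a bare-bones illustration of the generative side (plain Python with random inputs standing in for a real tool like Spec's test.check integration or QuickCheck; the contract here is simply "the output is an ordered permutation of the input"):

    import random
    from collections import Counter

    def check_sort_contract(sort_fn, trials=1000):
        # Throw randomly generated inputs at sort_fn and assert the contract each time.
        for _ in range(trials):
            data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
            out = sort_fn(list(data))
            assert all(out[i] <= out[i + 1] for i in range(len(out) - 1))
            assert Counter(out) == Counter(data)
        return True

    check_sort_contract(sorted)   # a buggy sort would almost certainly be caught here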
In some situations proving parts of the program behavior may be worth the effort, but I simply don't agree that it's true for the general case.
Best I can tell, when it was introduced it massively outpaced any of the "consumer" hardware out there. The end result was the whole issue of Lisp machines tailored to accelerate the language, possibly souring academia on the language in the process.
It's been so sad to see that while the AI winter ended, the Lisp winter that started around the same time never did. Maybe it's coming back after consolidation on the languages end: SBCL and Clojure are coming ahead as clear winners, and are quite powerful.
I'm sure I'm not the only guy who started learning Lisp after reading Paul Graham's essays, and I'm going to keep using Lisp for several reasons, but the momentum just isn't there.
Lisp certainly wasn't in vogue when Paul Graham did his startup and wrote those essays.
"Robert and I both knew Lisp well, and we couldn't see any reason not to trust our instincts and go with Lisp. We knew that everyone else was writing their software in C++ or Perl. But we also knew that that didn't mean anything. If you chose technology that way, you'd be running Windows. When you choose technology, you have to ignore what other people are doing, and consider only what will work the best."
"This is especially true in a startup. In a big company, you can do what all the other big companies are doing. But a startup can't do what all the other startups do. I don't think a lot of people realize this, even in startups."
[...]
"If other companies didn't want to use Lisp, so much the better. It might give us a technological edge, and we needed all the help we could get."
It's more that a startup can't do what all the others do in every aspect. You need to do something differently (different niche) or better (if there are others in your niche).
The R language that gets used a lot for machine learning is actually very Lisp-like. So maybe that's introducing a lot of people to Lisp without them realizing it.
AI is having a problem because not every problem can be solved with numeric calculation. Not everything can be measured with a metric, and this especially runs into problems when measurements are being used to measure one another - it becomes very convoluted.
Is SBCL winning in anything? One of my acquaintances, a Lisp zealot, convinced me to jump in and make my next project in SBCL. Short story, it was a joke. Nobody seems to have done any serious work in SBCL in over a decade. Basic things are either completely missing or broken. Clojure I can at least make working software in. The fact that you are putting SBCL up there with Clojure makes me wonder if you've ever used SBCL.
It is one of the most active implementations of Common Lisp (the other being Clozure CL) and one of the fastest, thanks to its support for optional type hinting (which it had long, long before gradual typing came into vogue recently). With some type hinting, it is possible to generate code that's almost as fast as C (in certain cases, of course, but still much better than other implementations).
>Nobody seems to have done any serious work in SBCL in over a decade.
Could you please elaborate on what you mean by this?
SBCL releases a new version that contains both enhancements and bugfixes every couple of months, with the latest version (1.4.9, released on June 28, 2018, ~5 days ago) coming just one month after the previous version, 1.4.8, so I really don't understand what you meant when you said that no one "seems to have done any serious work in SBCL in over a decade".
Or did you mean to say that no one uses SBCL to do serious work? Well, with some googling, I'm sure you'll be able to find that, while not many, there are companies that use SBCL (and other Common Lisp implementations) in production and in other non-trivial work (not to mention various freelancers who use SBCL to put food on the table). Some of those companies are also involved with the development of SBCL.
>Basic things are either completely missing or broken.
Care to list them? One of the strengths of SBCL is that the devs are very responsive, especially to bug reports, so I'm sure they would be more than happy to fix the problems that you encountered.
Sure, here are two off the top of my head: Package management is a non-thing in SBCL. It just doesn't exist. Have fun downloading 6-to-12 year old zipballs. The basic HTTP server, hunchentoot IIRC, crashes after it serves its first request. Hope your favicon.ico was a good one!
>Package management is a non-thing in SBCL. It just doesn't exist.
When was the last time you used SBCL, or Common Lisp in general? Quicklisp (https://www.quicklisp.org/beta/), a modern package manager that supports dependency resolution and works across almost all currently active Common Lisp implementations (not just SBCL), has been a thing for years now.
>The basic HTTP server, hunchentoot IIRC, crashes after it serves its first request
Would love to see the backtrace to see the reasons (and to submit a bug report if the issue warrants one), as in my experience Hunchentoot is very stable. I regularly use it in my freelance jobs without any issues.
Not sure what you tried to do, but over the last decade I've done a lot of commercial Lisp programming and wonder what is "completely missing or broken".
ITA software [1] (acquired by Google) runs on SBCL. ITA people work on SBCL full time. Maybe you should tell us what is broken and/or completely missing? I have this feeling you've never used Common Lisp and are making things up as you go..
The momentum will never be there; Lisp is not geared for public consumption, period.
The commodification of programming and proliferation of languages like Javascript and Python means not only that the "masses" lack fundamental understanding of Computer Science but that they have been conditioned to go after superficial ease-of-use rather than actual value. Anything that requires upfront effort to reveal its secrets is rejected for what is fundamentally broken but covered in a veneer of user friendliness.
Python is the new Perl (Go is already gearing up to replace it where reliability is at stake) and in a couple of years the masses will look at it as a colossal and obvious failure, but they will make the same mistakes again and again when the next shiny thing comes along because the entire domain is rigged in multiple ways.
I suggest reading Richard Gabriel's essays, they are very depressing but also illuminating. In many ways he predicted what is taking place now, the rise of commodification and the obliteration of the field by companies like Google.
> Python is the new Perl (Go is already gearing up to replace it where reliability is at stake) and in a couple of years the masses will look at it as a colossal and obvious failure, but they will make the same mistakes again and again when the next shiny thing comes along because the entire domain is rigged in multiple ways.
Before Perl, Tcl was supposed to be the Lisp replacement. As far as I know this "language X is an adequate/better replacement for Lisp" argument goes back to Ted Nelson ranting against Lisp while advocating TRAC in the 1974 book Computer Lib. So far all of the language Xs have disappeared, and the only thing to replace Lisp has been other Lisps.
I find the Lisp variants to be lovely right up until I want to package and ship an actual application to a normal human being. Then they are quite clumsy. I find the same to be true of a number of other more recently popular high level languages. I've always assumed it was because graduate students never got to the point where they ship anything.
I haven't tried doing this in other Lisps, but it's really easy if you're using Clojure's most popular build tool, Leiningen. For JVM Clojure you can produce a single .jar ("uberjar") that you can run as you'd expect, and for ClojureScript, Google's Closure Compiler is used to output a single minified and tree-shaken JS file.
The parent poster was talking about shipping "an actual application to a normal human being". A .jar file is a non-starter, as normal human beings these days don't have Java installed and have no desire to install it.
Don't get me wrong: I love Clojure, and it's currently the language that pays my bills. But I'd never consider using it to build an app for end users (except possibly a ClojureScript React Native app for mobile devices).
How so? In CL you can build an executable (https://lispcookbook.github.io/cl-cookbook/scripting.html#bu...). It's large-ish (because it ships an interpreter, a debugger that we can access from outside, etc.), but it has near-zero startup time, and it works for web apps with their embedded web server.
For ages I couldn't figure out how exactly I would be able to use Quicklisp libraries to create something, and then create an executable from the source code. I think I ended up giving up, after only finding a couple of blogs that were using ancient versions of everything involved. I remember even having trouble loading some of the packages. Overall my experience with writing CL for myself was fun and interesting, but my experience writing it to be used by other people was anything but. There seemed to be too much rubbish to confuse me, as opposed to something either like C (header files + corresponding source) or Python (a directory with an __init__.py is something you can import).
The module system has also turned me away from Scheme: different Schemes all implement, say, R6RS, but they have different module systems, or even different ways of converting an integer to a string, and varying support for SRFIs (the regex SRFI is nearly four years old and the only Scheme to implement it is Chibi). Nothing at all is consistent - none of the modules work in another Scheme, even if you were to extract the code. You want something from Chicken to use in Guile? Too bad, you have to do all the work yourself manually, despite the fact that both modules are supposedly written in standard Scheme.
The module situation eventually pushed me to write my own Lisp, but I realised that unless I were to go ahead and implement all the things I wanted, everything I would miss, I'd be in just the same place, reminded of that xkcd comic in which "there are now 15 competing standards". So I gave up on doing just that, and now I'm literally writing a Lisp compiler and then operating system with a friend, because for me it's a lot less frustrating to say "that doesn't work on my operating system" than it is to say "that doesn't work in this guy's Scheme". It wasn't supposed to make sense.
> For ages I couldn't figure out how exactly I would be able to use Quicklisp libraries to create something, and then create an executable from the source code.
If you use SBCL, which is by far the best CL implementation, it's in the manual under "Generating Executables" [1].
It directs you to this [2] which is basically a single function that you call:
    Function: save-lisp-and-die [sb-ext]
      core-file-name &key toplevel executable save-runtime-options
      purify root-structures environment-name compression

    Save a "core image", i.e. enough information to restart a Lisp
    process later in the same state, in the file of the specified name.
    Only global state is preserved: the stack is unwound in the process.

    <more, detailed, documentation snipped>
What is hard about this? It's literally 10 seconds of work.
I am not picking on you; others run into similar issues that are completely and utterly trivial, which makes me wonder if they have been conditioned not to bother looking things up in a manual anymore and want things to simply fall in line with their expectations.
I am also a Smalltalk user and I frequently observe the same behavior in that domain. New users run into trivial problems that take less than a minute to solve (for the absolute beginner) and give up immediately. I'm almost convinced that there is some sort of psychological effect that comes into play when the task at hand is very dissimilar to what one is used to.
Image-based languages like Common Lisp and Smalltalk offer tremendous productivity gains, but they're also totally unlike anything else out there in many ways. So they require upfront effort before one truly "gets it", and that requires someone willing to suspend their preconceptions and look at things with 'zen mind, beginner's mind'. I think it's fair to say that most programmers today are incapable of doing that. Maybe they're just wired for immediate rewards and/or the overall dumbing down of the domain is also making them dumber. Cybernetic feedback loops are like that ..
LISP always makes me sad. It's a language from which all modern languages could learn so much (and currently a lot of features of LISP get cloned). LISP Machines pioneered basically 90% of what modern computers and the internet do today.
Yet, it's a very niche language, an almost forgotten artifact of time.
I get the impression that there's a sort of disconnect between what mainstream programmers want these days and what the Lisp community is offering. Not only in tech, but also in marketing the ecosystem.
I agree. The fraction of programmers that are tackling really challenging problems that push the limits of what's possible has probably always been small, but it has only gotten (much) smaller. Mainstream programmers don't do those things. They make websites for CRUD apps and maintain COBOL and other "obsolete" systems, solving the same well understood problems over and over. I see people writing compilers for quantum computers in LISP, compilers for automatically synthesizing complex chemicals in LISP, writing a TeX-looking, but completely dynamic scientific/math editor (TeXmacs) with LISP, writing a computer algebra system built on abstract algebra and types with LISP (Axiom/FriCAS), and writing a hyper fast scientific programming language with LISP (Julia). Mainstream programmers are just trying to maintain (and maybe modernize) systems built by their parents and grandparents. They don't need the tools that LISP offers to do that. They (or at least the people that pay them) need tools that turn themselves into easily replaceable cogs.
> need tools that turn themselves into easily replaceable cogs.
Sort of but not really.
Even though this mantra is repeated often, the greatest "cogs" actually have deep domain knowledge. That's how most mainstream programmers "level up". They become experts in the application they're developing and what it does functionally. Or in the domain in which the application operates (after having worked on several applications in the same field, for various companies).
HackerNews folks don't work in this kind of field, but in that domain Lisp is a liability. You want a plain and simple programming language, no meta programming, dumb albeit repeated code that does just what it needs to do. The rest of the brain cycles are needed to understand the often convoluted business logic ("due to this regulation the employee needs to be X", "the account is Y because our partner Zs", etc.).
I don't know if I would call knowledge of regulations "deep". But I agree with the rest of what you are saying. I just want to point out that the problems most effectively aided by LISP are those that require orders of magnitude more domain knowledge about hard, subtle concepts like quantum mechanics, synthetic chemistry and abstract algebra. Those are where Domain Specific Languages become extremely useful. But, thankfully, mundane things like regulations absolutely don't need that overkill.
I don’t know if I would call regulations mundane work. In fact regulations are one of the most convoluted and more often than not ad-hoc pieces of requirements that could most certainly benefit from a DSL.
On the other hand, linear algebra or “quantum mechanics” (not sure what exactly you mean in computational context) do not require DSL. For example, at least in computational chemistry and fluid dynamics things are very much FORTRAN (or C/++) under the hood (see Gaussian, GAMESS). I believe most of the linear algebra is already available for use through higher level APIs/language bindings (see BLAS/Atlas). I am not sure why one may want to learn a programming language for synthetic chemistry unless we have futuristic robo labs doing the grunt work. Perhaps I misunderstood your comment?
The sheer quantity of edge cases, special cases and normal use cases when dealing with these "mundane" entities is a quality all its own. These use cases we automate or help automate are not by themselves all that complex normally. This is where they differ from quantum mechanics, synthetic chemistry, etc., in that use cases in those fields are usually not as accessible to the layperson.
Yes, go deeply enough into those "mundane" entities and you will eventually hit an abstraction stratum high enough that it makes sense to apply a Lisp/functional language/<<insert favorite abstraction tool here>>.
But in the meantime, many times even the domain experts themselves don't realize there even exist higher abstraction levels of their domain. Incrementally getting there with the lower-abstraction-capable languages often better fits their organizations' staffing budgets today. Lisp programmers, by and large, I've found are scary-category smart. Consistently staffing that kind of smart takes bigger payroll budgets than most organizations are willing to stump up.
Today, the tooling around getting "good enough" results in most fields for most projects tends to even out whatever programmer productivity efficiencies Lisps brought to the table, enough to the point where most managers don't want to tackle the higher complexity of managing a Lisp team.
I just listed regulations, but it's not just regulations. It's regulations, business knowledge, various social aspects.
And I doubt there's anything we work on more complex than human society. Even the most complex scientific models pale in comparison to real human interactions, business or otherwise.
I'd be happy to be proven wrong once we have an AI to take care of other pesky humans :)
A concrete example of a use case is F#, which is being used in the structured products business, where complex financial derivatives agreements need to be expressed mathematically in order to be valued.
Where I work, one of the benefits of Clojure is precisely how concise and accessible storing densely layered business logic is. We're able to have faith that the code does what it looks like it does, in a way that just isn't approachable in any other language I know of.
In other words, they are equivalent to the people in the 1500s and 1600s using the new Gutenberg presses to reproduce Bibles. Still waiting for the explosion in computational literacy that has yet to take hold.
Clojure is past its hype phase, but it's certainly not dying. The yearly surveys consistently show more and more people using it professionally. It's used by large companies like Apple, Boeing, and Walmart for important large scale projects, and both the ecosystem and the community continue to grow at a steady pace. I've been working with Clojure for over 8 years now, and I've seen the community grow tremendously over that time.
My view is that Clojure has enough critical mass now that it's not going anywhere. There is a large community that's very active and enthusiastic about the language. There are many companies building their businesses around Clojure, and it's only going to continue to get better.
I don't think Clojure is ever going to become a mainstream language like JavaScript or Python, but that's very different from saying that it's dying.
Clojure, like so many of the functional languages, struggles to attract beginners. I mean for rilz beginners.
I don't mean it's unsuited to them. My company has a developer with one year of Javascript programming experience productively working in Clojure. But, I think often about how hostile FP's vocabulary is to non-experts. (Personally, the baggage I brought from OO made FP incomprehensible for quite some time). Clojure documentation, and most tutorials aimed at it, for example, take for granted newcomers' comprehension. If we could measure the average number of languages a programmer brings to the table when he learns Clojure, I'd be willing to bet it's more than two.
Ruby and Python on the other hand are languages where beginners thrive! Not only are many of the language concepts self-evident, the communities are often driven by recently graduated beginners spawning beginner-level documentation, beginner-level sympatico, etc.
I think the rule is the more mathematics background you have the more natural functional programming will feel. Most programmers didn't take math electives in college.
I and a few others in my workplace are Clojure beginners. It's really not hard to pick up once you suck it up and stop bitching about parentheses. I don't think the language owners are terribly concerned about picking up beginners though, for all it seems they may actually like Clojure's reputation as belonging to Serious Programmers.
My biggest beef is actually with all the lisp true believers who wax on about their moment of clarity when all the pieces fit together and they realized that the universe is written in lisp. So far for me, it's just code.
As far as jobs are concerned I'm not sure Clojure was ever alive. Search the Indeed.com API for title:Clojure and Location:London and you'll find 10 Clojure jobs. Extend that to Location:United Kingdom and you'll find the same 10 Clojure jobs, ie. nothing outside London. For other countries it's: France:0, Germany:7, Netherlands:3, Spain:1, Italy:0, Japan:0, China:0, United States:22, Canada:2 . Elixir seems to have suffered a similar fate. In fact the only functional language which has any real adoption is Scala but even here the numbers are patchy and well below the figures for Ruby which some say is in decline.
I think the best we're ever going to see is mainstream languages like Kotlin, Swift, Ruby and Python which mix functional features with OOP. In fact Scala really falls into this category so it seems OOP is here to stay.
The whole point of Lisp is competing with 500 blub programmers using 5 Lisp programmers.
I wouldn't measure Clojure's success with those kind of numbers. The number of applications running Clojure code would come close to a real metric.
The whole point of Java and Python is to provide VPs with massive departments to bloat their egos with head counts. Clojure or Lisp don't aim for such goals.
I find it odd to see Java and Python lumped together. The companies I've seen using those respective languages (by job listings) seem to be quite different. I'm a Python developer and I've only ever worked in small, fast moving teams (biggest team I've worked in had 4 developers).
You must not be in SF or NYC. Python and Ruby (and even Node, to an extent) have supplanted Java as the language of fungible mass-market headcount du jour.
Anecdotally I've seen quite a few job postings recently mentioning Elixir. That it can be seen, along with Phoenix, as a more performant, easier to maintain successor to Ruby/Rails gives it some legs imo.
Beyond the superficial syntax similarities and the fact that Jose Valim was a major Rails contributor I really don't get this Elixir = Ruby++ thing. The languages are poles apart, based on opposing paradigms (FP vs OOP) and data structures (immutable maps/lists vs mutable objects).
It's more about Phoenix than Elixir alone. The language is approachable enough, although clearly different, and the frameworks are similar enough (router, controllers, templates, gem -> hex, etc.) that a rails dev can become productive in this environment relatively quickly. And, the effort to do so comes with a number of serious benefits.
I am not sure about dying. Any time I need to use Clojure I have all the libraries I need, and if I don't, it takes me very little time to write a small wrapper around the Java library that provides the functionality. Saying Clojure is dying is the same as saying Java is dying. There are unmaintained libraries, no doubt about that, yet it does not mean that the entire language is dying. Java 8 was a great improvement for Clojure and I expect more coming down the pipe in later Java releases.
A language cannot survive if it doesn't compete for adoption in the software industry, i.e. paid jobs, and with so many languages competing for adoption, Clojure's lacklustre marketing has cemented its place in the history of also-rans. That doesn't mean Clojure isn't a great language. In fact I'd say it's the best language we have today.
Language viability isn't a zero sum game. What you ultimately care about is that the language is actively developed, and that there is a viable ecosystem around it. Clojure has both those things, with plenty of companies using it to do real world work.
You seem to be right. Every metric I can check says it is going downhill. Too bad, I used to go to the Clojure meetups in my area, I even gave a talk about Clojure + Emacs.
Many reasons. S-expressions would be the least of my worries. Java is dying, not on the whole, but certainly in the minds of people who are picking new languages. Its integration with OSs, which Clojure depends on, is arcane by today's standards. Second, it's tightly controlled by one person, and they happen to hate types. Similar to CoffeeScript: a couple of years after the language's inception, people figured out that they missed types. Adding them back in is super tough.
And third, abstractions have costs. There is no widespread popular LISP, and people should really ask themselves why. We have lots of code generation, and depend on it more and more. But it's "simple" (or "primitive"). It's usually one step. You can read the input, and you can read the output.
I’m starting to get a sense that macros are like currying. Elegant, beautiful, powerful. But net negative if your concern is to get lots of people to create software together.
Re macros, I had a similar thought this morning. Except for code instrumentation and boilerplate generation I've actually never found macros to be useful.
ADT DSLs + free monads or similar abstractions give you everything you need, in a more principled way.
> Java is dying, not on the whole, but certainly in mindset of people who are picking new languages
Even as a whole it's slowing down heavily now; even Android is switching to Kotlin. The language is already not that trendy, and Oracle's aggressive lawsuits are making it even worse.
I did not say that it's not still used heavily, just that it's slowing down. And C is second on that list, so I'm not sure I would believe this ranking; I have not seen many C jobs for a while now.
Is it really? IoT still isn't taking off much, and the more powerful the machines get (so, every year), the fewer people are going to use C for the one tiny segment it's still used for.
Where I used C on a microcontroller 10 years ago, you would likely use a cheap Android board nowadays.
Look around your home. Right now. How many things have software in them?
How many things don't you see in your home? Everything in your home that doesn't itself include software was manufactured and packaged by machines that do.
My employer makes components for industrial automation. Have you ever thought about it? Nobody does, unless they have direct contact with it.
Building automation? Special machinery (huge presses or packaging machines). Whatever.
And re: your Android board: come back when your bricolage conforms to all kinds of requirements. Extended temperature range. Vibration. Electromagnetic compatibility (it's easy to accidentally broadcast in the naval emergency band). Etc. etc.
> Look around your home. Right now. How many things have software in them?
Not as many as you think; apart from the routers, computers and phones, the only other thing I can think of is the dishwasher, because it's fairly recent. And I suspect the new ones just include a cheap Android board.
> And re: your Android board: come back when your bricolage conforms to all kinds of requirements. Extended temperature range. Vibration. Electromagnetic compatibility (it‘s easy to accidentally broadcast in the naval emergency band). Etc. etc.
I've worked at a company which manufactured its own board; that is much harder to do yourself than buying a board that has been produced in the millions, where all those issues were already solved. You can't compete with that easily. We needed at least 5 iterations to solve the electromagnetic and heat issues; all of that comes for free in a mass-produced board.
> Not as many as you think; apart from the routers, computers and phones, the only other thing I can think of is the dishwasher, because it's fairly recent. And I suspect the new ones just include a cheap Android board.
I suspect you’re thinking at the wrong level. Your washing machine definitely does have software even if it’s not “recent”. When you press the buttons or turn the dials and it does stuff, even if it doesn’t have pretty graphics on the display, that’s software.
If you have a microwave, that has software, if you have a car that has a ton of software. Your car keys and your credit cards probably have software.
Battery charger. Camera. Camera lenses. TV. Blu-Ray player. HiFi/audio system. Radio. Dimmable lamps (probably). Electric toothbrush. Shaver. Possibly electric kettle. Blender. KitchenAid. Telephone (the wired one). Possibly door opener. Rice cooker. Garage door opener. Coffee machine. Toaster.
You post a lot of great submissions to HN, but if you continue to be uncivil in comments we're going to ban you. This is the second time I've had to warn you in as many days. Not cool!
C jobs are very rare, and they're certainly not 14% of programming jobs. Putting Java and C on the exact same level is a good sign that something is wrong with what they are doing.
C jobs certainly account for much more than 14%. Believe it or not.
Most software developers don't hang out in internet forums, commit to GitHub or write blog posts about JavaScript frameworks. They just do their day job and go home to their family.
It most definitely is large. It's not just about all of the recent IoT devices. Firmware for all of the controllers on your motherboard is written in C. Most device drivers are still written in C. Many operating systems (linux) are written in C. Then you have all of the controllers in the devices in your home (e.g. microwave, oven, digital clock, entertainment devices, etc.)
Feel free to show me a link to a site with a better methodology to support your claim. Otherwise I'm going to continue believing you lack evidence to support said claim and are therefore wrong.
> Every metric I can check says it is going downhill.
I've seen a fair number of metrics that show it's losing market share on a proportional basis, but holding steady on an absolute basis. Have you seen anything to contradict this?
Shouldn't the proportion be your concern (assuming you actually have a horse in the race)? Steady absolute numbers basically indicate that nobody new is choosing it, which is just the first step to a decline of those absolute numbers.
Having just learned Clojure, I'm kinda sad to say that I agree... It kinda seems like there are a lot of unmaintained libraries and relatively few things being worked on (mostly in the web-tech space which I don't really know anything about)
I've come across my share of unmaintained libraries in Clojure, but for anyone reading, lack of activity doesn't necessarily mean that a library is unmaintained.
In JS in particular, activity is often used as a proxy for the "maintainedness" of a library. I think this is because JS is (currently) a moving target.
Clojure is one of those languages where, occasionally, problems get solved and don't require any significant further development.
Was about to say the same thing. It took a little while to get used to the idea that seeing `Last updated 2 years ago` didn't mean the library was dead. If I ask in the slack channel about a library I'll very often get a message back in a matter of minutes from the author confirming that it still works.
For me personally, I gave up on Clojure because of the JVM. Had there been a Clojure, supported by its main author and community, that compiles to native code, it'd probably be my main language today.
My concerns about the JVM aren't so much CPU, but memory usage and ease of distribution. Those two factors have, IMO, helped take golang where it is today. Today you cannot run your Clojure apps on a small 250 MB VM, and distributing it means getting the right version of the JVM, the right libraries etc...
I don't know how Graal will handle those two issues, it'll be interesting to watch.
That's really an interesting project, thanks for sharing!
Now about all the other Clojure features...immutable, persistent collections by default, built-in async library, spec, standardized way of handling state and concurrency, and many more...
Modern languages have learned a lot from Lisp. In fact they've stolen just about every major feature from it, except for s-expr syntax. Which should tell you something about s-expr syntax.
Any language adopting s-expr would instantly be called a Lisp.
Arguably, the s-expr syntax is what enables the rich code-as-data metaprogramming that you find in Lisps. Working with full macros in a sexpr language is already complex enough; the extra layer of complexity that non-sexpr languages add really breaks the camel's back. The point is, you really need code == AST parity to make a powerful macro system workable. You don't necessarily need the parens, but complaining about parens is really missing the point.
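To make the code == AST point concrete, here is a tiny Common Lisp sketch (the macro name is invented for the example): a macro receives its arguments as the plain lists the reader produced, so it can inspect and rearrange them like any other data.

(defmacro show-form (form)
  ;; FORM arrives unevaluated, as the list the reader built from the source text.
  `(format t "I was handed the list ~S; its head is ~S~%"
           ',form ',(first form)))

(show-form (+ 1 2))
;; prints: I was handed the list (+ 1 2); its head is +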
As it turns out, you can have a powerful macro system without s-expr syntax (e.g. Scala, Nim). In any case, you just need to write functions that operate over trees, which you can do in any language. That this tree is not immediately reflected in the actual syntax is just a minor hindrance.
What problems do you run into? I'm familiar with Lisp's macro system but didn't get to look into hygienic macros in other languages, and was wondering if they are equivalent.
It tells me that the vast majority of software developers do not want to read or maintain code in raw s-expression format, for the same reason that the vast majority of software developers do not want to read or maintain code in machine code. S-expressions and machine code run, but when the argument is "you won't see the parentheses after a while" then clearly the parentheses shouldn't be there in the first place. For example, normal people find "a + b" easier to read than "(+ a b)", because they receive 12-18 years of training in infix notation AND practically all materials with math (books, papers, etc.) use infix. Lisp's current s-expression syntax is simply unacceptable to most software developers.
Of course, s-expressions have a big advantage: homoiconicity. Homoiconicity enables the metaprogramming that is such a powerful and relatively unique capability of the Lisp family. But it's possible to add additional abbreviations to s-expressions to make the notation much easier to read by most software developers. Indeed, the original Lisp didn't have "quote", but no one would be happy today if you had to write (quote a) instead of 'a. (See the "LISP 1.5 Programmer's Manual" by McCarthy et al. of 1962 at http://www.softwarepreservation.org/projects/LISP/book/LISP%... ... and note that it has only QUOTE.) I think adding a few more abbreviations to the s-expression reader can make a big difference in the readability (and acceptability) of Lisps. E.g., "{a + b}" as an abbreviation for "(+ a b)" retains homoiconicity, while making code significantly easier to read for most people. For more information, see: https://readable.sourceforge.io/
I am tired of seeing this programmer's meme constantly being repeated.
Superfluous parens:
Do not assume that the parentheses are superfluous; they are an excellent way to write down and communicate a tree-like structure. Code happens to be such a structure.
With regard to readability, this probably reveals initial training in an Algol-syntax language. It is really not self-evident that s-exprs are harder to read, and I find it hard to believe that it is true.
If it is the case for you, it more likely means the notation is not leveraging years of brain-training on Algol/C-style notation. Do not mistake this for anything intrinsic about s-exprs.
To draw on personal experience: I have taught both Lispy and Algoly languages to complete programming novices and have anecdotally seen fewer syntax-related mistakes in the former. The benefit of pre-learned infix semantics from mathematics courses is vanishingly small compared to the complexity of learning how to program. Unless your domain is actual mathematics (R, Matlab, ...), only a very small set of expressions in any program actually leverages that familiarity. The benefit in teaching/learning s-expr syntax is that it is comparably uniform: whitespace separates, parens end. Infix languages introduce a set of context-dependent symbols. For example, a good portion of the errors I saw first-timers make in class was mixing commas versus semicolon breaks. Even Python does this in the form of complex line-breaking rules mixed with symbol breaks (semicolon expression separators, colons in conditionals, etc.). It isn't hard when you are already familiar, but you definitely notice the complexity when holding the hand of a first-timer.
In short, the 'not seeing parens' is akin to not 'seeing commas, semicolons, colons, braces, parens, backspaces, ...' in Algol-syntax languages.
But honestly, things that are different should look different. Leveraging familiarity is a vital tool when designing a consumer-facing interface, but avoiding faulty preconceptions and ambiguities is more important for the professional tool that is a serious programming language. I personally think in C when handling low-level memory, Lisp when doing metaprogramming, Prolog when doing logic programming and something APL-like when doing linear algebra.
> Superfluous parens: Do not assume that the parentheses are superfluous; they are an excellent way to write down and communicate a tree-like structure. Code happens to be such a structure.
Actually, the people saying "you won't see the parentheses after a while" are a significant subset of advocates of Lisp, who know the language well! Here's a quote from the old Common Lisp FAQ: "After you've written and read enough Lisp, you stop seeing the parentheses. (Reports vary from a few days to a few weeks.) They don't disappear in some magical way, but you start to see the structure of the code rather than just 'lots of fingernail clippings'."
We all agree that it's important to be able to see the structure of code. But if your goal is to not see the only marker with important information, then there's a problem.
> Infix languages introduce a set of context-dependent symbols.
That is not required at all. That conflates infix with precedence. What developers want is infix, not necessarily precedence. In practice many developers avoid relying on precedence; in fact, a large percentage don't understand the precedence rules of the language they're using at all. In Algol-like languages, they just use parens to force the evaluation order even when they are not necessary.
If your infix system doesn't support a built-in precedence, there are no context-dependent symbols. For example, in curly-infix, {2 + 3 + 4} => (+ 2 3 4), but there is nothing special about "+". The expression {2 qwe 3 qwe 4} => (qwe 2 3 4). If you want precedence, you use another pair of curly braces to directly express it, just like you would in an Algol-like language: {2 + {3 * 4}} => (+ 2 (* 3 4)), while {{2 + 3} * 4} => (* (+ 2 3) 4). As a result, there's no dependence on context-dependent symbols, and you DO get infix.
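To make the mechanics concrete, here is a bare-bones Common Lisp reader-macro sketch of that rule; it only handles the repeated-operator case described above, while the real readable-lisp notation covers much more.

(defun read-curly (stream char)
  (declare (ignore char))
  (let ((items (read-delimited-list #\} stream t)))
    (cond ((null items) nil)
          ((null (rest items)) (first items))      ; {x} => x
          ;; {a op b op c ...} => (op a b c ...) when the same operator repeats
          ((and (oddp (length items))
                (loop for i from 1 below (length items) by 2
                      always (eql (nth i items) (second items))))
           (cons (second items)
                 (loop for i from 0 below (length items) by 2
                       collect (nth i items))))
          (t (error "Malformed curly-infix expression: ~S" items)))))

(set-macro-character #\{ #'read-curly)
(set-macro-character #\} (get-macro-character #\)))

;; '{2 + {3 * 4}} now reads as (+ 2 (* 3 4)), and '{{2 + 3} * 4} as (* (+ 2 3) 4).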
"They first measured the visible source code of a number of large C programs... only 1.9% of all expressions had at least two binary operators (where precedence would make a difference)... In those cases where precedence could have been used (the 1.9% of all expressions), 67% (102,822/154,575) of the operator pairs were explicitly parenthesized (making any precedence rule moot)."
"The authors then described a survey of developers at the 2006 ACCU conference to determine if they correctly applied the precedence and associativity of the binary operators common to C, C++, C#, Java, Perl, and PHP. In this experiment only 66.7% of the answers were correct (standard deviation 8.8, poorest performer 45.2% correct, best performer 80.5% correct); this was not much better than random chance (50%). Even many widely-used operator pairs had relatively poor results... (only) 69% when they combined / and +. These were far short of the 100% one might expect. These developers ranged from 5 to 25 years of professional experience, and the more-experienced developers did not do better (!). ... these results clearly suggest that precedence rules may harm, instead of help, the process of developing correct code."
> it's possible to add additional abbreviations to s-expressions
It is, and infix notations have been added many times, going back at least to Vaughan Pratt's CGOL in the 1970s. Yet none of these notations have ever caught on very much among Lisp programmers.
I understand that the syntax is a barrier to newcomers, but there's no point in arguing that it should be changed, because one really does get used to it, and even to like it. You might as well tell the Mexicans (or Indians, or Thai, or Chinese, etc.) not to make their food so spicy. Yes, it takes some getting used to, but to change it would be to ruin it.
Maybe it's valid to think of Python (and Javascript, and R, etc) as a DSL, where the Domain is very large, almost as large as Lisp itself, but not quite, just missing the elegant metaprogramming. A DSL for "easy-to-use coding by non-Lispers".
To my knowledge, every infix operation in R has a corresponding function. Together with the pipe operator, this is really, really nice, and is kind of similar to what you propose, but perhaps the other way around, so to speak.
I feel like this helped me understand the power of "everything is/as a function" to the point of wanting to go back and try lisp again.
A sibling post suggests making a Python-esque DSL on top of Lisp, and I really like that idea.
I'm disappointed by this paper. The general review of the language is of the sort that you can find a dime a dozen on LISP enthusiast sites--so this paper does little to effectively recruit existing programmers to the cause. On the other hand, the references to current bioinformatics projects using LISP is helpful as an index, but provide few details to convince a biologist that they're worth checking out--so I'm not convinced on that side either. The "key point" of a forthcoming BioLisp is embarrassingly glossed over.
I think the paper would be much stronger if it focused on that BioLisp aspect, talking about:
* What is BioLisp? A mere web portal pointing to various resources across various languages? An attempt to create a curated installation of version-compatible libraries for a specific LFL? An attempt to develop a fully-integrated framework for that LFL??
(I think Sage for Python might be a useful model here, but if you were trying to build that, you'd have to start by picking your LFL, which the authors at present seem unwilling to advocate. That in itself demonstrates one of the main liabilities at present in adopting LISP--the community is fractured between Racket, Scheme, Clojure, and various implementations of Common LISP--often to the extent that they argue over whether their respective opponent languages even _deserve_ to be called LISPs. And, of course, in this forum we even have Arc fanatics to add to the lot. ;-) )
* What challenges face biologists attempting to use LISP in their work? How does BioLisp help to address those? ("Your language lacks macros" is not an argument at a sufficiently detailed level, I'm afraid. Rather, actually show them a DSL built in LISP that they will desperately want!)
* For that matter, what challenges face biologists attempting to use R/Python/Julia/etc for their work? How does BioLisp help to address those?
* What promising pieces could BioLisp be built upon? What pieces are missing? What pieces need modernization?
By all means, I encourage scientists and LISP enthusiasts to make the case that biologists should/could be using LISP--but make sure you actually make the case!
Great writeup on specific advantages of Lisp languages! I have used Lisp on two long term medical and bio information consulting projects (one also used semantic web/linked data tech) and I could not agree more that building Lisp up to be a new language for applications, repl based development, and the expressiveness of Lisp languages are all huge wins in building knowledge intensive systems.
EDIT: I will add that although I enjoy using Lisp languages, and have written a few Lisp books, I don't like to talk anyone into using any particular programming language or technology stack. There is a difference between enjoying our own choices and telling other people what choices they should make.
In ancient history I used FORTRAN. For machine learning, which is my job, I use TensorFlow using the Keras APIs. There are a few interesting machine learning libraries in Common Lisp but I don’t use them professionally. I am experimenting with saving trained Keras models and converting them to a format with runtime code for Racket Scheme - I might open source that if I can improve the performance.
I had previously toyed with numerical prototypes in typed Racket but could never get it fast enough. But, a typed dialect which transpiles to an OpenCL kernel might work.
> "Lists, which are a generalization of graphs, are extraordinarily well supported by Lisp."
I always thought of linked lists as a special kind of graph, where each node except the head and tail had exactly one incoming edge and one outgoing edge. What's a better way to think of this in terms of LISP?
Internally a Lisp list is a binary tree in which left children are leaves storing a value and right children aren't leaves (except the terminus) and don't store values.
Or rather, there is a structure called a cons cell which consists of two pointers, "left" and "right"; a cons cell in which the right pointer is NIL is a list; and a cons cell in which the right pointer points to a list is also a list. The values in the list are whatever is pointed to by the left pointers.
If you diagram this out, you'll find that your mental image is basically accurate already. It seems difficult to describe lists of this form as a "generalization" of graphs.
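A quick way to see the cons-cell picture at a Common Lisp REPL:

(cons 1 (cons 2 (cons 3 nil)))   ; => (1 2 3), a "list" is just chained cons cells
(car '(1 2 3))                   ; => 1, the left pointer of the first cell
(cdr '(1 2 3))                   ; => (2 3), the right pointer: the rest of the list
(cons 1 2)                       ; => (1 . 2), a lone cons cell whose right slot is not a list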
All these comments about how Lisp missed the boat, and you're all wrong. We missed the boat. Lisp has been going strong among its own the whole time. It's not their fault we're all so stubborn, punishing ourselves over and over again like some sick masochists.
I got mad when I finally learned Lisp and realized how much time and effort has been wasted over the years because we're too lazy, ignorant, or foolish to use it.
(As a tangent, it's simply incredible how deliberately ignorant programmers can be. I've worked with guys getting paid to write software who didn't know who Alan Kay is, or had never heard of Prolog. What's worse is they're not ashamed of their ignorance. Can you imagine a physicist who had never heard of Newton?)
I don't use Lisp, but I have the grace to admit that that's a personal failing. (I joke that Python's only major problem is that it ruins you for other languages... I just can't quit that sweet syntax. Although Python 3 is a train wreck IMHO. Don't get me started.)
I would switch to Lisp, but I found something even better, which brings me to my actual, non-ranty, point:
There's a language even better than Lisp.
It's called Joy and it combines all the best parts of Lisp with the best parts of Forth to make an enormously simple system for describing software. It's a purely functional, stack-based, "concatenative" language that turns out to be good for "categorical" programming: programs in Joy can be "instantiated" over different categories to generate different kinds of computations. I only have one good reference for categorical programming right now: a paper and talk by Conal Elliott, "Compiling to Categories", February 2017: http://conal.net/papers/compiling-to-categories/ He's working in Haskell, not Joy, but he's describing the idea: from one piece of code you can get calculations, dataflow diagrams, circuit descriptions, type signatures (inference), derivatives, etc., by implementing the categories for each kind.
As elegant as lisp is, I think the flexibility just kills its usability on teams, while more rigid languages like Java, Go or C see massive adoption. Language expressiveness is rarely a real bottleneck, compared to debugging and understanding program flow and getting new contributors up to speed.
Separately I think each lisp has an individual fundamental flaw that makes it unusable for production, eg clojure having terrible error handling, common lisp being case insensitive, scheme having a minimal standard library, racket being fairly slow, etc.
Is case insensitivity really a flaw? Other than some weak namespace hinting (constants in ALL CAPS for example) what is the flaw of your language being case insensitive for everything but string handling?
Would C be ruined if people could type:
If ( expression ) { effect };
I'm also becoming unsold on file system case sensitivity as well. Should "Income Report 2018Q1" really be different than "Income report 2018Q1"? How often is this a desirable feature?
The only reason I have now for maintaining case sensitivity is that in the Unicode world casefolding is an absolute nightmare and no language or file system built today can afford to not use Unicode.
I tend to find it very easy to debug CL code. In fact, CL is designed to be introspective: "inspect", "trace", "break", "disassemble" and "step" are part of the language.
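For instance, a minimal REPL session (FIB is just an illustrative definition):

(defun fib (n)
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

(trace fib)          ; every call to FIB is now reported with its arguments and result
(fib 5)              ; => 5, with the whole recursive call tree printed
(untrace fib)
(disassemble #'fib)  ; prints the compiled code; output is implementation-specific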
The paper itself doesn't focus on Machine Learning / AI or anything too specific for that matter (it reads like a review / praise of LISPs), but I am currently researching Symbolic Artificial Intelligence as well as let's say the history of "Good Old-Fashioned Artificial Intelligence" and the role LISP took in all of that.
Any books / papers / articles that one should definitely read on these topics?
What are the pros and cons of Lisp (or any other language for that matter)?
In my undergraduate days (oh so many decades ago), I was introduced to Lisp. For some of the problems we were solving, it was the perfect tool. For others, a different language could be used.
The pros of Lisp (in all its varieties) are in many ways the cons of Lisp.
The macro system is both beneficial and antagonistic to the development of sane programs. Used judiciously, macros may make writing code much easier. However, it is not uncommon for Lisp programmers to write macros without consideration for those who come after them. This is, of course, true for any language.
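As a sketch of the judicious end of the spectrum, a small boilerplate-removing macro (the name WITH-TIMING is made up for illustration, not a standard operator):

(defmacro with-timing (&body body)
  "Run BODY, printing the elapsed wall-clock time afterwards."
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1 (progn ,@body)
         (format t "~&Elapsed: ~,3F s~%"
                 (float (/ (- (get-internal-real-time) ,start)
                           internal-time-units-per-second)))))))

(with-timing (sleep 0.1))   ; prints something like "Elapsed: 0.100 s"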
There are some major systems out there that are written in Lisp and when trying to change them (in a maintenance way), it is just as painful as in pretty much any other language.
The competent and talented Lisp programmer is imbued with the same arrogance that competent and talented C++ programmers have. Tricky programming is fine if it is fully documented and the reasoning for the tricks is fully explained.
Lisp and its variations have much to offer the programming community. But the way programs are written quickly diminishes the effectiveness of the language for the larger community.
Each language has features that can be of use to each of us. They are tools we can use to solve the problems we are facing. No one language is suitable for all problems. If you have that attitude then you only see hammers and nails.
We have reached the point where saying "pick the right tool..." usually means the complete opposite: PHP for web servers, Python for large applications, Go or anything C-like for business logic.
LispWorks offers several versions of their free IDE with some limitations (e.g. limited heap size), most of which are irrelevant if you just want to try/learn Common Lisp.
I have limited personal experience with it, but it seems good and I have read good things about it, written by free users.
I wouldn't depend on an IDE for learning Lisp. Just use a text editor you are comfortable with. DrRacket is pretty good for Scheme. I also like repl.it if you are into online editors.
You really don't need to start writing macros before you need them. Just focus on learning Lisp. Much simpler.
Lisp is part of the family of languages that J. Backus referred to as 'transformational languages'. Its ability to self-modify creates programs that are almost impossible for people other than the author to understand. This is why after 50 years it is still a niche language, and will remain so. There is little demand for programming languages that are hard to maintain. A working program lasts decades, and passes through many hands, and LISP is dead-end technology that will forever keep being brought up as its temptations are irresistible to some.
This is one of the eternal debates in software engineering.
Knowing that certain powerful features of programming languages can be used to create unmaintainable code, do we ban those features or do we try to harness them by using a disciplined approach to software design?
Specifically with regard to meta programming, I think we have seen all that can happen several times over.
If you have those features within the programming language they tend to get overused. If you don't have them within the programming language, they get tacked on in some inconsistent, half-assed way as soon as the need for frameworks, DSLs or sophisticated configuration arises.
> they get tacked on in some inconsistent, half-assed way as soon as the need for frameworks, DSLs or sophisticated configuration arises.
Can confirm: if you can't metaprogram in the language, you will at some point write external code generators and such... Metaprogramming is just the most common programming "task", avoiding repetition, applied to itself.
Well, everything I've read about this subject basically shows that ever since we stopped congregating in families and tribes, scaling people effectively trumps any kind of individual scaling. The Kingdom of Benin probably had some amazing individuals we'll never read about, but once the much more organized Brits came around, the stronger organization beat the other one.
Scaling individuals gets you output at the quality of the very best individuals. Scaling the people gets you the best that the average of the people can deliver. The vast majority of useful and highly complex software, at least in its initial formative stages, is produced by one or very few individuals. Later come the masses of "me too, I need this". There will always be a need for languages in which you can be super effective and productive as an individual. Java is a great engineering language, but anyone who thinks their personal itch project is best served by that language doesn't know anything other than Java.
Software advice from the unlikeliest of sources (Stalin): quantity has a quality all its own.
The architects will be constrained by Java, but they can use it nonetheless. And instead of metaprogramming with Lisp they can metaprogram with humans. You'll need them anyway, as the application grows, and like this they get to know the code better, since they write it.
> Knowing that certain powerful features of programming languages can be used to create unmaintainable code, do we ban those features or do we try to harness them by using a disciplined approach to software design?
Discipline never works unless it's enforced. So the only hope is to figure out "less powerful" versions of those features that cover all their important use cases without permitting unmaintainable code. Fortunately, modern programming language design has gotten pretty good at that: 95% of the killer use cases you'd see as macro examples in Lisp advocacy 20 years ago are now ordinary, standard-ish programming language features.
> If you have those features within the programming language they tend to get overused. If you don't have them within the programming language, they get tacked on in some inconsistent, half-assed way as soon as the need for frameworks, DSLs or sophisticated configuration arises.
That's a possible failure mode, but we've been pretty good at moving away from it IME. The best modern languages do manage to walk the fine line between a language that's too constrained to do anything in without some magic addon and a language that's unmaintainably flexible in the language proper.
> Knowing that certain powerful features of programming languages can be used to create unmaintainable code, do we ban those features or do we try to harness them by using a disciplined approach to software design?
I don't have any example where the disciplined approach worked, so I would agree with the first option. People have overused every single bad feature; here it's Lisp, but it's the same in PHP, Perl, JavaScript or Python.
In Clojure, macros are not overused IMO. Even though they are fully supported, the community encourages you to use them only when absolutely necessary.
“I don’t need macros, they’re too complicated and not useful,” says the programmer as they use Flow with JSX with Babel with two dozen plugins and maintain two hundred line webpack configs for code with machine-checked comments that parses CSS in template strings at runtime.
If you think that is why Lisp is a "niche language" then you should check out C. It has a "preprocessor" and you can do all kinds of things that make it hard to read your code, yet it is by any metric very popular, which provides a counterexample to your claim that "There is little demand for programming languages that are hard to maintain." Check out [1] for a few examples.
Virtually nobody these days advocates using the C preprocessor to do anything more complicated than defining some constants and including header files. History has shown that it is more trouble than it is worth.
I don't know if it is fair to say that Lisp meta-programming is more core to the language than CPP is to C.
Only a smart programmer can write Lisp. A language's popularity is inversely proportional to the cognitive capabilities it requires. The story is the same with Haskell.
I think when it comes to C, there are no smart programmers.
Everyone is like a monkey with a grenade. The very few people who have proven to be exceptions to this can be counted on one hand, and thus do not invalidate the premise.
It's an empirical observation. Compare the popularity of languages, and then compare the "difficulty" of using them.
PHP, JavaScript, Java, C#: all very popular.
Haskell, Clojure, Lisp, F#, APL (and derivatives): all very niche, and they require some knowledge that one might consider "higher level" than required for the former list.
C is rightly in decline and the preprocessor is a big part of the reason. A series of historical accidents made C popular despite its preprocessor, not because of it.
>>This is why after 50 years it is still a niche language, and will remain so.
The bigger problem here is that the number of programmers has increased disproportionately over the years. For that reason alone, technologies had to be dumbed down to a point where anybody could use them.
If programming kept its original ethos, it would suffer the same problems one suffers taking a subject like Math mainstream.
But the real problem here is just starting now. You now have a whole generation of programmers who only do XML/JSON parsing, talking to HTTP interfaces, and HTML, and nothing more. This is like doing basic algebra your whole life. Eventually you won't get much out of that sort of practice.
At some point, to avoid capping the value generated by a system, you have to up the level of the game, and that comes with the painful task of retraining entire generations of programmers, which will not be easy and in many people's case won't be possible.
Already languages like Java have shown their limits. To build anything meaningful you need large frameworks, with crazy dependency graphs, to manage which you need yet another framework (DI). And with all that you barely get to do the ordinary stuff.
So eventually you would have only delayed the use of something like FP, not entirely eliminated it.
>>There is little demand for programming languages that are hard to maintain.
Java applications with 100 classes to post to REST interfaces will eventually suffer the same fate.
'Maintainability' is a very subjective term.
>>A working program lasts decades
Not true anymore. This was true pre dot-com days. Thanks to all these agile processes and the 'fail fast' culture, code bases have very little life these days.
>>LISP is dead-end technology that will forever keep being brought up as its temptations are irresistible to some.
The success of Clojure means this is totally false.
Programming is not easy. It's at least as difficult as the things you try to do with it. A bit like math: you can't make it simpler, because you have to work with the reality it models.
The thing with languages like Python is that they lower the bar for entry, but they keep you at the beginner stage for your whole stay.
That's OK. Programming isn't a monolith; we already have several layers of programmers based on the complexity of the tasks they solve. So Lisp will have its own place, as will languages like Haskell and F#. It's also up to the programmer to decide at which layer they like to work.
It's a language with a gentle learning curve, yes, but a long one.
Don't confuse the fact it gives you quick productivity with being shallow.
Lisp is especially deep, so I won't dare compare the two, but Python has way more under its belt than its reputation suggests. Which is great: only learn what you need at the moment you need it. Then curiosity can take you further if you wish.
Your comment assumes that all programming is high tech. As a commenter from Reddit said: all programmers dream of painting the Mona Lisa yet most of programming is painting houses.
In the software world, you either level up your tech skills or you level up your domain knowledge skills. Both are lucrative career paths. The second one can be easily covered with Java and it will continue to be so for the foreseeable future.
Languages don't create unmaintainable code, people do. I don't get why there's such hate against Lisp. (Granted, as others have noted, the author of this paper obviously drank from the Lisp kool-aid fountain)
> Languages don't create unmaintainable code, people do
Have you ever seen so-called academic code? If your developers are PhDs without a programming background, you're bound to get programs that reflect the complexity of their own thought process plus the subject they are dealing with. I don't think encouraging that through something as unbound as Lisp is a good idea, as much as I love how unbound Lisp is.
It's possible to write horrible Python code. But it's way harder.
Even without an editor, you are forced to indent. You can't use macros or write huge anonymous functions. The whole language is designed around readability.
Honestly, a PhD is the culmination of a very complicated thought process. Why wouldn't the program resulting from research be non-trivial to understand?
Being able to apply the resulting work to the real world is very hard, because for a programmer - you have to package it up in a way that's easily digestible to people who don't want to have to go through the process of reinventing the wheel from scratch in order to understand it (or just don't have the time to).
Lisp code doesn't have to be complicated with a billion bells and whistles to be good Lisp code. If you approach learning it from there, you build your own; you don't have to compare it to PhD-level research. It might one day get to the point where no one can understand or wants to understand what you've created, but that's the other side - the isolation that comes with being an academic.
> Honestly, a PhD is the culmination of a very complicated thought process. Why wouldn't the program resulting from research be non-trivial to understand?
The subtlety here is that there is the complexity inherent to the problem domain, which is what you are referring to, and code complexity that arises from translating that complexity to a programming context.
Environments like SciPy, Matlab, R, and Julia get rid of a lot of the latter while still forcing certain conventions to follow that make the code easier to read for outsiders, leaving the complexity of the algorithm itself.
And I probably should have used Julia instead of Python as an example, since it aims at, and IMO succeeds as, a "goldilocks" language that combines a lot of great ideas from all the aforementioned languages plus Lisp[0]. IIRC, it is sometimes jokingly referred to as "secretly a Lisp" because part of the compiler is written in... I think Scheme?
I disagree. As Lisp is more moldable you can get to your point more easily, in a clear manner that is not obfuscated by the language limitations, syntax and constructs, and it's easier to refactor (you have less code to refactor).
For example, in Python we have context managers (with statements); they're cool, but we can't do much inside them. I wanted to use one to add logs to a list and automatically return it from my API. I couldn't, so I had to do that manually everywhere.
Now a new developer arrives. In Python they would have to learn the base style, function names, the use of our custom context manager, plus the need to manually feed and return the list of logs. In Lisp they would learn that our context manager (a macro) handles the logs.
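Something like the following sketch is what I mean; all the names are made up, it's not from any library. The macro owns the log list, so callers never have to thread it by hand:

(defmacro with-collected-logs ((log-fn) &body body)
  (let ((logs (gensym "LOGS")))
    `(let ((,logs '()))
       (flet ((,log-fn (msg) (push msg ,logs)))
         ;; arguments evaluate left to right, so BODY runs before the log list is read
         (values (progn ,@body) (nreverse ,logs))))))

(with-collected-logs (add-log)
  (add-log "fetching user")
  (add-log "rendering response")
  :ok)
;; => :OK, ("fetching user" "rendering response")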
---
This thread gets comparisons to Python, so let's enlarge the comparison spectrum: Lisp has a better REPL (enjoy IPython? You'll love a Lisp REPL!), better type inference, and it is a compiled language, making it a breeze to build an executable of your web app and deploy it to your server; it has a better object system (method combination, polymorphism, methods not bound to classes, …); and Lisp is stable, …
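To illustrate "methods not bound to classes" and method combination, a tiny CLOS sketch (the ACCOUNT class is invented for the example):

(defclass account ()
  ((balance :initarg :balance :accessor balance)))

(defgeneric withdraw (acct amount))

(defmethod withdraw ((acct account) amount)
  (decf (balance acct) amount))

;; Method combination: this :before method runs automatically before the primary one.
(defmethod withdraw :before ((acct account) amount)
  (when (> amount (balance acct))
    (error "Insufficient funds")))

(withdraw (make-instance 'account :balance 100) 30)   ; => 70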
When these discussions come up, I always remember a comment from a guy (sorry, can't remember his name) in the Scala Google Groups. People were arguing that the Scala compiler was too complicated and someone wanted to write a new language. This guy said he could whip up a Hindley-Milner type-inference language in a couple of weeks, and someone noted that the adoption wouldn't be nearly as high as with Scala.
The guy said he just wants a language where he and his team can be productive. This. 1000x this. Ranters gonna rant, haters gonna hate, but productive people will use whatever floats their boat. For me it's Clojure lately. I don't need to convince anyone that 'X' is better than 'Y'. If a new paradigm comes along (actors) that proves valuable, I hope that my language can assimilate it (quasar/pulsar). Otherwise I'm happy.
Besides, in today's startup economy, we're lucky if a company lasts months, nevermind about decades.
As a manager I would not want such "orchids", as we call them in German.
What if this "rock star" leaves the team?
What if we want to bring in more team members?
Who pays for the development of these dev tools? After all, we are not paid to write code but to solve problems - a fact devs like to forget.
Do they really pay for themselves in terms of savings, as suggested by the dev?
I don't claim that these questions were not raised and answered, but from a strategic point of view the coolest and shiniest toys and concepts may not make the most sense.
> The guy said he just wants a language where he and his team can be productive. This. 1000x this. Ranters gonna rant, haters gonna hate, but productive people will use whatever floats their boat. For me it's Clojure lately. I don't need to convince anyone that 'X' is better than 'Y'. If a new paradigm comes along (actors) that proves valuable, I hope that my language can assimilate it (quasar/pulsar). Otherwise I'm happy.
To "just" be productive you need a huge library/tool ecosystem that, realistically, only a handful of languages are ever going to have at any point in time. Particularly if you want your language to make use of the possibilities that GUIs offer. I love Scala and find its IDE support to be one of its great strengths, but even so it's noticeably behind what you get in Java or C#. And that's a top-20 language.
I wasn't talking about any particular language. Scala is an interesting example, you have some good software written in it (Kafka, Scalding, Finagle) even with not-so-good IDEs.
As for clojure, I have yet to find an environment as good as emacs+cider+nrepl.
> Scala is an interesting example, you have some good software written in it (Kafka, Scalding, Finagle) even with not-so-good IDEs.
My point is quite the opposite: its IDEs are a lot better than you'll find for many languages. On paper Haskell should be a better language than Scala, but you don't see anything comparable to your examples, and I think the tool situation has a lot to do with that. So popularity ends up being very important.
> Ranters gonna rant, haters gonna hate, but productive people will use whatever floats their boat.
I agree in general, but it's not just whatever floats their boat. Every real project has its yak-shaving aspects. Productive people pick languages (plus libraries and ecosystems) that will handle for them as much of the yak-shaving as possible, leaving them free to work on the problem they're actually trying to solve.
I used Lisp as my main programming language for about 6 years in the late 80s and early 90s - I remember receiving some code I was expected to integrate and initially not realizing that the code was Lisp due to the rather over-enthusiastic use of reader macros.
Edit: Actually, I seem to remember that this was how I learned about reader macros and I went on to over-enthusiastically (ab)use them myself....
I think the niche qualification is inappropriate here. Lisp spread ideas around. Most dynamic languages today are basically CL in different clothing (Lua, Python[1], JavaScript, PHP ...). Even C++ is looking a lot more lispy these days with auto and closures.
Methinks Lisp's value is exploratory: when you want to go crazy deep, you use it to mold every bit of the language as you want and try out paradigms/principles.
ML or Haskell are also used a lot there, but I think they require a bit more a-priori planning (separate grammar / semantics, typing, denotational mindset), whereas Lisp is more "hack it until you make it".
[1] And if you look at Dave Beazley's fantastic video talks about "abusing" Python's metalevel, and compare it to CLOS, you will find the difference hard to discern.
Besides, what's called Lisp in the Tiobe index (https://www.tiobe.com/tiobe-index/) is ranked 27, ahead of D, Clojure, Lua, Erlang, Rust, Julia…
And I find Lisp programs great to maintain: the language is stable, the ecosystem too. Python programs, on the contrary, are a chore to maintain, even after only 2 years.
I don't get the downvote. This is my experience too. You browse a source tree, unable to understand some of it. You google it and find nothing. Only to realize that the author defined their own little world. As usual, zero unit tests or tutorials. It makes Lisp a terrible experience for beginners.
I don't share this experience. I mean, I'm absolutely sure it can happen – in particular at shops where there's a high level of employee churn and they throw consultants at a problem and then kick them out again. Of course you'll get all sorts of weirdness then.
But with some continuity and a carefully crafted own little world, everybody on the team soon chooses voluntarily to speak in the terms of that world, because it is a world well suited to the problem at hand.
Yeah, you managed to link a comment with some code (although I agree it looks useless)... I expected a github link to some lib relying on undocumented macros.
> I don't know what's the worst. That you can't help it or that you won't ever recognize you have a problem.
Anyone can write bad code. Lisp certainly makes it easier, I won't deny it. And no, I don't have a problem, thank you.
It's probably because the OP makes unwarranted assumptions:
> This is why after 50 years it is still a niche language
where "This" is either the REPL or macros, which they do not specify.
It's just as easy for me to speculate that Lisp isn't popular because money talks:
- You have an idea for a piece of software and create a start-up.
- You hire competent Lisp programmers to write that software. Since they're rare, they're expensive.
- The start-up grows because the idea was good and the software works, and somewhere along the line someone sees that more money can be made by cutting expenses on development.
- They shop around and find some outsourcing party that claims to write Lisp cheaper.
- They outsource development, but the software goes down the drain. "Lisp" is now the problem, according to the new developers. But costs are down, growth is still continuing, and they got rid of the weird people who made fun of their Gucci tie.
- People cash out, and on advice of the new developers stuff gets rewritten in the popular language of the day.
- Stuff muddles on for some time, until everything comes crashing down. The cycle starts anew, except now it's clear for everyone involved: don't bother with the software, focus on growth, and in case of doubt:
Blame Lisp!
So yeah, that's my "Why isn't Lisp more popular?" hypothesis: modern software for start-ups is about enabling growth, not quality.
And without further substantiation, it's just as valid as blaming it on the "ability to self-modify."
> It makes Lisp a terrible experience for beginners.
Hidden in this statement is the assumption that all mainstream languages and all mainstream language programmers are low quality and they only follow bad practices.
Disregarding its arrogance, the statement is just false :)
That's the statement. If that implies to you that it's impossible to achieve quality and good practice in any but the aforementioned languages, feel free to do so.
Re: Only to realize that the author defined their own little world.
Lisp makes it easy to shape a sub-language to fit your own head well. The problem is, few others can figure out YOUR head. Communication conventions are not always pretty or efficient, but they work because they are conventions. If you place parsimony (compact abstractions) above inter-human communication, then all you get is isolated parsimony.
One can document sublanguages, too. The Common Lisp standard, for example, defines sublanguages like LOOP, FORMAT, the Pretty Printer, CLOS, etc.
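For example, both of these are fully specified in the standard and behave the same in every conforming implementation:

(loop for i from 1 to 5 collect (* i i))     ; => (1 4 9 16 25)
(format nil "~{~A~^, ~}" '(1 4 9 16 25))     ; => "1, 4, 9, 16, 25"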
The better embedded languages in Lisp often follow conventions: see for example CLOS and the various conventions it follows from naming to integration.
The actual problem is that defining sublanguages is not something most developers have learned. The result is a multitude of languages for similar tasks, each with its own infrastructure. For many programmers it is normal to wait several years for new language constructs, while they work around this by combining built-in constructs (extreme: creating lots of bloat - see Java) or by using preprocessors (up to whole transpilers - see Java and JavaScript as the targets other languages transpile to).
There is some complexity in Lisp providing features for embedding languages.
But there is also complexity in working around that in languages that are not very good at embedding. See for example Clojure, a language which provides functional programming idioms on top of Java - the integration is great - but many developers are not satisfied with the integration when it comes to debugging - see, for example, the complexity of stack traces in a layered language design.
Functions have a unique interface. It's always the same. Params in, return value out. Plus an imported function is something custom, by definition. You don't have to wonder.
Now you encounter custom syntax. At first it will not even be clear that it's custom, as you have no import path. Maybe it's a variant you didn't know about. Maybe it's part of this particular Lisp flavor; after all, there are so many.
You google it, but it's hard to google syntax, so you are not quite sure whether no result means it's custom, impossible to search for, or just an obscure feature of Lisp. After all, there are many of those too.
So you try to decipher the syntax. But there is no contract for it like there is for a function. No single entry point and output. It could be doing anything.
Now, without an import path, you are on a fun hunt for its possible definition with grep.
When you finally reach it, it will be 50+ lines of dark magic you now have to decipher, because syntax is hard and non-trivial code is what makes awesome features.
In some Lisp shops it works because you have three 10x programmers with years of Lisp experience working in the same office. In that particular configuration, Lisp is a catalyst for this great potential, with experience and communication compensating for the rest.
But the world is not solely revolving around this perfect combo.
I don't agree. All macros except for things like loop follow the same syntax as a regular function call. The mental overhead of those macros is the same as for function calls, with the small difference that you have to know they are syntax so that you don't end up passing them as function arguments.
I would say that the operator voodoo of Haskell is worse, and that is coming from someone that uses Haskell on a daily basis.
There is a certain irony to this. Framework-heavy JavaScript will be hard to rebuild even after 1-2 years (NPM churn). Vanilla JS? Decades. I think you can run a JS script from 1998 without any issues.
Modern Vanilla JS written in 2018 has the same backwards compatibility guarantees, it's extremely likely that you'll be able to run it in 2038.
Once you add Babel, React & co, all bets are off...
> Its ability to self-modify creates programs that are almost impossible for people other than the author to understand.
That's simply not true. I think the only reason that Lisp is unpopular is the syntax, which is unfortunate because it's the syntax which is the source of all its power. As for why the syntax itself is so unpopular, sometimes I wonder if irregular syntax really does help comprehension (i.e., that having {}*&^%$#@! helps as a kind of visual shorthand), but most of the time I think it's really just unfamiliarity.
The guys who wrote this piece have drunk the Lisp Kool-Aid.
To say that library support for bioinformatics is in its early stages is quite a misunderstanding. Lisp was one of the first languages, if not the first, to be used for everything, including bioinformatics. If it is not used anymore, it is because people have abandoned it.
People have abandoned it once its flaws became evident. The main flaw of Lisp is using linear data for everything. It is obsolete, it was designed for linear memory like tapes or punched cards, not random access to memory like we have today.
Languages like Clojure do not have this limitation, but they have others, like forcing you to use the Java virtual machine: good if optional, terrible if mandatory.
In current systems with huge amounts of RAM, when you can have 128 GB on a developer machine, having to use a system that shoehorns everything into lists is going backwards.
It also maximizes verbosity: since everything uses the same structure, pattern differentiation is hard for actual humans to read and understand. This was a serious flaw that even Lisp's creators quickly identified and tried to solve; remember, Lisp was considered a temporary solution while the good solution was (never) found.
Not to mention the absence of privileged instructions, with everybody on the team changing whatever they want, even basic instructions.
Don't get me wrong, I love Lisp, it is one of the more interesting languages in the world. I am actually working on porting some of the features it has (introspection) to other languages.
But from my point of view it is not that good for production for most projects. I have drunk the Kool-Aid myself before and have the scar tissue from that.
> The main flaw of Lisp is using linear data for everything.
Huh? Which Lisp are you talking about? That seems like a serious misconception! Certainly any Lisp after 1960 has great support for arrays, trees, graphs, hash tables, you name it.
> The main flaw of Lisp is using linear data for everything. It
This is just utter ignorance. A good example of not knowing that you don't know, and extrapolating from too little data.
The name "Lisp" comes from List Processing, but Lisp programming has not been about list processing since 50's. All the old Lisps like Common Lisp and Scheme had full set of data structures, including vectors and arrays.
Another problem is that Lisp lets you do anything. And so people do.
So every time you get on board a project you stub your toes on the DSL du jour, of course badly documented and badly tested. It's like learning a new language every time, except a beta version with no resources.
And yet another problem is people who talk about things they have no personal experience with, resulting in them parroting something they saw or heard and propagating blatant falsehoods. Which is what you and the person you replied to, are doing.
Wouldn't it be better to actually inform yourselves first, by learning Lisp, rather than regurgitating somebody else's opinions?