The problem with Wave was that it required a network effect to be of any use, and Google was very stingy with invites early on. Interest had died down by the time they started letting more people join.
You probably had a high-spec PC, and were using it with only a couple of friends at a time? Wave was cool, but the performance issues from being built on XMPP killed it.
> I think the reason Lisp self-asphyxiated is because people don't get it.
As a programmer who has spent most of my time in Python/Java/C and has dabbled in a few Lisp dialects, but isn't sure if I "get it": how can I know whether I "get it" or not?
It clicked for me only after getting significantly far into SICP back in the day, doing all the exercises and creating things inspired by the book. Before that I thought Lisps were just some weird Emacs thing. While this was a long time ago for me, I think it will still work for you to feel that 'click'.
For me it clicked after I read Practical Common Lisp, ANSI Common Lisp, On Lisp, and Let Over Lambda, and after I did a couple of projects in Common Lisp.
One day I looked at various REST API clients generated by JHipster. The clients generated for all languages had a huge amount of boilerplate.
"Well, that's a fair price for being able to generate it from a DSL," I thought to myself.
Then I looked at the code generated for Clojure client and it was many, many times smaller and looked as if somebody wrote it by hand. It was nice and neat.
Basically, the API calls were macros: rather than generating a stack of code to be checked into a repository, the macros generated everything at runtime. The DSL was translated 1:1 into macro calls, and all the complexity of the generated code was completely hidden.
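Go has no macros, but the shape of the trick can be sketched without them: the DSL entries become plain data plus one shared implementation, instead of a stack of generated code per endpoint. This is a rough analogue only, not the Clojure client's actual mechanism, and all names here are made up for illustration:

```go
package main

import "fmt"

// Endpoint describes one API call as data, roughly the way each
// DSL entry became a single macro call in the Clojure client.
type Endpoint struct {
	Method, Path string
}

// Call is the one shared implementation that every endpoint
// routes through, so no per-endpoint boilerplate is generated.
func Call(e Endpoint, args map[string]string) string {
	return fmt.Sprintf("%s %s with %v", e.Method, e.Path, args)
}

// One line per endpoint, mirroring the 1:1 DSL translation.
var GetUser = Endpoint{"GET", "/users/{id}"}

func main() {
	fmt.Println(Call(GetUser, map[string]string{"id": "42"}))
}
```

The difference with real macros is that Lisp can also generate new syntax and compile-time code from the same one-liners, which is what made the Clojure client both small and hand-written-looking.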
This caused me to spend considerable time thinking about the nature of the difference between Lisp and all those other languages, and at some point it just clicked.
Could you expand on this? This is something I've just started reading about. I'd be interested in good resources to use to get started. At the moment I've just started reading TAPL.
Check out https://github.com/tomprimozic/type-systems there's been a few HN threads about it as well. I can also answer any specific questions you have, or if you want further resources I can try and find them... (there's a good online book I have in mind, but I've no idea how to find it right now!)
Some great stuff in this repo, thanks. I'm particularly interested in resources that build a type system up step by step, starting very simple and working towards Hindley-Milner.
Hm... I'm not sure it works that way. Type systems are quite fragile beasts: if you change one thing you can easily break the rest. Especially when it comes to type inference! That said, I'd say HM is quite simple, especially if you don't consider polymorphism (i.e. if you require that every function parameter has a concrete type like int or bool).
In fact, that might be a good starting point: first implement something with just those two types, where every variable and function parameter has a type annotation. From there, you could (1) add more complex types, like function types, tuples or lists, (2) implement type propagation where each variable takes the type of the value it's assigned (like auto in modern C++), and then (3) go full HM type inference.
No, I've just skimmed over some of its chapters to clarify some concepts. I've mostly learned by reading code and then trying to implement my own type systems. Algorithm W is really quite simple and basic.
Another good resource could be Programming Language Zoo http://plzoo.andrej.com/ that covers different evaluation techniques as well.
In general, I've been "involved" in PL design for quite some time, so I've no idea where I've gained all the knowledge I have... but in recent years, there have been quite a few modern resources, even a few on HN IIRC!
Because it lets your math code look more like math and less like code.
v := Vector{}
v2 := Vector{}
To add the two vectors, you currently have to write something like
result := v.Add(v2)
rather than the nicer
result := v + v2
At a small scale, it doesn't seem like a big deal. In a large and complicated scientific program, it can make the code a lot harder to read.
Note: I think it is good that Go does not have operator overloading, though I think it's a shame that means it's not as good for scientific & math programming.
I find being told a story is a great way of learning a new code base. Often the architecture only makes sense given some historical context, and being told that story helps me understand why certain bits are where they are.
https://youtu.be/h9kPmX32j9A?feature=shared