Hacker News | dmbarbour's comments

Today, most operating systems accept programs expressed in an unsafe and opaque programming language (such as x86 machine code) so they're stuck using low-level, hardware-supported features to recover a modicum of safety.

However, this isn't a necessary design. An operating system could instead accept programs in a 'safe' higher-level language. The OS would include a trusted compiler. Then safety can be achieved by construction instead of by hardware support.

Compiled binaries could be cached so we don't need to recompile every time. It is also feasible to configure the OS to use a trusted remote cache, i.e. fetch signed binaries specific to hardware architecture based on a secure hash of the program and a set of parameters.
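As a toy sketch of that caching scheme (everything here is illustrative; `hashSrc` is a stand-in for a secure hash like SHA-256, and `compile` stands in for a real compiler):

```haskell
-- Toy sketch of a compiled-binary cache keyed by a hash of the
-- program plus target parameters. All names are hypothetical.
type Hash  = Int
type Arch  = String
type Cache = [((Hash, Arch), String)]

hashSrc :: String -> Hash
hashSrc = sum . map fromEnum      -- placeholder, NOT a secure hash

compile :: Arch -> String -> String
compile arch src = arch ++ ":" ++ src   -- stand-in for real compilation

-- Fetch a cached binary for (program, arch), compiling on a miss.
lookupOrCompile :: Arch -> String -> Cache -> (String, Cache)
lookupOrCompile arch src cache =
  let key = (hashSrc src, arch)
  in case lookup key cache of
       Just bin -> (bin, cache)                  -- cache hit
       Nothing  -> let bin = compile arch src    -- miss: compile, store
                   in (bin, (key, bin) : cache)

main :: IO ()
main = do
  let (b1, c1) = lookupOrCompile "x86_64" "print 42" []
      (b2, _)  = lookupOrCompile "x86_64" "print 42" c1
  print b1            -- first call compiles
  print (b1 == b2)    -- second call is a cache hit
```

A trusted remote cache would be the same lookup, except the key is sent over the network and the returned binary's signature is verified.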

Existing programming languages would simply treat the OS's chosen language as yet another compilation target.

This approach has a lot of potential for performance because a lot of indirection can be avoided, and even inter-process communication can potentially be reduced to objects talking. However, I've only seen a few aborted attempts. I think the biggest problem is the huge heaping amount of work required just to get back to where we are today.


I think most FP languages don't decouple AST from evaluation context. E.g. I cannot perform abstract interpretation on a function's AST, nor rewrite a function's AST for incremental computation, nor perform explicit partial evaluation during function composition by composing the ASTs. I can only access an opaque `Input -> Output`.

Also, FP algorithms developed for one evaluation context cannot easily be ported to another. They are implicitly entangled with assumptions. For example, I cannot evaluate a lazy algorithm in an eager evaluation context without paying a huge price. Adding back-pressure for list processing where a program was not designed for it would easily result in deadlock. Adding exceptions without adding unwind (or bracket, try/finally, etc.) to the existing program expressions will easily result in buggy code.
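As a hedged illustration of the difference (all names here are made up): with a reified AST one can both evaluate a term and rewrite it, e.g. constant-fold it, which an opaque `Input -> Output` cannot support:

```haskell
-- A hypothetical minimal expression AST.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

-- One evaluation context: plain evaluation.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- A second, independent context: constant folding (a simple form
-- of partial evaluation), possible only because the AST is reified.
fold :: Expr -> Expr
fold e = case e of
  Add a b -> simp Add (+) (fold a) (fold b)
  Mul a b -> simp Mul (*) (fold a) (fold b)
  _       -> e
 where
  simp _ op (Lit x) (Lit y) = Lit (x `op` y)
  simp c _  a       b       = c a b

main :: IO ()
main = do
  let e = Add (Lit 1) (Mul (Lit 2) (Lit 3))
  print (eval e)  -- 7
  print (fold e)  -- Lit 7
```

Given only `eval e` as an opaque function, the `fold` interpretation is unrecoverable.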

I think your assertion might be true for some specific integrations of FP nodes into FBP. Is this what you mean?


Today's functional languages (so basically Haskell and Scala) don't do that at the "host PL" level, but you can model it in-band, and modeling embedded PLs is what most of the cool FP research of the last 5 years is all about. Free monads, tagless-final interpreters, etc. You make a DSL with abstract capabilities and then code your application in that. Any reified behaviors end up encoded into the type. I hear Facebook is using algebraic effects to model and track data access.


FP is relatively convenient for metaprogramming, yes. GADTs and tagless-final encodings (or as I think of them, Church-encoded ASTs) are especially useful for this.
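A minimal hedged sketch of the tagless-final style mentioned above (class and instance names are illustrative): the same abstract term interpreted in two different contexts:

```haskell
-- Tagless-final ("Church-encoded AST") DSL: operations are
-- methods of a class, terms are polymorphic over the interpreter.
class ExprSym r where
  lit :: Int -> r
  add :: r -> r -> r

-- One abstract term...
term :: ExprSym r => r
term = add (lit 1) (add (lit 2) (lit 3))

-- ...interpreted as evaluation...
newtype Eval = Eval { runEval :: Int }
instance ExprSym Eval where
  lit = Eval
  add (Eval a) (Eval b) = Eval (a + b)

-- ...and, independently, as pretty-printing.
newtype Pretty = Pretty { runPretty :: String }
instance ExprSym Pretty where
  lit = Pretty . show
  add (Pretty a) (Pretty b) = Pretty ("(" ++ a ++ " + " ++ b ++ ")")

main :: IO ()
main = do
  print (runEval term)      -- 6
  putStrLn (runPretty term) -- (1 + (2 + 3))
```

Adding a new interpreter is just a new instance; the terms themselves never change.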

But it wouldn't make a big difference if the host language were heavily imperative, so long as it favors immutable data structures. The FP aspect of this exploration of DSLs is much more social and cultural than technical.

I think that if we really want to decouple AST from evaluation context, the main tool we'd need is to deconflate module 'import' into separate steps to 'load' a module's AST and 'integrate' definitions, allowing for intermediate processing. This requires some host-language changes, elimination of module-global state, etc.


It would be easy to do. That 3-bit word just needs to start at 1 (since it encodes fewer than 8 options). Then the fixed 1-bit can instead encode whether another 7-bit segment follows.
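For illustration, a hedged Haskell sketch of the continuation-bit idea in its common byte-aligned form (LEB128-style, 7 payload bits per byte) rather than the exact 3-bit scheme above:

```haskell
import Data.Bits ((.&.), (.|.), shiftL, shiftR)
import Data.Word (Word8)

-- Each byte carries 7 payload bits; the fixed high bit says
-- whether another 7-bit segment follows.
encode :: Int -> [Word8]
encode n
  | n < 0x80  = [fromIntegral n]                        -- last segment
  | otherwise = (fromIntegral (n .&. 0x7f) .|. 0x80)    -- more to come
              : encode (n `shiftR` 7)

-- Least-significant segment first, so fold from the right.
decode :: [Word8] -> Int
decode = foldr (\b acc -> (acc `shiftL` 7) .|. fromIntegral (b .&. 0x7f)) 0

main :: IO ()
main = do
  print (encode 300)           -- [172,2]
  print (decode (encode 300))  -- 300
```

The same trick works at any segment width; 7+1 bits per byte is just the conventional choice.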


A hashmap doesn't keep sorted keys. But 'O(log(n))' is still incorrect. The fastest you can look up information physically represented in a 3D universe is 'O(cube-root(N))'. That applies to random-access memory and hashmaps, too. Cf. the Myth of RAM [1].

[1] http://www.ilikebigbits.com/blog/2014/4/21/the-myth-of-ram-p...


> And how often does that occur in practice for most of the programs people write?

Very frequently. The simple theorems that equational reasoning supports are very convenient for refactoring and optimizing code. Understanding components in isolation is useful for testing, isolating bugs, and refactoring of subprograms independently from their contexts. Since I do testing and refactoring frequently, I make good use of what FP offers.

Sure, this isn't equivalent to full 'correctness' proofs. The lightweight equational reasoning FP offers for expressions and behavior doesn't prove you have 'correct' expressions or behavior. But theorems don't need to establish full correctness to be useful.

> that is true of statically typed OO also

Not really. Statically typed OO doesn't prove nearly as much about the behavior. This is mostly due to the potential for aliasing of stateful sub-components. Capability-secure OO, like the E language, Newspeak, or Joe-E, can help a lot, but the potential for hidden communication via aliased components still hinders a lot of useful equational reasoning.

> only way to get encapsulated state in Haskell is World -> World

That's simply untrue. Encapsulated state in Haskell can be modeled by a variety of types. One is `Machine i o = i → (o, Machine i o)`. Such machines can be composed, each component machine having its own internal, encapsulated state.
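A minimal sketch of that `Machine` type (sometimes called a Mealy machine); `counter` and `runList` are illustrative names:

```haskell
-- Each step consumes an input and returns an output plus the
-- machine's next state.
newtype Machine i o = Machine { step :: i -> (o, Machine i o) }

-- A running-sum machine: its Int state lives only in the closure,
-- encapsulated from everything outside.
counter :: Int -> Machine Int Int
counter n = Machine $ \i -> let n' = n + i in (n', counter n')

-- Drive a machine over a list of inputs, collecting outputs.
runList :: Machine i o -> [i] -> [o]
runList _ []       = []
runList m (x : xs) = let (o, m') = step m x in o : runList m' xs

main :: IO ()
main = print (runList (counter 0) [1, 2, 3])  -- [1,3,6]
```

Composing two such machines gives a bigger machine whose components each keep their own private state, with no `World -> World` in sight.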

> OOP also is not very new, being formed from the culmination have patterns used in the 70s, and is about the same vintage as FP.

I'd say pure FP is a lot younger. The idioms for purely functional expression of IO simply didn't exist in the 70s. E.g. even in the mid 80s, you had languages like Miranda with an impure 'readFile :: FileName → Bytes'. The Clean programming language using uniqueness types for IO was a new thing in the late 80s. The modern use of monads harkens back to Wadler in the early 90s. I tend to think of `pure FP` as more a mid-90s paradigm. It really is a different paradigm - a different way of thinking about and constructing programs - than the ad-hoc impure FP of Lisp and OCaml.


> Very frequently. The simple theorems that equational reasoning supports are very convenient for refactoring and optimizing code.

I was generalizing across all programmers. Of course individual cases may vary, and it really depends on the code that you write!

> Not really. Statically typed OO doesn't prove nearly as much about the behavior. This is mostly due to potential for aliasing of stateful sub-components.

Haskell simply avoids aliasing with value semantics, and even then you can add it back and get the same problems.

> That's simply untrue. Encapsulated state in Haskell can be modeled by a variety of types. One is `Machine i o = i → (o, Machine i o)`. Such machines can be composed, each component machine having its own internal, encapsulated state.

I'm not disagreeing. My point is that you have to bury the state in something else, which is then exposed via a signature (well, in Haskell which supports this, of course).

> I'd say pure FP is a lot younger.

We could debate a lot on what pure FP is, what pure OO is, and who climbed the ladder faster. But the fact that paradigms developed around the same time is probably due to the programming field maturing in that time frame.


> generalizing across all programmers

I think the relevant question is whether functional programmers, not all programmers, regularly leverage the lightweight equational reasoning, refactoring, and context-independent behavior that is available with purely functional programming. I believe most do.

> avoids aliasing with value semantics, and even then you can add it back and get the same problems (...) you have to bury the state in something else, which is then exposed via a signature

In a programming model without side-effects, it is necessarily the case that all 'effects' are modeled in the call-return behavior. The type signature, too, for a strongly typed language. And I agree that, upon modeling state or aliasing, we get to deal with not just the feature but all of its associated problems.

OTOH, the problems of state and aliasing don't implicitly leak into every subprogram. The precise control over effects can be very convenient.

You couch that control in pessimistic terms like "pollute all function signatures it buried through". But in practice I've never had difficulty anticipating where I'll need to model effectful behavior, nor with extending the set of effects as needed. Oleg's recent development of freer monads with more extensible effects [1] is compelling in its potential to remove remaining drudge-work.

[1] http://okmij.org/ftp/Haskell/extensible/more.pdf

> We could debate a lot on what pure FP is, what pure OO is (...) paradigms developed around the same time

Pure FP (programming with mathematical functions, no side-effects) and impure FP (i.e. first-class procedural programming) are essentially different paradigms. They require different patterns of thinking, reasoning about, and constructing programs. Despite the ongoing battle over the "functional programming" branding, it isn't wise to conflate the two paradigms. It was impure FP that developed around the same time as OOP. Pure FP is about twenty years younger, more or less.

(The mention of 'pure OO' seems an irrelevant distraction. Do you believe pure OO vs. impure OO, however you distinguish them, require significantly different design and development patterns and are thus distinct paradigms?)


> I think the relevant question is whether functional programmers, not all programmers, regularly leverage the lightweight equational reasoning, refactoring, and context-independent behavior that is available with purely functional programming. I believe most do.

This raises the question of who is a functional programmer, and how typical they are. I know a few FP programmers who are able to stay in the abstract world for a long time, thinking symbolically, equationally, and who don't need petty things like concrete examples (most of the members of WGFP, for example). Then there are the rest of us!

> In a programming model without side-effects, it is necessarily the case that all 'effects' are modeled in the call-return behavior. The type signature, too, for a strongly typed language.

You can always default to World -> World in a pure language, and ya...technically you don't have side effects anymore, but for all practical purposes you do! For this to be useful at all, you have to keep your effects fine grained, and for functions that call other functions (like a general ForEach), effects have to be parametric as well, or you wind up polluting everything (or worse, being unable to express something).

Pure FP culminates from a bunch of experience in the 70s (I'm not talking about Lisp), which happens to be where OO came from as well. Pure OO doesn't really make sense...OO can't be pure but rather organic.


Methinks you misunderstand the programming experience of FP. A nice property of 'equational reasoning' is that, as a universal property of code, I don't have to think about it. It just becomes a law of code - conservation of meaning. Abstraction, splicing, refactoring, and reuse happens easily and fluidly because I don't need to think about whether those actions are safe or correct. It isn't about hand-wavy abstract worlds at all, but rather concrete manipulation of source code. Concrete examples are also very common and useful with pure FP, e.g. use of REPLs is common.

A pure `World→World` effects model is NOT a practical equivalent to introducing side-effects. Unlike with side-effects, it's trivial to constrain a subprogram from access to the whole World, e.g. use divide and conquer tactics to hide parts of the world, or limit a subprogram to a constrained subset of monadic commands that an interpreter uses to access and update the world in limited ways.
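To make the point concrete, a hedged sketch (all names hypothetical) of restricting a subprogram to a small command set that an interpreter runs, instead of handing it the whole World:

```haskell
-- A hypothetical World with parts the subprogram must never touch.
data World = World { counter :: Int, secret :: String }

-- The ONLY things a subprogram may do: it emits commands, not effects.
data Cmd = Incr | Get

-- The interpreter holds total authority over how commands touch
-- the World; 'secret' is simply unreachable through Cmd.
runCmds :: [Cmd] -> World -> ([Int], World)
runCmds []       w = ([], w)
runCmds (c : cs) w = case c of
  Incr -> runCmds cs w { counter = counter w + 1 }
  Get  -> let (os, w') = runCmds cs w in (counter w : os, w')

main :: IO ()
main = do
  let w0         = World 0 "hidden"
      (outs, w1) = runCmds [Incr, Get, Incr, Get] w0
  print outs          -- [1,2]
  print (counter w1)  -- 2
```

The same pattern scales up to monadic command languages; the subprogram's type then documents exactly which slice of the World it can observe or update.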

No experience from the 70s left academics or industry prepared to support purely functional IO models. It should go without saying that you can't have a complete programming paradigm without an effective story for IO. Miranda, a purely functional language of the mid 80s, made a valiant attempt at pure IO, and failed. Even early Haskell, up through the early 90s, lacked an effective, coherent story for IO. I find your repeated assertions that pure FP was somehow a child of the 70s very dubious.


REPLs don't help out with refactoring, splicing, or even abstraction, at least with the current tool chain we have.

If you add a World->World effect to your function, it is equivalent to saying it has unconstrained side effects, is it not? It is the practical equivalent, even if its implementation can pair off the world as needed.

Hindley-Milner, Landin, ISWIM, APL, list comprehensions, etc...are all children of the 70s. The IO problem wasn't solved until Wadler in the early 90s, but then again we didn't get type parameters in OO languages before then either.


Wat? It's equational reasoning that lets us manipulate concrete code easily. REPLs are just proof that functional programmers like working with concrete examples, too.

A World->World function is unconstrained in its effects upon the given world, modulo substructural types. But it is not a "side effect". Among other important differences, your program can't describe an infinite loop with observable effects on that World.

I don't consider type safety, list comprehensions, etc. to be essential for pure FP. Immutable values, first class functions, and explicit effects models on the call-return path (instead of side effects) are the critical ingredients. Comparing the essential IO problem to "type parameters" (which aren't even a problem for a rich ecosystem of dynamically typed OO) seems disingenuous.


Side-effect-free functional programming was taught using Lisp in the 60s.

See "Pure Lisp", a style and subset in which no side effects are used.


I wasn't able to find much on this via search. The papers I found date to 96. Do you know how 60s Pure Lisp handled IO?


I don't believe FP has any strong assumption that all problems reduce to the same kind of algorithm. Rather, different kinds of algorithm are modeled typefully - e.g. with various monads or abstract data types.

As you mention, pure FP does require learning different idioms. OTOH, a lot of those idioms can be applied effectively to imperative programming, so learning isn't necessarily a wasted effort even if you spend most of your time wrist deep in C code.


I see your point. I just think there are better options if you're going to approach FP. Picking a language that's flexible about its enforcement is one approach. I think I've learned more about FP from Common Lisp and OCaml than I ever did from Haskell. It's not that Haskell is a bad language; it's just not one you can pick up in a day.


I see a lot of potential for a new class of musical instruments based on waving your hands.


More paradigms:

Maude, Pure, OBJ3 - term rewriting

Inform 7 - object-relational

Kaleidoscope - constraint imperative

Bloom, Dedalus - temporal logic; monotonic data

Excel, spreadsheets - reactive dataflow

Conway's Game of Life - cellular automata

Oz/Mozart - distributed unification

I would suggest Dedalus (datalog + time) as a cleaner example than Prolog of how to lift logic programming into general purpose computation.

