DixieDev's comments (Hacker News)

This line of thought works for storage in isolation, but does not hold up if write speed is a concern.


Speed can always be improved. If a method is too slow, run multiple machines in parallel. Longevity is different, as it cannot scale: a million CD burners together are very fast, but the CDs won't last any longer. So the storage method is the more profound tech problem.


So long as (fast/optimal) real-time access to new data is not a concern, you can introduce compaction to solve both problems.


> (fast/optimal) real-time access to new data

https://en.wikipedia.org/wiki/Optimal_binary_search_tree#Dyn...


As a line of thought, it totally does; you just extend the workload description to include writes. Where this gets problematic is that the ideal structure for transactional writes is nearly pessimal from a read standpoint, which is why we seem to end up doubling the write overhead: once to remember and once to optimize. Or we take a highly write-centric approach like an LSM tree.

I'd love to be clued in on more interesting architectures that either attempt to optimize both or provide a more continuous tuning knob between them
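The write-twice pattern can be sketched in miniature. This is a toy illustration, not any real engine's implementation: new writes land in an unsorted in-memory table (cheap to write), and a compaction step later rewrites everything into a sorted run (cheap to read).

```python
import bisect

class TinyLSM:
    """Toy LSM-style store: fast appends to a memtable, periodic
    compaction into a sorted run that supports binary-search reads.
    Illustrative only; real engines use on-disk SSTables, WALs, etc."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}       # recent writes, unsorted (write-optimized)
        self.sorted_run = []     # list of (key, value), sorted (read-optimized)
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value           # first write: remember
        if len(self.memtable) >= self.memtable_limit:
            self.compact()                   # second write: optimize

    def compact(self):
        merged = dict(self.sorted_run)
        merged.update(self.memtable)         # newer writes win
        self.sorted_run = sorted(merged.items())
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        i = bisect.bisect_left(self.sorted_run, (key,))
        if i < len(self.sorted_run) and self.sorted_run[i][0] == key:
            return self.sorted_run[i][1]
        return None
```

The "continuous tuning knob" in real systems is roughly the memtable size and compaction frequency: a bigger memtable favors write throughput, more aggressive compaction favors reads.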


Where I work we tend to write RFCs for fundamental design decisions. Deciding what counts as a "fundamental design decision" is sometimes self-moderated in the moment, but we also account for it when making long-term plans. For example, when initially creating epics in Jira we might find one hard to flesh out because we don't yet know how we're going to approach it, so we just start it off with a task to write an RFC.

These can be written either for just our team or for the eyes of all other software teams. In the latter case we put these forward as RFCs for discussion in a fortnightly meeting, which is announced well in advance so people can read them, leave comments beforehand, and only need to attend the meeting if there's an RFC of interest to them up for discussion.

This has gone pretty well for us! It can feel like a pain to write some of these, and at times I think we overuse them somewhat, but I much prefer our approach to any other place I've worked where we didn't have any sort of collaborative design process in place at all.


I view this process like this: code review is a communication tool. It lets you discuss concrete decisions, as opposed to hand-waving and explaining in the conceptual space, which of course has its place but is limited.

But writing all the working code just to discuss some APIs is too much, and it will require extra work to change if problems surface in review.

So a design document is something in the middle: it should draw the picture of the planned change as clearly as possible, at a level that can be communicated to stakeholders.

Other possible middle grounds include PRs that don't pass all tests or that don't even build at all. You just have to choose the most appropriate sequence of communication tools to reach agreement and get the team on the same page about all the decisions and what the final picture looks like.


I would like to see improvements in the speed of feedback - particularly from language servers - but the value of those 'basic' guarantees is more than worth the current cost. Unexpected side effects are responsible for almost every trip I've taken with a debugger in any large Java or C++ project I've ever worked on.


I can remember about 20 years ago a colleague getting quite frustrated that a bug he had been looking at for quite a long time came down to someone doing something bizarre in an overloaded assignment operator in C++.


I've seen methods with names like "get_value()" have extensive side effects.

No type system can fix bad programming.


Of course, I think we have all seen horrors like that; what I remember was his completely exasperated response, not the technical details of the bug.


Complexity is mostly exponentially worse in the unknowns, and you can only graph what you already know.

The point in the article is that when we read code we need another visualization to change our mental model. I can scan code and find most bugs fast, but when you are stuck, a complexity measure by row/column sure would be handy for finding overloaded assignments.


Your comment is far too reasonable. Go directly to jail. Do not pass ⅄OR.


Nushell is quite nice. Tables are nice to work with and look at, and the cross-platform support is top notch. It feels like what Powershell would have been, had it been designed by people who have actually used a command-line before. The main issues I have are bugs. Most recently, I find that it can't properly handle files with square brackets in the name, which isn't really all that uncommon.

I wouldn't recommend it for day-to-day, production usage just yet, but definitely worth keeping an eye on if you're not a huge fan of the typical stringly-typed shells.


One advantage of PowerShell, though, is that you can put something into the pipeline from anywhere in your code, even within imperative code like the middle of a loop, just by mentioning a bare value at nearly any spot. Traditional shells are the same way (though they only support byte-stream output): you can send stuff to stdout/stderr at any point.

But in nu, it's more like you're just dealing with collections as in most programming languages. If your collection isn't amenable to being generated by a simple expression like a map or fold, then you have to create a collection, prep it to be what you want, and then return it.

In that sense it's really different from both Powershell and traditional shells, and more like just using a traditional programming language. So in Nu I miss being able to just "yield" a value from anywhere without staging into a temporary collection first.


> if your collection isn't amenable to being generated by a simple expression like a map or fold, then you have to create a collection, prep the collection to be what you want it to be, then you return it.

This release added the ability to "yield" values in generator fashion. It's called `unfold`, but will be renamed to `generate` in the next release: https://www.nushell.sh/commands/docs/unfold.html.
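For readers more familiar with general-purpose languages: `unfold` is conceptually close to a stateful generator. A rough Python analogy (the `unfold` helper and the Fibonacci step function here are illustrative, not Nushell's actual implementation):

```python
def unfold(seed, step):
    """Repeatedly apply step to a state; each call yields an output
    and a new state, until step returns None to stop."""
    state = seed
    while True:
        result = step(state)
        if result is None:
            return
        out, state = result
        yield out

# Fibonacci numbers as an unfold over the state (current, next):
fib = unfold((0, 1), lambda s: (s[0], (s[1], s[0] + s[1])))
print([next(fib) for _ in range(5)])  # [0, 1, 1, 2, 3]
```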


Great news!


> It feels like what Powershell would have been, had it been designed by people who have actually used a command-line before.

It is designed by such people. Not sure why PowerShell's tables and cross-platform support are not top notch ¯\_(ツ)_/¯

The effort is laudable, but let's be realistic: Nushell is 10+ years behind PowerShell.


Maybe in some areas, but in terms of day to day usage, I prefer nushell


> if you're not a huge fan of the typical stringly-typed shells.

What aspect of the other shells qualifies them as 'stringly-typed'? Shells seem quite happy piping raw binary data, and there are commands that can deal with binary streams (like wc). The shell also doesn't seem to do anything with the data unless specifically asked to do so (eval, backticks, etc). (genuinely ignorant and curious).


"Stringly typed" is usually used as a derogatory term for the alternative to strong typing, which often looks like manipulating raw strings when higher level code would have either better error handling or less boilerplate.

In this sense, dealing with raw bytes is worse.

There is no reason we can't use CLIs that provide higher level operations, but there's just not enough standardization.


I think they are referring to the fact that variables are always strings. You can write stuff like `i=0; i=$((i + 1))`, which looks like you are dealing with integers, but in reality the shell only deals with strings.


That makes much more sense. I was wondering how shells are classified under type system theory.


You know those status bars in dwm and similar window managers? Those are usually generated from a pipeline outputting a line every few seconds or so.

To make it display something as trivial as the frequency of your CPU, the network speed, etc, you have to randomly parse some ad-hoc, program-dependent format that may be locale-dependent! If these could speak a proper language, one could just take the value of this field/object/element and be done with it.


Where have you seen a service describing something posted less than 12 hours ago as being "posted last week" rather than "posted yesterday"/"posted today"?


The point is that both are technically correct. At 12:00 am on a January 1 that happens to fall on a Monday, something that occurred 1 second ago can be considered to have taken place last week (depending on when you consider weeks to start/end), last month, last year, yesterday, etc.


Sure, they’re both technically correct, but that wasn’t the question. The question was

> Where have you seen a service describing something posted less than 12 hours ago as being "posted last week" rather than "posted yesterday"/"posted today"?

Every single time display library I have seen will do as the poster explained, and show more granular information the closer it is (generally starting at “X seconds ago”). Your technicality is certainly true, but a strawman.


Actually, it's that question that is the straw man. jstanley's point is that to the reader "posted last week" could mean almost no time at all ago. It's not about the display library choosing to label recent things as this. It's about humans reading such a label and not receiving the intended information, because humans don't cut off what they understand "last week" to mean at some arbitrary 12-hours-ago mark.


Except they do get the intended information. The label doesn’t exist in a vacuum; the user would have seen other examples showing the specificity at intervals other than “last week” which sensitizes them to the cutoff points.

This is literally a non-problem. If a library behaved the way the parent commenter makes it sound, then, sure, they would have a point. But they don't. Something that occurred a second ago would say "a second ago". Something that occurred 5 minutes and 43 seconds ago would say "6 minutes ago", etc. There is no library in the world that takes a timestamp from a second ago and outputs "a week ago", and pretending there is one is, literally, a strawman.
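The coarse-to-fine cutover behavior being described looks roughly like this sketch (the exact thresholds and labels are illustrative assumptions, not any particular library's rules):

```python
from datetime import timedelta

def relative_label(delta: timedelta) -> str:
    """Map an elapsed time to a human label, getting less granular
    as the event recedes. Thresholds are illustrative."""
    seconds = delta.total_seconds()
    if seconds < 60:
        return f"{int(seconds)} seconds ago"
    if seconds < 3600:
        return f"{round(seconds / 60)} minutes ago"
    if seconds < 86400:
        return f"{round(seconds / 3600)} hours ago"
    if seconds < 7 * 86400:
        return f"{round(seconds / 86400)} days ago"
    return "last week or earlier"

print(relative_label(timedelta(seconds=343)))  # 6 minutes ago
```

Under this scheme a one-second-old post can never be labeled "last week"; the coarse labels are only reachable once the finer ones no longer apply.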


> the user would have seen other examples

Yeah, I've seen enough examples to know that no 2 websites implement the same logic so I shouldn't try to second-guess anything more than what the text literally says.


If the user needs to know the details of your "time display library" to disambiguate "last week" then you've already lost. It's bad UX. Stop doing it.


But… users don’t. Literally no library out there does what the parent commenter claims they could do. They all use sensible cutovers for relative dates.


The point is that if a user sees "last week" it's ambiguous. They DON'T KNOW that your libraries don't behave that way. All they see is "last week". Are you really not getting this?


I got quite annoyed with Neovim config at some point and tried out Kakoune, and ended up contributing some window splitting code to the main repo for Sway. I liked it quite a lot, but it's not built with Windows in mind so I ended up crawling back to Neovim. I'd be interested to hear of any Kakoune-like editors with better cross-platform support/design.


I feel genuinely insane trying to parse this site. It's a miracle that there's no mention of Web3.


Oh I just assumed this was all on the blockchain.


You guys are a little behind. Web3 is dropping the Metaverse and moving on to AI now.


I just assumed the blockchain was created by AI.


I'm mostly indifferent to it because it doesn't really harm readability, and it's not hard to know when to use it, but I can see why it might be more strongly disliked. You can't entirely rely on it to know a dereference is happening, because references (e.g. `int&`) aren't subject to the `->` requirement. It's also annoying if you find that you can refactor `func(T*)` to `func(T&)` and now have to replace all `->`s with `.`s.


> You can't entirely rely on it to know a dereference is happening because references (e.g. `int&`) aren't subject to the `->` requirement.

That's true, but I think the bigger motivation is avoiding ambiguity rather than seeing when an indirection is happening. (You also can't see whether a method is virtual, i.e. indirecting via the vtable, from the call site.) In C++ references don't have any standalone methods or operators, so there's no ambiguity in using methods via a reference with just a dot.


> You can't entirely rely on it to know a dereference is happening because references (e.g. `int&`) aren't subject to the `->` requirement.

To the extent that this is a problem, it's a problem with references as a language feature in general, not with the arrow syntax.


Is writing a simple CSS file and some HTML code really too much effort for a personal blog? Clicking around this site the only non-trivial things are the RSS feed and the paginated scrolling (which seems wholly unnecessary) on the homepage.

