Hacker News | _a1_'s comments

I appreciate the article, but it would be really nice if the author could add a timestamp to his blog posts. Without timestamps, it's impossible to know if any issue described in the article body still exists.

I didn't read it, because it might present outdated knowledge.


I read it. Didn’t find any outdated information in it.


Please check the reply by dig1; it does contain some misinformation. It even incorrectly refers to the Heartbleed problem.


dig1 is wrong. He uses the age-old C defence of "it's not a problem with the language, it's just bad programmers programming badly". Apparently buffer reuse isn't a problem because "sane" libraries don't do it. Well, I'll believe it when we stop seeing security issues in C code bases.


The fact that my perfectly valid comment was downvoted like this shows that HN has a pretty dysfunctional community. I think that is my last comment here ;)


> Sure, there's a whole slew of crap being grafted onto it like systemd, Wayland, Chromium, etc...

But... you don't have to use any of this

(in fact, I can't even use Wayland at all, even if I wanted to [because of Nvidia])


That's almost word for word the next sentence in my comment.


People who train themselves to reliably induce lucid dreams call themselves oneironauts (from the Greek oneiros, "dream").

I was actively interested in this topic when I was younger and had plenty of time to sleep and practice, but after I got my job and life problems kicked in, I suddenly realized that the only thing I require from a dream is a good night's sleep, so I have the strength to face problems during the day. A little bit sad, but true.

But during my lucid dreaming endeavors I realized the technique itself is incredibly powerful, especially for people who have a tendency to daydream. One easy technique used while learning is to build a habit of looking at your watch and asking yourself whether this is a dream. Doing it a few times a day will eventually create a habit, which in turn increases the chances of unintentionally doing it during an actual dream. Then the question "is this a dream?" will have a chance of reminding you that you are conscious while still inside the dream. It will be a lucid dream.

Maintaining the lucid state of a lucid dream is another topic. Sometimes, a few seconds after a lucid dream starts, we forget about the state and go right back to a normal, non-lucid dream. There are techniques for prolonging the state, but they require training (like everything, I guess).


But is a lucid dream merely a dream in which the dreamer is aware of being asleep and dreaming, or is it more than that? I thought it also included taking control and doing whatever you want, e.g. jumping off a building and flying wherever you want, because the laws of physics are suspended in dreams.

How to activate superuser mode in dreams?


Here's some knowledge from the time I was into lucid dreaming (6+ years ago):

Some people classify the lucidity of a dream as "non-lucid", "semi-lucid" and "lucid".

"Non-Lucid" is just a normal dream. "Semi-Lucid" is a dream in which you know that you're dreaming, but you don't have real control and are just going with the flow. In real lucid dreams you're fully aware that you're dreaming and can control yourself (movement & talking). Experiencing a lucid dream is just plain awesome in my experience. You know that the experience is not real, but it feels real.

Controlling the dream itself (surroundings, other people, flying, etc.) is something the dreamer has to learn, as it isn't as straightforward as just thinking/saying "I want to fly now"/"Let there be an orgy". If your brain doesn't expect something to happen, then it likely won't happen.

Flying is relatively easy because it really doesn't change your surroundings. I just imagine how I'm flying and it happens.

To change my surroundings I usually use the "spinning"/"blinking" trick: Either spin around or slowly blink with your eyes while imagining what you want to change. With some good luck it happens.

But dreams are unstable. Things can change quickly and without your control. It's a constant fight against your subconscious. A fight which often leads to waking up.

I have a recommendation for anyone still reading: Instead of flying or fucking/killing people (those were the most common themes on the forums back then), just talk with the people in your dream. You're not talking with real people, you're talking with yourself, but it doesn't feel that way. Talking with dream characters can be useful for introspection, as dreams are heavily influenced by your feelings.

Man, that all sounds so esoteric and non-scientific. Just my experience.


My experience has been that trying to do huge things to a dream setting either just ends the dream or it doesn't really work. For me, it was easy to get to a state of: okay, I'm lucid, now what? The only thing to do is go with the flow, surf rather than control. I imagine with lots of practice it can be better handled.


It’s pretty easy. Instead of looking at a watch a few times a day you carry around a controller and a few times a day you hit: up, up, down, down, left, right, left, right, B, A and Start, and then ask yourself if you’re dreaming. Then when you do it while dreaming: bam! Superpowers. :)


Apparently, while lucid dreaming the brain doesn't regenerate the same way it does during normal sleep, so it's like getting less sleep / interrupted sleep.


My daughter has narcolepsy and (according to her) always lucid dreams, so I hope this isn't completely true.


If you remember your dreams, it's a sign of bad sleep, so yes, it's probably true.


Where did you get this from?



I literally saw this 2 days ago while watching a new series on Netflix called 'Behind Her Eyes'. Maybe you will like it!


> As an implementation strategy, we do not care about memory leak because we really can't save that much memory by doing precise memory management. It is because most objects that are allocated during an execution of mold are needed until the very end of the program. I'm sure this is an odd memory management scheme (or the lack thereof), but this is what LLVM lld does too.

The fact that someone does it wrong doesn't mean that we should do it wrong as well ;(

I don't think I like this approach. It may work now, but will probably seriously limit the possibilities in the future.


If you think this approach is wrong, could you articulate the reasons why you think it is wrong? This is a classic memory management strategy... if a program is running as a batch program, all memory will be freed when the program exits. Any alternative memory management strategy would have to free and then reuse memory in order to show improvement. If it's a small amount of memory freed, or if the memory is unlikely to be reused, the benefits of freeing memory are smaller and it may actually slow the program down.

The fact that this program can successfully link Chrome means that we have fairly solid baseline performance metrics we can use for "big" programs. Chrome is just about the largest program you might ever need to link.


Yeah, this is also the strategy GCC and co use generally AFAIK. In a program like GCC where a single invocation will operate over a single file/unit, there's just not much benefit to trying to re-use data; if GCC or LLVM were closer to "build servers" with persistent state that compiled and linked objects on demand then it'd make sense, but in their current model, it's easier and safer to just keep data around.


Another classic example is Apache's memory pool. In Apache, you allocate memory from memory pools associated with the current request or connection. Memory pools are freed as a whole when a request or a connection completes. mold's memory management scheme is not very different from that if you think of the entire linker run as a single "session" that uses a single memory pool.
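
To make the analogy concrete, here is a loose JVM-flavoured sketch of the pool-per-session idea in Scala (hypothetical names and structure, not Apache's or mold's actual code): resources are registered with a pool tied to one "session" and are all released together when it ends.

    import scala.collection.mutable.ArrayBuffer

    // Hypothetical sketch: a pool tied to one "session" (a request, a
    // connection, or an entire linker run). Nothing is freed one by one;
    // everything registered here is released in a single sweep at the end.
    final class SessionPool extends AutoCloseable {
      private val resources = ArrayBuffer.empty[AutoCloseable]

      // Register a resource; it lives until the pool is closed.
      def register[A <: AutoCloseable](r: A): A = { resources += r; r }

      // Bulk release, analogous to destroying an Apache memory pool.
      override def close(): Unit = {
        resources.reverseIterator.foreach(_.close())
        resources.clear()
      }
    }

    object LinkSession {
      def main(args: Array[String]): Unit = {
        val pool = new SessionPool
        try {
          // Every file opened during the "link" outlives its caller...
          val input = pool.register(new java.io.FileInputStream(args(0)))
          // ... all the work happens here, with no individual cleanup ...
        } finally pool.close() // ... and everything is reclaimed at once.
      }
    }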


> If you think this approach is wrong, could you articulate the reasons why you think it is wrong?

Because if you later want to reuse parts of the code in a continuously running environment (e.g. a daemon), you will be surprised to find memory leaks all over the place (or worse, someone else will discover them by accident).

I don't have a problem with the end-of-process-releases-all-memory optimization. But I had the impression that the author uses let's-worry-about-leaks-later-because-OS-takes-care-of-it-for-free-(in-my-use-case).

The best approach would be to create a memory pool with fast allocation (e.g. TLAB allocation in Java, or the way computer games do it), in order to keep control over how and when the memory is freed.
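
For illustration, a minimal bump-allocator sketch in Scala (hypothetical; just the shape of the idea behind TLABs and game-style arenas): allocation is a pointer increment, and the whole region is reclaimed in O(1) by resetting it, so a daemon could reset between requests instead of leaking.

    // Hypothetical arena sketch: a fixed buffer with a bump pointer.
    // alloc() is just an offset increment; reset() "frees" everything at once.
    final class Arena(capacity: Int) {
      private val buffer = new Array[Byte](capacity)
      private var top = 0

      // Reserve `size` bytes and return the starting offset into the buffer.
      def alloc(size: Int): Int = {
        require(top + size <= capacity, "arena exhausted")
        val start = top
        top += size
        start
      }

      // Copy out a previously allocated region (for demonstration only).
      def slice(offset: Int, size: Int): Array[Byte] =
        buffer.slice(offset, offset + size)

      // O(1) bulk free: a long-running process can call this between
      // requests instead of tracking and freeing individual objects.
      def reset(): Unit = top = 0
    }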


I'm waiting for the day that chess will be banned in the USA, or some fork of chess will emerge, because obviously chess is racist: white always moves first.


https://www.wykop.pl is probably the biggest Polish site that gathers the best experts from most industries to comment on the newest political, religious, scientific, and engineering news from around the world. Comments posted by wykop.pl users are often cited on other sites.

...at least, that's what everyone on that site would like to think. In reality, it's just a clone of digg.com from before it started to suck ;)

(even the name is a reference to digg: 'wykop' means 'a dig site' or 'to dig')


This site is terrible and heavily biased, and almost all discussions that occur there are fundamentally flawed by the most basic logical fallacies.

It's one of those websites that make you lose faith in people, e.g. when you read the popular takes about Bill Gates and COVID-related stuff. It's pretty sad.

I bet that just reading the titles on the front page for three months would affect your happiness, because the majority of headlines there are sad stories or provocative pieces about politics, religion, and relations between men and women, with a heavy bias towards one side. Generally, a lot of junk.

It's sad that this is one of the biggest sites where "discussion" happens on the Polish internet.


> Scala just has implicits which can be used for method extension.

I don't think it's fair to put it like this. Scala's implicits can mean different things depending on where they're used. Scala 3 even splits 'implicit' into multiple keywords.

(kind of like 'static' in C++, I guess, only more complicated)


That's exactly what I said, no?

Scala has one feature (implicits) but it can be used ("mean") for different things.

Essentially, you can mark definitions as implicit and you can mark parameters as implicit. Yes, Scala 3 uses different keywords to make it easier to understand which is which, but both are still just the concept of things being implicit.

Think about it: one without the other is completely useless. If you cannot define implicit parameters, then marking any value as implicit will not have any effect. The other way around, too: you can mark your parameters as implicit as much as you want, but if you can't define implicit values, you will always be forced to pass all parameters manually.

Even implicit classes (extensions) are just syntactic sugar for regular methods that are marked implicit.
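
A minimal Scala 2 sketch of that point (names made up for illustration): an implicit definition, an implicit parameter, and an implicit class are all the same underlying mechanism.

    object ImplicitsDemo {
      // An implicit definition: a value the compiler is allowed to supply.
      implicit val defaultGreeting: String = "hello"

      // An implicit parameter: filled in from an implicit value in scope.
      // Without implicit definitions this marker would do nothing, and
      // vice versa; that is why it is one feature, not two.
      def greet(name: String)(implicit greeting: String): String =
        s"$greeting, $name"

      // An implicit class ("extension"): sugar for an implicit conversion
      // method that wraps the receiver in IntOps.
      implicit class IntOps(val n: Int) {
        def doubled: Int = n * 2
      }

      def main(args: Array[String]): Unit = {
        println(greet("world")) // the compiler inserts defaultGreeting
        println(3.doubled)      // rewritten to new IntOps(3).doubled
      }
    }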


I'm not a Scala programmer, so I don't know who is more right here, but _a1_ was saying that calling three different features by one name does not mean there's really one feature. Which is different from saying that one feature can be used in three different ways. The C++ static example was used because in that case the same keyword was used for several literally different features to avoid adding additional reserved words.


It's literally one feature - each of the different "ways" gets rewritten.

It's why Scala 2 and 3 are able to maintain pretty good interoperability.


Using scalac is not a standard use case.

The standard use case is to use sbt (well, or mill, since we're in lihaoyi's thread :D). sbt starts up slowly, but with bloop or sbtn (the native client of sbt), compilation is really fast (much faster than C++, for example):

    $ time sbtc compile                                                                          
    [info] entering *experimental* thin client - BEEP WHIRR
    [info] terminate the server with `shutdown`
    > compile
    [info] compiling 1 Scala source to /home/.../target/scala-2.13/classes ...
    [info] compile completed
    [success] Total time: 0 s, completed Feb 11, 2021 4:21:51 PM
    sbtc compile  0,08s user 0,02s system 21% cpu 0,492 total


Yeah, sbt 1.4 was a huge improvement. I wrote a lot of Scala several years ago and just got back into it this year, and the dev experience is much nicer now with the sbt server model.


> When I hit the start menu, it’s because I want to launch an application. I don’t need to see the rest of the desktop. So why is the Start menu by default only occupying a small portion of the screen, and wasting the remaining space?

Hilarious.

I mean, I fully agree with the author on that. But since the author is not tied to the Windows ecosystem, he doesn't know that a full-screen start menu actually happened, in Windows 8, and it was nearly boycotted by the Windows userbase precisely because it occupied the full screen. Users demanded a Windows 95-style start menu back, and MS had to redesign it.

Why it's so important for Windows users to have a Windows 95-style menu is beyond me.

There you have it :D


He does; he links to the following footnote: "Windows 8 was the best version of Windows. And that's just a fact."


You're right, I missed that.


I run many applications on my computer at once and I often find myself opening the start menu to launch an application while keeping an eye on another window in the background. The hierarchical organization and text-with-icons style of the classic start menu also feels less overwhelming to me than the flat organization and icons-with-text style of the Windows 8 menu.

But I also think these are minor issues and the backlash against the menu in Windows 8 was over-the-top. At the end of the day, I think the real reason people were mad is that it just looked very different from previous menus, which made casual users uncomfortable. Which is kind of funny seeing as it's very similar to the app menus used by macOS, iOS, and Android.


My complaint with the Windows 8 start menu wasn't that it took up the whole screen, it was that it took up the whole screen to show _the same number of items_ or fewer than the 95-style start menu. And half of them were ads.

A full-screen start menu with the same information density as the 95-style start menu, possibly with a pane for programs and a pane for explorer, is something I'd happily try out.


> Why it's so important for Windows users to have a Windows 95-style menu is beyond me.

Because there is no other mechanism to start an installed program, except maybe navigating with Windows Explorer and double-clicking the exe, or typing the full path in cmd.exe.


1) Add shortcuts to the Desktop [1] (since 95)

2) Win+R Run dialog [2] (since 95)

3) Pin to the Taskbar [3] (since 7)

4) Windows key and type [4] (since Vista)

5) PowerToys Run (new; optional addon from GitHub Microsoft/PowerToys; mentioned in article), and similar tools from third party vendors

[1] So many installers since '95 still do this by default. I've seen so many Windows users for whom that's how they launch everything: from a super-cluttered Desktop that constantly rearranges itself. It's partly why I turn off the Desktop entirely, as I personally have no interest in managing that cluttered mess.

[2] Not something I'd recommend today, but a lot of people have ingrained muscle memory going all the way back to '95. It has an interesting search heuristic and you don't always need to type a full path. Plus it has autocomplete when you do need to type a full path.

[3] I keep a lot of important things pinned. Pins are also great because they give you automatic global shortcuts for free. Win+{N} where N is between 1 and 9 (inc.) and is the number of the pin in taskbar order.

[4] Searches all installed apps, enter to launch. Arrow keys to navigate if multiple suggestions. Quick, fast, convenient.

Microsoft said that when building Windows 8, most of their telemetry showed "no one" actually used the Start Menu to launch apps; people either fell into bucket [1] or buckets [3 + 4] (me), with a few stodgy outliers in the [1 + 2] camp. The full-screen Start Menu matches the way most Desktop-heavy users already worked (hiding all windows until they could see the Desktop to launch their next app, whether via one of the minimize-all-windows shortcuts or the actual Win+D Desktop shortcut), and it helped cut down the auto-installed clutter for those of us in the [3 + 4] camp who didn't want to manage the Desktop as it had been since '95.

As a fun lesson in telemetry: because Vista's and 7's telemetry for app-launching mechanisms was opt-in, the people who actually used the Start Menu as a menu apparently mostly failed to ever opt in. The vocal anger of that very crowd at Windows 8 is a large part of why Windows 10 moved to a more opt-out telemetry model, to avoid the sort of assumptions that happened in Windows 8. (The irony shouldn't be lost that many of the same people who hated Windows 8 for ignoring their use cases are the same who hate telemetry in general. It shouldn't be that shocking that Microsoft doesn't cater to your use cases if they can't gather the telemetry to know that you exist. ¯\_(ツ)_/¯)


> It shouldn't be that shocking that Microsoft doesn't cater to your use cases if they can't gather telemetry to know that you exist.

User research isn't limited to, and existed before, telemetry. Telemetry, if used, should be an additional channel of user research, not an excuse to be lazy.


User Research isn't omniscient by any means and has the same opt-in biases. (More so, even, because most User Research wants to be done in lab environments and that raises the bar from "press check-box for passive data gathering" to "can physically get to lab for testing". Even the middle ground of "surveys" still has a time/attention/patience bar to hurdle that telemetry does not.)

Microsoft certainly used both Telemetry and lots of User Research in the Windows 8 development process, and clearly had many of the same blind spots in both. The point remains that opting into telemetry is still the lowest bar to hop as a user to getting your "voice" heard (as aggregate statistics).


hate speech?


Not speech, they don't speak


ape speech.

