Hacker News

> FWIW I prefer `futures::lock::Mutex` on std, or `async_lock::Mutex` under no_std.

Async mutexes in Rust have so many footguns that I've started to consider them a code smell. See for example the one the Oxide project ran into [1]. IME there are relatively few cases where it makes sense to await a mutex asynchronously, and approximately none where it makes sense to hold a mutex across a yield point, which is exactly what drives a lot of people to async mutexes despite advice to the contrary [2]. They are essentially incompatible with structured concurrency, but Rust async in general really wants to be structured in order to play nicely with the borrow checker.

`shadow-rs` [3] bears mentioning as a prebuilt way to do some of the build info collection mentioned later in the post.

[1]: https://rfd.shared.oxide.computer/rfd/0609

[2]: https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh...

[3]: https://docs.rs/shadow-rs/latest/shadow_rs/




Author of the article here! I've actually come to agree with you since writing that article. I'm not a fan of mutexes in general and miss having things like TVars from my Haskell days. To shout out a deadlock-freedom project that I'm not involved in and haven't put in production, but would like to see more exploration of: https://crates.io/crates/happylock

Thanks for the article! As someone who writes a lot of Rust/JS/Wasm FFI it gave me some good food for thought :)

Yes! Mutexes are much nicer in Rust than a lot of languages, but they're still much too low-level for most use-cases. Ironically Lindsey Kuper was an early contributor to the Rust project and IIRC at roughly the same time started talking about LVars [1]. But we still ended up with mutexes as the primary concurrency mechanism in Rust.

[1]: https://dl.acm.org/doi/10.1145/2502323.2502326


I try to avoid tokio in its entirety. There are some embedded use cases with embassy that make sense to me, but I have never needed to write something that benefited from more threads than I had cores to give it. I don't deny those use cases exist, I just don't run into them. I typically spend more time computing than on I/O, but so many solid libraries have abandoned their non-async branches that I still have to use it more often than I'd like. I get this is a bit of a whine; I could fork those branches if I cared that much. But complaining is easier.

I think the dream is executor-independence. You shouldn't really need to care what executor you or your library consumer is using, and the Rust auto traits are designed so that you can in theory be generic over it. There are a few speed bumps that still make that harder than it really should be though.

I'm not sure what you mean by ‘more threads than I had cores’, though. Unless you tell it otherwise, Tokio will default to one thread per core on the machine.


When you are compute bound, threads are just better. Async shines when you are I/O bound and need to wait on a lot of I/O concurrently. I'm usually compute bound, and I've never needed to wait on more I/O connections than I could handle with threads. Typically all the input and output IP addresses are known in advance and in the Helm chart. And countable on one hand.

Oh, right, sure. In Rust the async code and async executor are decoupled. So it's your _executor_ that decides how/whether tasks are mapped to threads and all that jazz.

Meanwhile the async _code_ itself is just a new(ish), lower-level way of writing code that lets you peek under an abstraction. Traditional ‘blocking’ I/O pretends that I/O is an active, sequential process like a normal function call, and the OS provides that abstraction by in fact parking your thread until the event you're waiting on occurs. That's a pretty nice high-level abstraction in a lot of cases, but sometimes you want to take advantage of those wasted cycles. Async code is a bit more powerful and ‘closer to the metal’ in that it exposes to your code which operations are going to result in your code being suspended, and so gives you an opportunity to do something else while you wait.

Of course if you're not spending a lot of time doing I/O then the performance improvements probably aren't worth dropping the nice high-level abstraction — if you're barely doing I/O then it doesn't matter if it's not ‘really’ a function call! But even so async functions can provide a nice way of writing things that are kind of like function calls but might not return immediately. For example, request-response–style communication with other threads.


I agree. Async makes sense for Embassy and WASM. I'm skeptical that it really ever makes sense for performance, even if it is technically faster in some extreme cases.


