Somebody save Kathy Sierra’s blog! https://headrush.typepad.com/ I’ll try to archive it. I love her work. But even if I save it, it should live on somewhere else.
Pinboard was a clone with a different business model: users actually paid for it.
Fast forward, and delicious died, only to be acquired by — you guessed it — Pinboard [1]. Because Pinboard was actually serving its paying customers, it just kept trucking along.
Yes, I did this at my startup. Fast forward a few years, and now the company has more Rust code than Python, and the majority of the company's IP is in Rust.
I suggest beginning with small, one-off things that don't have much impact. People, even developers, tend to shy away from things that aren't familiar. Introducing Rust in a small, low-risk way helps people get comfortable with it: building Rust projects, navigating the project structure, and reading the docs. I submit pull requests that get people to read Rust code, even if it's just to say "looks good". Their familiarity builds slowly over time, so they'll be less put off by seeing Rust in a larger, more impactful project down the road.
How do you boil a software developer? Slowly.
If they give Rust a chance and your team has a champion to guide them, they'll see its merits. I think a lot of people come to Rust for the performance, but that's not why they stay.
I am in the process of oxidizing some stuff at work with Rust. I too am starting from small pieces: things that I can incorporate and call from Python directly. I'm also relying a bit on codegen to blend the two languages and slowly remove all the Python code.
Yeah, this is just baffling. A team can be so averse to learning new tools, good ones too, that they would rather dump their time into rewriting. Instead of getting paid to level up their skills, they'd rather block forward movement of the company's goals to maintain the status quo.
That's fine. Safety with concurrency doesn't really exist in other languages; Rust is special in that it tries to provide safety with concurrency as well. I haven't seen any other language actually do this.
The reason is obvious: there's a high cost to this type of safety. Rust is hard to learn and use, and many times its safety forces users to organize code awkwardly.
And there's still the potential for race conditions: even though the memory is safe, you don't have full safety.
It's not safe when using goroutines to access shared mutable data (and most Go code does this). If you stick to message passing a.k.a. "share data by communicating" you don't run into memory unsafety. But this kind of design is more vulnerable to other concurrency issues, viz. race conditions and deadlocks.
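To make that concrete, here's a minimal Go sketch (not from any of the linked material) contrasting the two styles. The racy counter below mostly just loses updates rather than corrupting memory, but the same unsynchronized-sharing pattern applied to maps, slices, or interface values is where Go's memory unsafety shows up; the channel version keeps the data owned by a single goroutine:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // Shared mutable data: two goroutines increment a counter with
        // no synchronization. `go run -race` flags this, and increments
        // are routinely lost.
        var counter int
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 100000; j++ {
                    counter++ // data race
                }
            }()
        }
        wg.Wait()
        fmt.Println("racy counter:", counter) // usually != 200000

        // "Share data by communicating": one goroutine owns the counter,
        // the others send increments over a channel.
        incs := make(chan int)
        done := make(chan int)
        go func() {
            sum := 0
            for n := range incs {
                sum += n
            }
            done <- sum
        }()
        wg.Add(2)
        for i := 0; i < 2; i++ {
            go func() {
                defer wg.Done()
                for j := 0; j < 100000; j++ {
                    incs <- 1
                }
            }()
        }
        wg.Wait()
        close(incs)
        fmt.Println("channel counter:", <-done) // always 200000
    }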
Go is memory safe; that post doesn't mean anything in a real-life scenario.
Do you have a single example from the last 14 years of a memory safety exploit using the Go runtime? I'm talking about public, known exploits, not CTFs and the like.
Wow, that's super interesting. As you say, it's a contrived CTF example, but I'm pretty shocked that it's possible to read and write arbitrary process memory without importing any packages (especially unsafe, of course).
I'm also surprised that a fix has been theorized at least as far back as 2010[1], but not implemented. Is adding one layer of internal pointer redirection for interfaces, slices, and strings really that much of a performance concern?
Go was released in 2009 and I've never heard of any exploit or whatnot. By the way, this is known and by design; it's not new. It's all about the multi-word representation of interfaces.
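For anyone who hasn't seen it: an interface value in Go is stored as two machine words (a type/itab pointer plus a data pointer), and an unsynchronized write can be observed half-finished by another goroutine. A contrived sketch of the underlying race - this only demonstrates the torn write; turning it into the arbitrary read/write from the CTF takes considerably more work:

    package main

    import "fmt"

    // An interface value is two words: type metadata and a data pointer.
    // Concurrent, unsynchronized writes can be torn, so a reader may see
    // the type word of one value paired with the data word of another.

    type A struct{ x int }
    type B struct{ y float64 }

    func (a *A) Describe() string { return fmt.Sprint("A:", a.x) }
    func (b *B) Describe() string { return fmt.Sprint("B:", b.y) }

    type Describer interface{ Describe() string }

    func main() {
        var shared Describer = &A{}

        go func() {
            for {
                shared = &A{x: 1} // unsynchronized write
            }
        }()
        go func() {
            for {
                shared = &B{y: 2} // unsynchronized write
            }
        }()

        // With a torn read, A's method can end up running on B's data
        // (or vice versa); that type confusion is the root of the exploit.
        for i := 0; i < 1000000; i++ {
            _ = shared.Describe()
        }
    }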
I mean, if in 14 years there was nothing, that's proof it's not an issue.
Even the attacker acknowledges that it's not a threat.
"As said before, while a fun exercise it's pretty useless in the current Go threat mode"
* Many GC'd languages like Go, C#, and Java make it harder to leak memory, while in languages where reference counting is more prevalent (Python, Rust) it can be easier to leak memory due to circular references.
* Languages with VMs like C#/Java/Python may be easier to sandbox or execute securely, but since native code is often called into, that breaks the sandboxing.
* Formally-verified C code (like what aerospace manufacturers write) is safer than e.g. Rust.
* For maximum safety, sandboxing becomes important - so WASM begins to look appealing for systems that aren't safety-critical (unlike, say, aerospace), as it allows applying memory/CPU constraints in addition to restricting all system access.
In this case it's not memory unsafe. It is guaranteed to crash the program (or get caught). It's closer to a NullReferenceException than it is to reading from a null pointer in C. There's no memory exploitation you can pull off from this bug being in a Go program, but you could in a C program.
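To illustrate the "crash the program (or get caught)" part, a rough sketch: in Go a nil dereference surfaces as a run-time panic that a caller can recover from, rather than as undefined behavior.

    package main

    import "fmt"

    type T struct{ n int }

    func read(p *T) (val int, err error) {
        // A nil dereference is a run-time panic in Go, not undefined
        // behavior, so it can be intercepted with recover.
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        return p.n, nil
    }

    func main() {
        v, err := read(nil)
        fmt.Println(v, err)
        // 0 recovered: runtime error: invalid memory address or nil pointer dereference
    }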
It's only guaranteed because of the operating system's sandboxing.
> It's closer to a NullReferenceException than it is to reading from a null pointer in C.
No, it's exactly the same as a null pointer dereference in C, because it is literally reading from a null pointer in Go as well. In Java, the compiler inserts null checks before every single dereference and throws an exception for null references.
> There's no memory exploitation you can pull off from this bug being in a Go program, but you could in a C program
Provided the OS sends a SEGV signal for null pointer dereferences, I don't see there being a difference in security between C and Golang in this respect. It's a bigger problem when you're running without an operating system.
In a huge number of cases the null dereference is not an access to 0x0 but to some offset from it (i.e. accessing a struct member or array element that isn't the first one). Of course, in practice most of these offsets are below the limit under which nothing is ever mapped (on Linux, vm.mmap_min_addr, which seems to be 64k by default for me), but it's still very possible for such a dereference not to segfault in C. That should not be possible in Go/Java (if it is, it would almost certainly be considered a bug in the compiler/VM).
Why isn't it possible in Go? If you can use pointers to structures in both Go and C, and you can access the fields of a structure through a pointer in both, then I don't understand why reading a structure field through a null pointer wouldn't cause the dereference of an address like 0x8 in both languages.
Unbounded/large offsets are the critical part. The minimum unit at which memory protection can be set is one page (4096 bytes on x86), so the compiler can reasonably assume that offsets 0-4095 are always "safe" to dereference without a NULL check (in the sense that a SIGSEGV is guaranteed, which can then be turned into a NullPointerException in the SIGSEGV signal handler). For anything larger, or for array accesses, it adds an explicit NULL check before the dereference.
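A quick illustration from the Go side (just a sketch; whether the runtime relies on the hardware fault or an emitted nil check for a given access is the compiler's business, per the strategy described above): the field below sits about 1 MiB into the struct, far past anything a guard page at address 0 could cover, yet dereferencing it through a nil pointer still reliably panics.

    package main

    import "fmt"

    // `last` lives roughly 1 MiB into the struct, well past the first
    // page, so a hardware fault at address 0+offset isn't guaranteed;
    // Go still promises a nil-pointer panic for this access.
    type Big struct {
        padding [1 << 20]byte
        last    int
    }

    func main() {
        defer func() {
            fmt.Println("recovered:", recover())
        }()
        var p *Big
        fmt.Println(p.last) // panic: invalid memory address or nil pointer dereference
    }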
> In Java, the compiler inserts null checks before every single dereference and throws an exception for null references.
Doesn't OpenJDK install a SIGSEGV handler, and generate the exception from that on a null dereference?
(AFAIK, a lot of runtimes for GC'd languages that support thread-based parallelism do so anyway, because they can use mprotect to do a write barrier in hardware.)
> Doesn't OpenJDK install a SIGSEGV handler, and generate the exception from that on a null dereference?
I thought I had read that they explicitly don't do that, but I can't find it anymore. You may be right. I should have checked before saying that.
> (AFAIK, a lot of runtimes for GC'd languages that support thread-based parallelism do so anyway, because they can use mprotect to do a write barrier in hardware.)
That's true. I guess those implementations must do something more advanced than "throw a NullPointerException if the program segfaults," given their garbage collector runtimes also rely on that signal.
When this happens, it'll cause a deoptimization and recompilation of the code to include the null check, rather than relying on the signal handler repeatedly.