Hacker News
Copland 2010 revisited: Apple's language and API future (arstechnica.com)
55 points by ZeroGravitas on June 16, 2010 | hide | past | favorite | 39 comments


I think John's right to wonder what's next for Obj-C, as much as I like using the whole Obj-C/Cocoa (Touch) environment.

There's just no easy way forward out of the swamps of C-level memory management that doesn't break the world in some way. Even turning on GC (as we have under OS X) isn't a panacea: there are plenty of Core Foundation-style calls that don't participate automatically in GC (you have to manually connect each such allocation to the collector, e.g. with CFMakeCollectable). And with "unmanaged" pointers at the C/Obj-C/C++ level, you'll always have to step gingerly around GC.

And he's right to think ahead 5-10 years and realize that we're probably not going to be doing manual memory management then.

So what's the way over the chasm? I don't think anyone really knows, even at Apple.

Perhaps (pure speculation) the MacRuby efforts are part of a back-up plan to see if a ("managed" by definition) dynamic language could help bridge the gap without breaking the world. The MacRuby compiler claims performance on a par with Obj-C or better (using LLVM).

It'll be interesting to see how this plays out.


Good point, but I imagine that a better way forward would be to go to a dynamic language that can be optimized really well. Lua or JavaScript would be fantastic choices. Ruby is a great language, and one of my favourites to program in, but it just isn't performant enough for low-level work.


The MacRuby folks would claim otherwise.

Yes, I'd also love to see JS become "the next step" (since it seems to be becoming the "machine language" of the future) but it would probably have to be augmented like Objective-J to make Cocoa (Touch) development plausible.


The MacRuby folk would be wrong.

I actually hadn't looked too closely at MacRuby. It is quite an achievement, but compared to (particularly) Lua it's just not fast enough. Compare that to LuaJIT and the differences are remarkable. If MacRuby could be made as fast as Lua then I'd be all for it, I just don't think it's possible given the language semantics.

I guess what I'm saying is that the idea of a dynamic language (Lua, Ruby, JavaScript, etc.) being the default systems language on iOS and Mac OS would be amazing; I just don't like Ruby's chances when paired against the other two.


I actually hadn't looked too closely at MacRuby.

Might I suggest that you do so?

...but compared to (particularly) Lua it's just not fast enough.

I beg to differ; here's why: http://christopherroach.com/2010/01/21/ruby-fibonacci-shooto...

I just don't like Ruby's chances when paired against the other two.

I do. JS might give it a run, with WebKit also being Apple-sponsored. But, there's no Apple-sponsored Lua implementation. And JS from WebKit doesn't have any sort of (easy to use) way to access Cocoa, like MacRuby does (with HotCocoa).


Sigh, not that nonsense comparison again.

Apart from using suboptimal Lua code (the author obviously doesn't know about the Lua keyword 'local'), LuaJIT beta2 didn't compile recursion at all. It runs more than 4x faster with the current version and beats MacRuby easily.

Oh, and comparing language implementations based on the speed of a recursive Fibonacci number generator has exactly zero real-world relevance. Try again with a more realistic mix of benchmarks or something domain-specific like SciMark (LuaJIT is only 30% slower than GCC on this one).
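For reference, the micro-benchmark being argued about here is essentially just the naive doubly-recursive Fibonacci. A minimal Ruby sketch of the same shape (my own illustration, not code from any of the linked posts):

```ruby
# Naive doubly-recursive Fibonacci, the whole "benchmark".
# It exercises little besides method-call dispatch and small-integer
# arithmetic, which is why it says almost nothing about how a VM
# behaves on realistic application workloads.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

puts fib(20)  # => 6765
```

Timing this mostly ranks implementations by raw call overhead, not by their GC, string, or I/O performance.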

Judging from http://antoniocangiano.com/2010/05/16/benchmarking-macruby-0... MacRuby plays in the same league as the other Ruby implementations. And http://shootout.alioth.debian.org/u32/benchmark.php?test=all... tells you where that league can be found, relative to LuaJIT. Hint: scroll to the bottom. :-)


So there's one Fibonacci sequence benchmark that's faster? Come on.

The issue with Ruby is that the things that make it so nice (all of the great object model and meta-programming additions) also bring a certain level of overhead. If it was such a simple task to optimize then the MagLev guys would have produced a kick ass speedy Ruby years ago, based on all of the things that they know about Smalltalk VMs. That said, if Apple has the resources to make MacRuby as fast as Objective C then holy crap; I'd be on that like white on rice.
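As a concrete illustration of that overhead (a toy class of my own, not anything from MagLev or MacRuby): because any Ruby call can be intercepted with method_missing, and any class can be reopened mid-run, a VM can rarely prove what a call site will do and so can rarely bind or inline it statically.

```ruby
# A forwarding proxy: every unknown message is resolved dynamically
# at call time, so the VM cannot statically bind the call target.
class Proxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &blk)
    @target.send(name, *args, &blk)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private)
  end
end

proxy = Proxy.new("hello")
puts proxy.upcase     # => "HELLO", resolved only when the call happens

# Classes stay open: a method can be added to String mid-run,
# invalidating any assumptions a JIT made about String call sites.
String.class_eval do
  def shout
    "#{upcase}!"
  end
end
puts "hello".shout    # => "HELLO!"
```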

I don't believe for a moment that Apple would ever use Lua as their systems language of choice; more that I personally thought it would be a great pick. As much as I love Ruby, I seriously doubt that Apple would ever be able to squeeze the required performance out of it; the language itself is the problem, not the implementation. Look at all of the work that's gone into producing fast Ruby VMs over the last several years. Really, there's no low-hanging fruit left, and Ruby just doesn't make sense in this context.


>> Here's why <<

And when we follow the link to Peter Cooper's comment...

"I decided to give it a go with MacRuby in its regular interpreted mode (which performed well against 1.9.1 in the fib test) and binary-trees gobbled up 3GB of RAM just in the first minute so I cancelled..

pidigits came in at 41.4 seconds on MacRuby (nightly) versus 30.1 seconds for 1.9.1.

On a reduced version of the fannkuch benchmark (using an input of 10 - otherwise it takes a lonnng time to run), MacRuby was 26.038 seconds, 1.9.1 was 24.277 seconds.

So it seems MacRuby being faster than 1.9.1 is hardly a wash at this point, though it shows great potential to eventually beat the 1.9 branch. "
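For context, binary-trees (the test that gobbled 3GB above) is essentially a GC stress test: it allocates enormous numbers of short-lived nodes. A miniature Ruby sketch of its kernel (greatly reduced from the shootout version, with a small depth):

```ruby
# Miniature binary-trees kernel: build a complete tree of short-lived
# nodes, then walk it. At shootout depths this allocates millions of
# objects, so a collector that frees them too lazily balloons in RAM.
Node = Struct.new(:left, :right)

def build(depth)
  return Node.new(nil, nil) if depth.zero?
  Node.new(build(depth - 1), build(depth - 1))
end

def count(node)
  return 1 if node.left.nil?
  1 + count(node.left) + count(node.right)
end

puts count(build(10))  # => 2047 nodes (2^11 - 1)
```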


Here's my question: how many popular, big-name consumer applications are written in C# on Windows?


I don't necessarily think that's a fair question. If I were to start a big-name consumer app today for Windows, I'd choose C#. But most of those types of apps have a long history that either predates .Net or predates .Net being all that great. Because of this, even a lot of Microsoft's apps are still written in C++. I'm pretty sure that Microsoft would completely rewrite say Office in .Net if it could do so without losing decades' worth of code.


That's a good question. Lots of corporate intranet apps are written in C#, but it's hard to tell what consumer apps are, since it's not something that companies advertise.

For what it's worth, we do know, however, that .NET technology is an important component of all games made with Unity.


Unity is cross-platform (Mac, Windows, Wii, iOS) though, and is really built on Mono and not Microsoft's implementation. So I don't think that counts, since Ars is arguing that .NET is competing with Obj-C and Cocoa as the native language/API of choice for desktop apps.


I'm pretty sure it counts... Ars is arguing that Objective-C will be left behind by a higher-level, managed language, which is precisely what is happening with Unity, regardless of whether Microsoft implemented it or what platform it runs on.


They are arguing that .NET is already that language for Windows, which I don't think is true.


Indeed. I was surprised by this statement in the article: "Contrast this with the most prominent competing desktop platform, the Microsoft .NET framework and C# language on Windows, where memory-managed code is the default and everything else is considered risky, literally being denoted with the "unsafe" keyword in the source code."

I've been doing development with .NET/C# for the first time for the last six months and I was surprised to see that the default mode for development is unmanaged. Do any big apps actually use managed mode?


  I was surprised to see that the default mode for development is unmanaged.
I find myself wondering what you mean by "default mode", as during the last 9 solid years of .net development work I have found it to very much be the case that managed is the default.

Can you clarify your statement?


Yes, I wasn't at all clear. What I meant was that sitting down as a newbie to Visual Studio, I was surprised at how many of the project types created by doing "File->New Project..." resulted in unmanaged .EXE targets. But I could have been misinterpreting that.


That probably means an executable rather than an assembly, but the underlying execution platform is still the .NET virtual machine, and all that goes with the VM, including managed memory.

I could be mistaken, of course; I'm not an expert at .NET yet.


When you first run Visual Studio you specify what default environment you'd prefer... C++, .NET, etc. Perhaps you chose C++?


On the desktop, Java is the obvious missing elephant in the room. Java was popular, the non-GUI parts of Java were fast enough, but there were no good GUI libraries. Java continues to be anathema on the desktop because of the number of crappy apps written with Sun's officially blessed UI library, Swing. It didn't have to be that way. I've written a pretty snappy app using the Eclipse platform, which is based on SWT, which uses platform-specific code. In retrospect it's clear that Sun's write-once, run-anywhere ideal, which was Java's biggest strength on the server, was a disaster on the desktop. Sun should have provided Java bindings to common platform UI libraries and encouraged their use. If they had done that, Java would have been a much bigger player on the desktop, and GUI development would not have lagged so far behind in adoption of modern languages.


GCD, Clang, LLVM, and GC are all huge advancements at the basement level.

Apple's language and APIs weren't the problem in the days of Copland. It was the kernel/Core OS.

I'm not sure the sky-is-falling case can be made at this time... to do so you'd have to first argue that Unix is dead or on its way out.


I couldn't read this without thinking about MacRuby.

It's hard for me to believe that he hasn't heard of MacRuby, but if he had, how could he not mention it in the article?

MacRuby could be a great answer to a lot of the issues he brings up.


Scripting languages on any platform are a great answer. PyQt is even a good answer on Linux. They don't establish an "expected" standard, though. If you're developing on Windows, you need a pretty good excuse to use "unsafe" code. On Linux and OSX, people use manual memory management without realizing they've shouldered an unnecessary burden. It's the default.


MacRuby isn't a scripting language, though. Or, it doesn't have to be. It can get compiled down to machine code if you're making an application in it.


I don't understand why mobile devices have so little RAM when they have fixed storage in the tens of gigabytes that is effectively made of memory chips...?

What is the difference between the two kinds of memory, and why isn't more RAM added? Is it a cost problem? a size problem? both? Or is the CPU not capable of addressing more RAM...?


Price for performance and quality. RAM is higher-performance memory that does not degrade over time and thus costs a lot more. Flash memory is slower and has a limited number of writes per bit. I think that has something to do with the decisions made.


Still, I find it hard to fathom why the iPad only has 256MB of RAM, for example -- the cost difference between 256 and 512 MB of RAM is very small. More RAM will consume marginally more power, but that doesn't seem a convincing reason either.


I hadn't considered that before. You would figure that it would be pretty easy to add virtual memory to iOS.


Indeed, all memory management in iOS is virtual; they just don't use a pagefile on any iOS devices. If the pager's not built in already and just turned off, I'm sure it'd be a very simple thing to add.


But then it would have to swap, the user experience would suffer unpredictably, Steve Jobs would strangle the engineer responsible and the feature would never see the light of day.


"Virtual memory" does not imply "it would have to swap"; indeed, the iPhone does have virtual memory, and each application has its own virtual address space.


As others have pointed out, iOS does have virtual memory. As to why it doesn't use a swap file, probably a reason is that flash memory has a limited number of write cycles; probably way more than you could ever use under normal usage, but with paging stuff to and from flash all the time you might reach the limits much faster.


That may be a factor, but I think it's much more likely to be related to interface fluidity. People swap to SSDs on desktops/laptops without significant issue.

I believe Android uses a similar scheme: virtual memory but no pagefile. Is there a way to turn on paging on Android?


How about the fact that Objective-C gets a lot of its juice from Smalltalk? He seems to have a real lack of experience / understanding of what Objective-C actually is and how it has evolved in features and use (pre-Apple to now).

It just seems like a really bad article for Ars Technica. This line in particular makes no sense "Nevertheless, Mac developers and users are not panicking like they did in the Copland era about memory protection and preemptive multitasking.". Garbage Collection = Memory Protection ... what the?


I wouldn't question Siracusa's technical chops, he's definitely earned them. He's intimately familiar with Objective-C and its history.

As for your second claim, I really don't follow. Memory protection has nothing to do with garbage collection in that quote. He's talking about protected/per process address spaces.


I think protomyth is saying that Siracusa's mistake is in assuming that GC is really that important... the "classic" Mac experience was considerably and noticeably worse for not having protected memory. Many even "average" users understood that their Mac didn't have protected memory and so application crashes could often bring down the whole system, thus they clamored for protected memory.

There's absolutely no such outcry, though, among Mac OS X or iOS users for widespread use of garbage collection, and why would there be? In general, Mac/iOS users think of their apps as being more pleasant to use than those on other systems already. I think Siracusa's mistake is in assuming that just because a managed system is inevitable (how much retain/release do you think will be done in 25 years? obviously nearly none) that this means that the benefits of GC are so great as to be a make-or-break... in actuality, the benefit of GC in practice seems to be more on the order of having a nice framework or two built-in.


There are entire categories of applications in the app store that are plagued by crashes. Check out the Netflix applications; I just went to the app store to buy one, and they all have 2.5-3 stars. Check the reviews and you see a ton of one star reviews saying "Crashes constantly," "Crashes," "Useless," "Crashes every time you do X." These are popular commercial apps in what I presume is a pretty popular category of apps. It wasn't the first time I searched for an iPhone app and found a bunch of $2 and $3 paid apps with tons of stability complaints. Actually I felt quite lucky to find one lone Netflix app with a 4-star rating. No point in comparing features or reviews; I'll take the one that doesn't crash constantly.


Yeah, I wrote way too fast and don't like to edit after someone has already commented. I just don't see that GC is as important now as protected memory was in the original OS. It just struck me as so out of touch with how memory is handled by Cocoa programmers and the evolution of the platform. I like the new GC, but it is not a quantum leap above the retain/release rules, particularly with the advent of generated getters and setters. I think he is checking the "spec sheet" and not getting the whole story.


To understand "Garbage Collection = Memory Protection" you'd have to go back and read the original Avoiding Copland 2010 article where he makes this argument: http://arstechnica.com/staff/fatbits/2005/09/1372.ars

(keep in mind that was written in 2005)



