
Another use case I had was embedding this into a graph heavy MediaWiki with convenient markup. That means the graphs are very easy to edit and will be updated after each edit with no backend magic required.


I found TradingView to be very good, e.g. https://www.tradingview.com/chart/?symbol=BITSTAMP:BTCUSD

They also have regular stock symbols.


That image next to "Alexa helps you at your desk" is 17.5 MB as of now, at a resolution of 5720 x 3240. Pretty impressive to see an image loading line by line like back in the day.

It will be interesting to see what WeWork ends up using this for.


Jesus, it's the same size when I view the page on my smartphone. It doesn't even look as good as you'd think for 17.5 MB.


I was wondering if they might be baiting Google into doing exactly that. If Google starts removing content critical of it from Google Docs, that's another big story.


Google should monetize Google Docs in situations like this, i.e. more than a dozen viewers, all with the same referrer URL.

Regarding the thesis: it seems to me that unless you have read the advertiser guidelines, your viewing is always going to be manipulated in ways you do not understand. The recommendations are really 'how best to monetize you'. So with music you may end up with mainstream artist recommendations, not because the songs are inherently popular, but because they can be monetized.

Therefore, the best way to explore the back catalogue of an artist is to upload your own video and then see which tracks by your favourite musicians you can set to it. So even within normal YouTube there is a 'better' search engine: better for you, but maybe not so monetizable, even though that is what the widget is about.


> Das ist nicht gute.

Should be "Das ist nicht gut" ("That is not good").


I don't know; personally I feel like the scale here is much too low to be relevant. Also I think not investing the time to tune sort and dist keys makes the comparison meaningless.

But maybe that just becomes meaningful at larger data sizes and maybe most people work with less data most of the time.


This sounds super interesting; a bunch of questions:

Did you just send multiple images concatenated into the output stream, including file headers? And Mosaic would actually replace the first image with the second, and so on? Was the CGI script referenced in an <img> tag?

What image format was this?


Probably JPEGs. This used to be how all streaming “webcams” worked until not that long ago.


I don't remember the details.

I think they were GIFs. From what I recall, I read the entire images as they were on disk and pushed them down the stream.

Someone mentioned https://en.wikipedia.org/wiki/MIME#Mixed-Replace and that is ringing a tiny bell.
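That mixed-replace mechanism can be sketched in a few lines. This is a hypothetical reconstruction, not the original script: the `frame` boundary name and the in-memory GIF frames are made-up illustration values; a real CGI script would write and flush each part as it reads the files off disk.

```python
def mixed_replace_response(frames, boundary=b"frame", content_type=b"image/gif"):
    """Build a multipart/x-mixed-replace byte stream from raw image frames.

    The browser renders each part as it arrives, replacing the previous
    one, which is what produces the animation effect described above.
    """
    parts = [b"Content-Type: multipart/x-mixed-replace; boundary=" + boundary + b"\r\n\r\n"]
    for data in frames:
        parts.append(b"--" + boundary + b"\r\n")
        parts.append(b"Content-Type: " + content_type + b"\r\n")
        parts.append(b"Content-Length: " + str(len(data)).encode() + b"\r\n\r\n")
        parts.append(data + b"\r\n")
    parts.append(b"--" + boundary + b"--\r\n")  # closing boundary ends the stream
    return b"".join(parts)

# Hypothetical two-frame "animation"; real frames would be files read from disk.
body = mixed_replace_response([b"GIF89a...frame1", b"GIF89a...frame2"])
```

Referencing such a script from an <img> tag was enough; no JavaScript or backend polling was involved.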

Honestly, it was 23 years ago and I've forgotten the exact details. I still remember the meeting, though: the feeling of dread, excitement, and ridiculousness of knocking out such a powerful machine (at the time) in order to serve animated smiley faces.

Nowadays, I knock out powerful machines doing slightly more useful things.


> and a non-constant factor that varies between 1 and 6 is small

Wouldn't the 6 in O(((n^1000)^n)^6) be a non-constant factor that makes a big difference in asymptotic performance though?


That's not a constant factor, it's an exponent. A constant factor stands in front of the whole term, like this: O(7*((n^1000)^n)^6), with 7 being the constant factor.


Sounds like the author did — http://www.lihaoyi.com/post/BenchmarkingScalaCollections.htm...

Notably, "Vector index access takes 4,260,000 ns for a vector with 1,048,576 elements instead of measured 0 ns for native arrays" works out to about +4 ns extra per element and hints that the whole data set still fits into L2 cache.

If the process ends up accessing more working memory than fits into the last-level cache (32 or 64 MB or so) and lookups aren't nicely bundled together, the overhead will approach about +80 ns per element access, or +0.08 seconds per million element accesses.

It does seem unlikely that every element access would end up causing a cache miss since I'd expect lookups to happen close together, but this can be significant for intense workloads such as OLAP.
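As a quick sanity check on the arithmetic above (the ~4 ns amortized cost is derived from the benchmark; the 80 ns miss penalty is the comment's assumption, not a measurement of any particular machine):

```python
# Amortized per-element cost from the benchmark figures quoted above.
elements = 1_048_576
total_ns = 4_260_000                      # measured total lookup time
per_element_ns = total_ns / elements      # works out to roughly 4 ns per lookup

# Assumed main-memory latency per cache miss, converted to seconds per
# million element accesses.
miss_penalty_ns = 80
seconds_per_million = miss_penalty_ns * 1_000_000 / 1_000_000_000  # 0.08 s
```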


Based on the numbers for lookup, it looks worse than that -- it seems to be linear (r2 = .9988).

    df <- data.frame(
      s=c(0, 1, 4, 16, 64, 256, 1024, 4096, 16192, 65536,
          262144, 1048576),
      t=c(0, 1, 5, 17, 104, 440, 1780, 8940, 38000,
          198000, 930000, 4260000))
    summary(lm(t ~ s + 0, data=df))
    ...
    Coefficients:
      Estimate Std. Error t value Pr(>|t|)
    s  4.02825    0.04161   96.82   <2e-16 ***

    Residual standard error: 45060 on 11 degrees of freedom
    Multiple R-squared:  0.9988,    Adjusted R-squared:  0.9987
    F-statistic:  9374 on 1 and 11 DF,  p-value: < 2.2e-16
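The same zero-intercept fit can be cross-checked without R. For a model with no intercept, the least-squares slope is simply sum(s*t) / sum(s*s); a sketch using the data above:

```python
# Re-derive the slope of the zero-intercept linear fit t ~ s + 0.
s = [0, 1, 4, 16, 64, 256, 1024, 4096, 16192, 65536, 262144, 1048576]
t = [0, 1, 5, 17, 104, 440, 1780, 8940, 38000, 198000, 930000, 4260000]

# Closed-form least-squares slope through the origin.
slope = sum(si * ti for si, ti in zip(s, t)) / sum(si * si for si in s)
print(round(slope, 5))  # matches R's estimate of 4.02825
```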


That's because the program is looking up every item. It's _right there in the text_:

> Note that this time is measuring the time taken to look up every element in the collection, rather than just looking up a single element.

http://www.lihaoyi.com/post/BenchmarkingScalaCollections.htm...


Shame on me for not looking more closely!

So when you look at the per-operation cost (t/s in my model), what emerges is indeed logarithmic (r2 ~ .96 for lm(t/s ~ s + log(s))). Bounded, for sure, but if you walk in with a vector-indexing-bound but array-size-independent algorithm expecting that the performance for 1e3 elements will be representative of the behavior for 1e6, you're in for a surprise.
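For illustration, the per-operation cost can be pulled straight out of the same benchmark data (times assumed to be in ns; the s=0 row is dropped to avoid dividing by zero):

```python
# Per-operation lookup cost t/s for each collection size s.
s = [1, 4, 16, 64, 256, 1024, 4096, 16192, 65536, 262144, 1048576]
t = [1, 5, 17, 104, 440, 1780, 8940, 38000, 198000, 930000, 4260000]
per_op = [ti / si for si, ti in zip(s, t)]

# The per-lookup cost grows with s, e.g. from ~1.7 ns at s=1024 to
# ~4.1 ns at s=1048576, rather than staying flat.
```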


This is why I've never liked stating that something has "run-time of O(log(n))" since that's rarely true — the assumption for that is that all machine instructions take pretty much the same time, which is not the case. CPU instructions involving cache misses are multiple orders of magnitude more expensive than others.

I think it makes much more sense to talk about concrete operations (or cache misses) instead. Sounds like their implementation has O(log(n)) cache misses.

