The Day Live Web Video Streaming Failed Us (techcrunch.com)
15 points by pclark on Jan 21, 2009 | hide | past | favorite | 30 comments


This story is exactly why TechCrunch is such a joke. You can't make giant, sweeping statements like: "The Day Live Web Video Streaming Failed Us" and then, in the second sentence of your story, deliver a bunch of incredibly impressive stats about how successful the various streams were.

I like how the last paragraph reveals the true point of this post though. It's just a veiled advertisement for another lame P2P video service.

How does anyone take these people seriously? It's kind of fun to watch them flop around making mountains out of molehills, but anyone who considers these jokers to be a legitimate news source should rethink how they get their information.


But it did fail. For those 21 million people, the video stopped or it kicked them out or there were audio problems.

I am not sure why the article plugged some obscure video streaming software. CNN.com already does peer-to-peer video transfer via a Flash plugin that it forces you to download.


I haven't seen anything that says all 21 million people had problems. All I see is another sensationalistic headline from TechCrunch.

There is a plug at the end because that's how blogs make money: by selling links and paid mentions.


They probably didn't make it up. I watched on Hulu and it had real problems for me, too. And CNN was impossible. It's possible most Internet users have never even heard of Hulu, and they were the best case.


I tend to agree. TechCrunch writes whatever it needs to in order to hit its pageview targets.

The best part is that this story is syndicated onto the Washington Post's tech page. That version of it is now one of the top stories on Google News.

Out of all the stories being published today, this particular one has been deemed one of the most important on the internet, due to Arrington being such a smooth operator.


Flopping around making mountains out of molehills is what gets attention from short-attention-span crowds.

Is it just me, or has the HN crowd's attention span noticeably shortened since the recent big bump in visitors pg reported?


The big challenge for online video is staying up in a crisis. I remember on 9/11 the web failed and most of us resorted to TV & radio.


Interesting that both of those mediums are broadcast/multicast only, not unicast.

All of these video streams (with maybe the exception of CNN's P2P plugin) are unicast TCP/IP data streams. To really survive a crisis, ISPs need to implement real multicast correctly, so that one server can broadcast the video feed without a separate copy of the feed going out to every individual client.
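
To make the unicast/multicast distinction concrete, here's a rough sketch in Python of a multicast receiver (the group address and port are made up, not anything these sites actually use): the client joins a group via IGMP and the network delivers one copy of the stream per interested subnet, instead of the server pushing a full copy down a private TCP connection for every viewer.

    import socket
    import struct

    GROUP = "239.1.1.1"   # hypothetical group in the administratively scoped range
    PORT = 5004           # hypothetical port

    # One UDP socket, bound to the stream's port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group makes the kernel send an IGMP membership report, so the
    # network delivers one copy of the stream to this subnet rather than one
    # full copy of the feed per viewer.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(2048)
        # hand `data` to the video decoder here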

Edit: Sorry, looks like this is already being discussed down thread.


How many times must Mark Cuban say that the economics of content delivery on the internet are fucked, before people listen?

http://blogmaverick.com/2008/05/04/the-ala-carting-of-video-... http://www.google.com/search?ie=UTF-8&q=economics%20vide...


The bulk of Cuban's point here is simply that online video currently has a lower ad load (2 minutes) than network TV (8 minutes), which is a huge reduction in volume. But it doesn't acknowledge that those 2 minutes:

* can be targeted much better than existing TV ads

* have a drastically lower cost of entry for advertisers, and can potentially support a much larger ecosystem of advertisers

* support fine-grained stats and analytics about viewership and popularity, rather than TV's archaic system of up-fronts, sweeps weeks, and Nielsen ratings.

It seems likely that Internet video is going to seriously fuck up the existing networks. But the question is whether it's going to be creative destruction. There are lots of credible models that don't require middlemen to sell us air.


That's all true for the article I linked directly, but the problems are greater than just the lower ad load.

The value of a la carte viewers is lower than the value of scheduled viewers[1] and more importantly, the internet is not designed to assure successful video delivery[2].

He even proposes a mechanism[3] for fixing the economics of video distribution: interposing the cable companies(!), for whom a marginal viewer has no cost, between content providers and customers.

I'm truly not a Cuban fanboy, but I think he's got a point on this one.

[1]: http://blogmaverick.com/2007/07/09/metcalfes-law-and-video/

[2]: http://blogmaverick.com/2008/03/28/internet-video-vs-digital...

[3]: http://blogmaverick.com/2008/07/14/the-way-to-save-internet-...



The NYTimes website held up great. I'm on an old school T1 connection and I only experienced a few pauses.

Also, a bunch of people were in the conference room watching it on a TV, and at some points I could hear that my stream was actually running ahead of the TV broadcast.


Is 'more edge servers' the only general solution to this problem?

Or is there another technology that could be really helpful for this type of event?


There are some pretty interesting innovations coming from deep packet inspection providers. In one case they're caching the data that users are serving via BitTorrent and re-serving it to the second and subsequent requesters, basically giving users credit for sharing without saturating their upstreams and causing network problems.

It wouldn't surprise me if these products can do the same thing for downstream traffic as well.
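
As a toy sketch of the caching idea (purely illustrative, with made-up helper names, not any particular DPI vendor's design): the middlebox only has to key pieces by their hash and answer repeat requests from its own cache.

    import hashlib

    piece_cache = {}

    def store_piece(piece_bytes):
        # Key each piece by its SHA-1 hash (the same hash BitTorrent uses to
        # verify pieces), so identical content maps to one cache entry.
        key = hashlib.sha1(piece_bytes).hexdigest()
        piece_cache.setdefault(key, piece_bytes)
        return key

    def serve_piece(piece_hash):
        # The second and later requesters get the cached copy, so the bytes
        # never have to cross the original uploader's upstream link again.
        return piece_cache.get(piece_hash)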

Everything old is new again; it's like 1990s proxy servers brought into the 2000s-2010s.


In this case, the "old" stuff you're talking about never really got "old". There's fundamentally not much of a difference between a proxy server and an edge overlay network fed with traffic redirection; in both cases, you're getting content from a middlebox.


Sure, conceptually. I didn't know of any big ISPs using proxies in the last 5-7 years, though, until the Comcast DPI fiasco happened.

I could well be out of the loop and not aware of any huge Squid or comparable installations, but I believed them to be archaic until recently because of how much faster commercial/service-provider WAN interfaces (DS-x, OC-x) were growing relative to consumer interfaces (DSL, cable). Of course, FTTH (large bandwidth to each node) and wireless data (shared upstream + carrier optimizations for signal quality instead of bandwidth + physics) may turn this on its head.


Multicast.


You think so? Can you explain more? I'm not looking for the theory of it; rather, I'm interested in working models where multicast is being used to deliver online content.


Multicast does not work on the Internet today. But it could work if ISPs invested some effort.


No, multicast cannot work on the Internet even if the ISPs invest more effort. There remain fundamentally unresolved problems in Deering-model (IP-level) multicast.

The future of one-to-many content delivery is in edge delivery networks like Akamai. Maybe they'll use a (drastically simplified) multicast model internally, but that's an implementation detail.


What incentives are preventing the ISPs from investing the effort?


AFAIK, turning on multicast would create many additional RIB/FIB entries in routers, possibly running routers out of RAM and requiring upgrades.

Also, because multicast duplicates packets, it could increase bandwidth usage in ways that are not easy to bill for, potentially increasing costs without increasing revenue. I don't think this is a real problem, but some people have cited it.


Think about this for a second.

IP addresses (which we're running out of) address hosts. There are hundreds of millions of active full-time hosts.

Multicast addresses address content. How much more content is there than hosts? Do you really think it's feasible to give an IP address to every popular piece of content? YouTube allegedly gets almost 100,000 new videos per day.

The fact is, multicast hasn't been a success because it's simply not a good fit for the routing layer of the Internet. But edge overlay networks can build arbitrarily complex and interesting multicast service models on top of the Unicast Internet, and those overlays have been huge successes.
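
As a rough sketch of what that overlay fan-out amounts to (made-up addresses and a hard-coded subscriber list, just to show the shape): an edge relay takes a single unicast copy of the stream and replicates it to its downstream clients entirely in the application layer.

    import socket

    UPSTREAM_PORT = 6000                                  # hypothetical ingest port
    clients = [("10.0.0.2", 6001), ("10.0.0.3", 6001)]    # hypothetical subscribers

    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("", UPSTREAM_PORT))
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        packet, _ = recv_sock.recvfrom(2048)
        for addr in clients:
            # Replication happens here at the edge relay, over ordinary unicast,
            # instead of in core routers holding per-group state.
            send_sock.sendto(packet, addr)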


I think SSM (source-specific multicast) solves that problem.


No, SSM reduces an explosively intractable routing problem to merely an intractable routing problem, by guaranteeing that video streams will only have a single sender (which is not even a win when you're competing with edge overlays).

You still need to have every upstream router maintain awareness of every piece of content being viewed by every downstream user.



Akamai --- or, more likely, Comcast and AT&T --- could accomplish the same thing without detonating your routing tables, just by adding servers and indirecting content with a traffic manager. Cringely is nuts.


P2P, like Joost, but live.


In many cases P2P streaming would cause more congestion (especially upstream) than edge servers. However, P2P is cheaper than edge servers, especially for extremely bursty workloads like the inauguration or the Victoria's Secret fashion show.



