We do something similar with Nix (https://www.nixos.org). We have a CI server (Hydra) which creates a closure containing our software and all its dependencies, which we then upload to a network of AWS machines that are created with a few Python scripts.
This works fairly well, but from experience we do need to keep an SSH setup on the machines. Just last week we had a load spike on a production server caused by a software bug, triggered by a usage pattern we hadn't anticipated or covered in our tests. If we did not have SSH (and access to the CLI tools on that machine) we would not have been able to debug this unexpected problem. I guess what I'm saying is that as long as software has bugs and hardware has glitches, we'll sometimes need access to low-level tools to figure out the cause of these unexpected scenarios.
There isn't really much benefit to using chunked encoding for streaming video if you're not generating the video on the fly server-side.
If you are simply streaming a stored video, you can just get the file size, use that as the Content-Length, and send the video. If the content is in a streamable format, the client can read/play the data as it is received. Chunked encoding will in fact add a little overhead compared to sending the file as-is.
Chunked encoding is more apt for situations where you don't know beforehand how much data you are going to send.
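To make the overhead concrete, here's a minimal sketch of chunked transfer encoding as specified in HTTP/1.1: each chunk is framed by its hexadecimal length plus CRLF delimiters, and a zero-length chunk terminates the stream. (The function name and chunk size are my own, for illustration.)

```python
def chunked_encode(data: bytes, chunk_size: int = 8) -> bytes:
    # Frame each chunk as: hex-length CRLF payload CRLF.
    out = b""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    # A zero-length chunk marks the end of the body.
    out += b"0\r\n\r\n"
    return out

body = b"0123456789abcdef"          # 16 bytes of "video" data
framed = chunked_encode(body)       # 31 bytes on the wire
```

For a stored file of known size, a single Content-Length header replaces all of that per-chunk framing, which is why chunked encoding only pays off when the total length isn't known up front.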
1) One-time pads require a secure channel to transmit the key, which must be the same length as the message. It is not a practical cryptographic scheme; it's more of a theoretical framework.
2) Integer factorization is trivially in NP (the decision problem "n has a factor < x" has a succinct certificate: the factor itself).
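The NP membership argument can be sketched in a few lines: verifying a claimed factor takes one modulo operation and a comparison, i.e. polynomial time, even though *finding* the factor may be hard. (The function name and the numbers are my own illustration.)

```python
def verify_factor_certificate(n: int, x: int, factor: int) -> bool:
    # Decision problem: "n has a nontrivial factor < x".
    # The certificate is the factor itself; checking it is just
    # one modulo and two comparisons -- polynomial time.
    return 1 < factor < x and n % factor == 0

# 8051 = 83 * 97, so the certificate 83 verifies
# "8051 has a factor < 90", while 97 does not (it isn't < 90).
```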
A quantum channel provides protection against eavesdropping for random data, but does not provide security for specific data. A shared quantum channel can be used to easily produce a shared one-time key that only the two parties involved know, which can then be used to encrypt specific data over an unsecured channel.
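The second step of that scheme — encrypting specific data with the shared one-time key over an ordinary channel — is just a one-time pad, i.e. a byte-wise XOR. A minimal sketch (assuming both parties already hold the same random key, e.g. derived via QKD; here I just generate one locally to stand in for it):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with a key byte.
    # With a truly random key at least as long as the message,
    # used exactly once, this is information-theoretically secure.
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(32)   # stand-in for the QKD-derived shared key
msg = b"specific data"
ciphertext = otp_xor(msg, key)  # safe to send over an unsecured channel
plaintext = otp_xor(ciphertext, key)  # XOR is its own inverse
```

The key must never be reused: XOR-ing two ciphertexts made with the same key cancels the key out and leaks the XOR of the plaintexts.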
> Most require you to build libraries. This can be a turn off to anyone who wants to get up and running quickly - especially if you just want to try something out. This is especially true of exploratory TDD coding.
This is absolutely true, and especially painful/noticeable if you target multiple platforms. To work around this problem, I've started using the stand-alone version of the Boost Unit Test framework:
#include <boost/test/included/unit_test.hpp>
The problem with including that, like many other Boost header-only libraries, is that they wreak havoc on compile time :/.
Microsoft is an H.264 licensor, with one or more patents in the H.264/AVC patent pool. Having H.264 become the standard for publishing video on the Web will benefit them directly in terms of licensing fees.