HVM is more performant than PV in most situations with modern virtualization technology.
PV drivers on HVM have been a thing for a while now, so your IO and network go through PV even with an HVM instance. SRIOV/"Enhanced Networking" is even better than the PV networking drivers, so HVM has another win there if you have NICs that support it (the larger Amazon instances do).
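If you want to check whether Enhanced Networking is actually enabled on a given instance, something like this works against the EC2 API (a rough sketch of mine, not from the parent; boto3 and the instance ID are assumptions/placeholders):

    # Sketch: ask EC2 whether SR-IOV "Enhanced Networking" is enabled
    # for an instance. Assumes boto3 is installed and credentials are set up.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_attribute(
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Attribute="sriovNetSupport",
    )
    # A value of "simple" means enhanced networking is enabled.
    print(resp.get("SriovNetSupport", {}).get("Value", "not enabled"))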
PV is significantly slower at system calls than HVM on 64-bit hardware. The x86_64 spec effectively removed two of the four CPU protection rings, one of which was where the PV guest kernel lived on 32-bit, below the guest's userspace, allowing it to 'intercept' system calls directly. Now that it shares a ring with the guest's userspace it can't do that, and you're left with roughly double the context switches you had before. The virtualization extensions from Intel and AMD let HVM bypass this and skip the extra context switch. Other hardware advantages like EPT also give HVM an edge when it comes to memory-related performance.
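If you want to see the syscall overhead for yourself, a crude way is to time a tight loop of raw getpid() syscalls on a PV guest and on an HVM guest and compare. This is my own sketch, not a rigorous benchmark; Python's interpreter overhead will swamp a lot of the difference, so a C loop is a cleaner measurement:

    # Crude syscall-overhead microbenchmark (illustrative only).
    # Calls syscall(2) directly via ctypes so libc caching of getpid()
    # doesn't hide the trap into the kernel.
    import ctypes
    import time

    libc = ctypes.CDLL(None, use_errno=True)
    SYS_getpid = 39  # x86_64 syscall number for getpid

    N = 1_000_000
    start = time.perf_counter()
    for _ in range(N):
        libc.syscall(SYS_getpid)
    elapsed = time.perf_counter() - start
    print(f"{N} getpid() calls in {elapsed:.3f}s "
          f"({elapsed / N * 1e9:.0f} ns/call)")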
About the only area where PV is still better from a performance standpoint is interrupts/timers.
Amazon rebooted lots of PV guests. Presumably they colocate HVM and PV guests on the same box. If there were any HVM guests on the box, then there could be the possibility of an attack. (I guess they could forcibly kick off the HVM guests, but that wouldn't be very nice.)
HVM, at least in the past, had a lot more code that the guest DomU interacts with compared to fully PV guests. This has security implications.
Now, my knowledge of HVM is a few years... or more like half a decade out of date. For example, I don't even know how to force an HVM guest to use only PV drivers (which would solve 90% of the problem), and I know that more and more of this has moved into hardware, so it's possible that what was true five years ago is not true now. But... yeah, I don't let untrusted users on HVM guests, for the same reason I don't let untrusted users use pygrub or load untrusted kernels directly.
HVM guests at Amazon will default to using PV drivers for IO and networking. (Unless you're using SRIOV/"Enhanced Networking", which does not use the PV drivers.)
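From inside the guest you can check which drivers actually ended up in use. Rough sketch of mine, just looking at loaded kernel modules on Linux; xen_netfront/xen_blkfront means the PV drivers, ixgbevf means an SR-IOV VF:

    # Check whether the Xen PV drivers or an SR-IOV VF driver are loaded.
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    pv_drivers = {"xen_netfront", "xen_blkfront"}
    sriov_drivers = {"ixgbevf"}

    print("PV drivers loaded:    ", sorted(pv_drivers & loaded) or "none")
    print("SR-IOV drivers loaded:", sorted(sriov_drivers & loaded) or "none")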
PVH is actually PV on top of an HVM container and is a bit different. You can think of it as PV sitting on top of enough HVM bits to take advantage of the hardware extensions Intel and AMD have invested so heavily in while still being majority PV. This gives you the best of both worlds, including the remaining PV performance benefits related to interrupts and timers that PV drivers on HVM can't utilize.
Correct me if I'm wrong, but I don't think this is the case. In fact, I'm running PV on m3.medium instances (EDIT: and c3.large) right now. It depends on the AMI you use to launch the instance: some instance types only support HVM (e.g. t2.* or r3.*) or only PV (old-gen t1.* or m1.*), but some current-gen types, like m3.* and c3.*, support both. See the Amazon Linux AMI availability matrix for example: http://aws.amazon.com/amazon-linux-ami/instance-type-matrix/
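You can also confirm what the matrix says against the API. Quick sketch (mine, not from the matrix page; the AMI name filter is a guess at Amazon's naming convention, adjust as needed):

    # Count how many Amazon Linux AMIs in a region are PV vs. HVM.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": ["amzn-ami-*"]}],
    )["Images"]

    for virt in ("paravirtual", "hvm"):
        count = sum(1 for img in images if img["VirtualizationType"] == virt)
        print(virt, count)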
from: http://xenbits.xen.org/xsa/advisory-108.html
MITIGATION
==========
Running only PV guests will avoid this vulnerability.
Did Amazon reboot all of its VMs, or just the HVM VMs? Why was Netflix running on HVM VMs?