
Any idea who makes it? It's probably close in architecture to GPUs, the kind of thing Nvidia and AMD are positioned to capitalise on.


They used customizable DSPs from Tensilica. I wonder if this is based on the same technology.


You're right, Microsoft revealed last year in a Hot Chips presentation that HPU v1 is a Cadence Tensilica DSP with custom instructions [1]. Given that, I'd bet that HPU v2's neural net core is Cadence's "Vision C5" [2].

[1] http://www.tomshardware.com/news/microsoft-hololens-hpu-arch...

[2] https://www.cadence.com/content/cadence-www/global/en_US/hom...



Microsoft are using Intel/Altera FPGAs in the datacenter, but they're not likely to use them for mass-produced devices like this.

While it's possible they're using FPGAs for the HoloLens Processing Unit, I seriously doubt it. An FPGA capable of exceeding the performance of a high-end mobile GPU (OpenCL/CUDA for deep learning/AI) would draw too much power for a mobile device.

Another factor is that higher-end FPGAs are expensive, starting at $200+ wholesale and typically costing thousands each, which makes them unsuited to a mass-produced consumer device.

As for who made this HPU, I'd say AMD helped design it and GlobalFoundries (or maybe TSMC) are doing the production.


I have a friend at Intel who works in the AR division, and their comment to me was that, in one way or another, all AR/MR glasses are running Intel silicon. I asked them specifically whether that includes HoloLens, Apple's project, and a couple of others, and they answered in the affirmative. YMMV though.


It's possible it could be Intel silicon, just not an Altera FPGA.



Maybe:

http://www.general-vision.com/ ???

No idea, actually - but they are one of a handful of companies selling such a part.


I was thinking it might be closer in spirit to AMD's APUs, which combine graphics and compute on the same chip.


I'm guessing it's close to the TPU design, i.e. a systolic matrix multiplier.

Other possible candidates include INT8 DSPs.
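For anyone unfamiliar with the term: in a systolic array, operands flow between neighboring processing elements each cycle, so each value is read from memory once and reused across the array. A minimal sketch of the timing schedule of an output-stationary array (plain Python stand-in for the hardware, not anything from the HPU or TPU specifically):

```python
# Output-stationary systolic array multiplying two n x n matrices.
# PE (i, j) accumulates C[i][j]; rows of A flow right, columns of B
# flow down, each skewed by one cycle per row/column, so at cycle t
# the pair (A[i][k], B[k][j]) with k = t - i - j meets at PE (i, j).

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for t in range(3 * n - 2):          # total cycles until drained
        for i in range(n):
            for j in range(n):
                k = t - i - j           # which operand pair arrives now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The point of the skewed schedule is that every multiply-accumulate happens between neighbors with no global register-file traffic, which is why the design is so power-efficient for inference.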


I suspect it will have more in common with mobile GPUs than the big iron ASICs that come from NVIDIA or AMD.


IMHO it's most likely a variant of an AMD GPU or APU, because it would take some major differentiation before Nvidia would do a largely custom core processor, even for Microsoft.

Nvidia's Tegra Xavier SoC is under development with the Volta GPU architecture, which includes Tensor Cores specifically intended for deep learning and AI applications.
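The Tensor Core primitive Nvidia has described for Volta is a small matrix multiply-accumulate, D = A×B + C on 4×4 tiles (FP16 inputs, FP32 accumulate), performed per clock per core. A plain Python stand-in for that operation, just to make the primitive concrete:

```python
# Sketch of the Volta Tensor Core primitive: a 4x4 matrix
# multiply-accumulate, D = A @ B + C. The hardware does this on
# FP16 inputs with FP32 accumulation; this sketch ignores precision
# and just shows the dataflow shape.

def tensor_core_mma(A, B, C):
    """Return A @ B + C for 4x4 matrices."""
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(4))
             for j in range(4)]
            for i in range(4)]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
Z = [[0] * 4 for _ in range(4)]
print(tensor_core_mma(I, I, Z))  # identity @ identity + 0 = identity
```

Larger matrix multiplies are tiled into many of these 4×4 operations, which is how the design gets its throughput for deep learning workloads.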


That's doubtful. AMD is happy to provide bespoke chips for big customers (like the Xbox), but what you want for inference is to multiply entire int8 matrices in hardware dataflow. The silicon providing float32 support in an AMD GPU isn't needed, and their dataflows only extend to vectors rather than matrices. When the matrix you're multiplying is much bigger than the execution hardware you can throw at it, this doesn't make a big difference in how efficiently the adders are used. But using vectors rather than matrices is hugely wasteful of register read ports if you're always doing matrix operations and don't also need to excel at operations that are only vectors.
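To put rough numbers on the register-read argument (my own back-of-the-envelope, not anyone's published figures): an n×n matmul needs n³ multiplies, and a vector unit reads two operands from the register file per multiply, ignoring whatever reuse the compiler can arrange. A matrix unit reads each of the 2n² input elements once and reuses it n times inside the array:

```python
# Rough operand-read counts for an n x n matrix multiply, comparing
# a vector-style dataflow (2 register reads per multiply-accumulate,
# no reuse assumed) against a matrix/systolic dataflow (each input
# element read once, then forwarded between neighboring PEs).

def operand_reads(n):
    vector_reads = 2 * n ** 3   # every MAC pulls both operands
    matrix_reads = 2 * n ** 2   # each element of A and B read once
    return vector_reads, matrix_reads

v, m = operand_reads(128)
print(f"vector: {v:,} reads, matrix: {m:,} reads, ratio {v // m}x")
```

The ratio grows linearly with n, which is why a dedicated matrix engine wins so decisively on register-file bandwidth (and hence power) when the workload is nothing but matrix operations.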



