Sam Hames

[1903.03129] SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems

https://arxiv.org/abs/1903.03129

An interesting approach to a deep learning problem. Instead of computing every layer as a dense matrix multiplication (which generally needs a GPU for throughput), use locality-sensitive hashing to pick out a sparse set of active neurons per input, effectively a sparse lookup table, so a conventional CPU can keep up.
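
As far as I can tell from the abstract, the lookup-table idea is: hash each neuron's weight vector into buckets, and for a given input only evaluate the handful of neurons that hash to the same bucket. Here's a rough numpy sketch of that shape, using SimHash (signed random projections) purely for illustration; the paper's actual hashing scheme and bookkeeping (multiple tables, updates during training) are more involved.

```python
import numpy as np

# Minimal sketch of the lookup-table idea, not the paper's implementation:
# hash each neuron's weight vector with signed random projections (SimHash),
# then for a given input only compute dot products for neurons that land in
# the same bucket, instead of a full dense matrix multiply.

rng = np.random.default_rng(0)

d, n_neurons, n_bits = 128, 4096, 10       # input dim, layer width, hash bits
W = rng.standard_normal((n_neurons, d))    # one weight vector per neuron
planes = rng.standard_normal((n_bits, d))  # shared random hyperplanes

def simhash(v):
    """Signed random projections -> integer bucket id."""
    bits = (planes @ v) > 0
    return int((bits * (1 << np.arange(n_bits))).sum())

# Lookup table: bucket id -> indices of neurons hashed to that bucket.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    """Activate only the neurons in the input's bucket."""
    active = table.get(simhash(x), [])
    return active, W[active] @ x           # tiny matmul over the active set

x = rng.standard_normal(d)
active, acts = sparse_forward(x)
print(f"computed {len(active)} of {n_neurons} activations")
```

The point of the sketch is just that the per-input work scales with the bucket size rather than the layer width, which is where the CPU gets its chance.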

I'm not sure I understand the paper well enough to comment on the methodology, but fast inference and training on conventional CPUs would be very exciting: building and running GPU-based stacks is fiddly and time-consuming, whereas the CPU is already there and just works. CPUs are also great for scaling down!
