http://www.ddj.com/hpc-high-performance-computing/201803067
DDJ: What types of applications will benefit from SSE5 extensions?
LV: We see three markets where SSE5 will deliver the most immediate impact: High Performance Computing (HPC), multimedia applications, and security.
HPC workloads are growing and showing up in non-traditional HPC domains. Examples are seismic data processing, financial analysis such as stock-trend forecasting, and the protein-folding algorithms used in drug development. These algorithms require fast floating-point matrix and vector processing, which SSE5 delivers. A floating-point matrix multiply using the new SSE5 extensions is 30 percent faster than a similar algorithm implemented with the existing SSE instructions.
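The matrix and vector kernels mentioned here are dominated by multiply-accumulate pairs, which SSE5's fused multiply-add instructions collapse into a single operation. As a rough illustration of the pattern (plain scalar Python, not SSE5 code), each output element of a matrix-vector product is one such multiply-add chain:

```python
def matvec(A, x):
    """Naive matrix-vector product. Each loop iteration is a
    multiply followed by an add -- the pair a fused multiply-add
    (FMA) instruction executes in one step, which is where the
    claimed SSE5 matrix-multiply speedup comes from."""
    y = [0.0] * len(A)
    for i, row in enumerate(A):
        acc = 0.0
        for a, b in zip(row, x):
            acc = acc + a * b  # one FMA per term on FMA-capable hardware
        y[i] = acc
    return y
```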
Multimedia is an increasingly important part of the computing experience. Media processing and encryption (DRM) have become a major part of the PC workload, and new algorithms and formats have been developed, including MPEG-4 and H.264. SSE5 enables enhanced geometry transforms and physics modeling for scientific simulation and gaming, supports HD video encoding and decoding, and enables image enhancement and MP3 recording and manipulation. For example, the Discrete Cosine Transform (DCT), a basic building block for encoders, gets a 20 percent performance improvement by using the new SSE5 extensions.
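Each DCT output coefficient is a dot product of the input samples with a cosine basis row, exactly the multiply-accumulate pattern SIMD and FMA hardware accelerates. A minimal scalar reference (not SSE5 code) for the 1-D DCT-II that block-based encoders build on:

```python
import math

def dct_ii(x):
    """Naive unscaled 1-D DCT-II: X[m] = sum_k x[k] * cos(pi/N * (k+0.5) * m).
    Each output is an N-term multiply-accumulate chain; real encoders use
    fast factorizations and SIMD, but the arithmetic pattern is the same."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi / n * (k + 0.5) * m) for k in range(n))
            for m in range(n)]
```

For a constant input only the DC coefficient is nonzero, a quick way to sanity-check the transform.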
Security remains a top concern for the entire industry. SSE5 enables encryption algorithms to run more quickly, increasing the usability of security features in the platform. For example, the Advanced Encryption Standard (AES) algorithm gets a factor-of-5 performance improvement by using the new SSE5 extensions compared to an AES implementation that uses only the base AMD64 instructions.
5x speedup for AES using SSE5?
The best figure I obtain on an AMD64 system is 11 cycles/byte, which matches your results (you had me worried for a while with 9 cycles/byte!). To go 5 times faster than this would mean close to 2 cycles/byte, a speed that I find hard to believe without hardware acceleration.

But a fully byte-oriented implementation runs at about 140 cycles/byte, and here the S-box substitution step is a significant bottleneck. I too think the PPERM instruction could be used for this, and it seems possible that this would produce large savings. So 30 cycles/byte might well be achievable in this case. I hence wonder whether this is the comparison that AMD are making.

It is also possible that the PPERM instruction could be used to speed up the Galois field calculations to produce the S-box mathematically rather than by table lookup. I have tried this in the past but it has not proved competitive. But PPERM looks interesting here as well.

Brian Gladman
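For reference, the "mathematical" S-box Brian describes combines a multiplicative inverse in GF(2^8) with a fixed affine transform, as defined in FIPS-197. A scalar Python sketch of that construction (a permute instruction like PPERM would instead apply the lookup or the field math across all 16 state bytes at once):

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1 (0x11b)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 (Fermat); AES defines inv(0) = 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def sbox(x):
    """AES S-box: GF(2^8) inverse followed by the FIPS-197 affine transform."""
    b = gf_inv(x)
    r = 0x63  # affine constant
    for i in range(8):
        bit = 0
        for j in (0, 4, 5, 6, 7):  # b_i ^ b_(i+4) ^ b_(i+5) ^ b_(i+6) ^ b_(i+7)
            bit ^= (b >> ((i + j) % 8)) & 1
        r ^= bit << i
    return r
```

Computed this way there is no data-dependent table lookup, which is also why a SIMD version of this math sidesteps the cache-timing attacks mentioned in the next comment.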
I've only just seen this, but I've been playing with VIA's AES hardware and looking at Intel's AES instructions. I believe the PPERM instruction will be rather important. Combined with the packed byte rotate and shift, some rather interesting SIMD byte fiddles should be possible. From my initial look, it should be possible to implement AES without tables, doing SIMD operations on all 16 bytes at once.

I've not looked at it enough yet, but currently I'm doing an AES block in about 140 cycles (call it 13 per round plus overhead) on an AMD64 (220e6 bytes/sec on a 2 GHz CPU) using normal instructions. I don't believe they will be taking 30 instructions, so they probably have 4-8 SSE instructions per round; it then comes down to how many SSE execution units there are to execute in parallel.

As for VIA: on a 1 GHz C7 part, CBC mode, 128-bit key, for 16-byte-aligned data I'm getting about 24 cycles per block; for unaligned, about 67 cycles. The chip does ECB mode at 12.6 cycles a block if aligned (two blocks at a time). It does not handle unaligned ECB, so with manual alignment, 75 cycles. Not bad for a single-issue CPU, considering the x86 instruction version of AES I have takes 1010 cycles per block.

For the Intel AES instructions, from my reading, they will be able to do a single AES (128-bit) block in a bit more than 60 cycles (10 rounds, 6-cycle latency for the instructions). The good part is that they will pipeline, so if you do, say, 6 AES ECB blocks at once, you can get a throughput of about 12 cycles a block (Intel's figures). This is obviously of relevance for counter mode, CBC decrypt, and more recent standards like XTS and GCM mode.

Part of Intel's justification for the AES instructions seems to be stopping cache timing attacks. If the SSE5 instructions allow AES to be done with SIMD instead of tables, they will achieve the same effect, but without as much parallel upside. It also looks like the GF(2^8) maths will benefit.

eric (who has only been able to play with VIA hardware :-()
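The cycle figures quoted in these comments are easy to sanity-check with back-of-the-envelope arithmetic (my arithmetic, not vendor data):

```python
BLOCK_BYTES = 16  # AES block size

def bytes_per_sec(cycles_per_block, clock_hz):
    """Throughput for a given per-block cycle cost at a given clock rate."""
    return clock_hz / cycles_per_block * BLOCK_BYTES

# Software AES at ~140 cycles/block on a 2 GHz AMD64:
# ~229e6 bytes/sec, consistent with the quoted 220e6 bytes/sec.
soft = bytes_per_sec(140, 2e9)

# Serial AES-NI estimate: 10 rounds x 6-cycle instruction latency
# for one dependent block -- "a bit more than 60 cycles" with overhead.
serial_cycles = 10 * 6

# Pipelined: 6 independent ECB blocks in flight hide the latency,
# approaching the quoted ~12 cycles/block throughput.
pipelined_cycles = serial_cycles / 6
```

This latency-versus-throughput gap is why the comment singles out counter mode, CBC decrypt, XTS, and GCM: those modes expose independent blocks that can be kept in flight simultaneously.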