Samsung Electronics unveils new memory technology for large-scale AI

A GPU accelerator loaded with an HBM-PIM chip



Samsung Electronics has successfully demonstrated the performance of a processing-in-memory (PIM) chip by installing it on an AMD graphics processing unit (GPU) accelerator board.

PIM refers to the integration of processing logic with random access memory (RAM) on a single chip, so that computation can take place where the data resides. The technology is expected to help improve the performance of large-scale artificial intelligence (AI) workloads.
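The basic idea can be illustrated with a toy sketch, not Samsung's actual design: when a simple reduction is computed inside the memory bank itself, only the result has to cross the memory bus to the host, instead of the entire array. The class names and sizes below are purely illustrative assumptions.

```python
# Toy illustration of the processing-in-memory (PIM) idea (hypothetical,
# not Samsung's implementation): moving simple compute next to the data
# means far less of it has to travel across the memory bus.

import numpy as np


class ConventionalMemory:
    """All raw data is shipped to the host, which does the arithmetic."""

    def __init__(self, data):
        self.data = data

    def read_all(self):
        return self.data.copy()   # the full array crosses the bus


class PIMMemory:
    """A small compute unit inside the memory bank reduces data locally."""

    def __init__(self, data):
        self.data = data

    def local_sum(self):
        return self.data.sum()    # only a single scalar crosses the bus


data = np.arange(1_000_000, dtype=np.float32)   # ~4 MB of data

# Conventional path: transfer ~4 MB to the host, then sum there.
host_total = ConventionalMemory(data).read_all().sum()

# PIM path: the memory bank returns just the result, cutting bus traffic.
pim_total = PIMMemory(data).local_sum()

assert np.isclose(host_total, pim_total)
```

Reducing this data movement is where the claimed performance and power gains come from, since shuttling data between memory and processor dominates the energy cost of many memory-bound AI workloads.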

Samsung announced on October 20 that it had successfully run AMD's commercial MI-100 GPU accelerator card loaded with HBM-PIM chips. HBM stands for high bandwidth memory.

According to Samsung Electronics, the HBM-PIM chips roughly doubled the performance of AMD's GPU accelerator card and cut its power consumption by about 50% on average compared with GPU accelerators without HBM-PIM. Over a year of operation, GPU accelerators equipped with HBM-PIM consume approximately 2,100 GWh less power than accelerators equipped with conventional HBM alone, the company said.

“HBM-PIM technology is the industry’s first custom memory solution for the massive AI industry,” said Park Chul-min, general manager of Samsung Electronics. “We will integrate HBM-PIM technology into CXL-PNM solutions through an integrated software standardization process.”
