ZeroPoint’s Fast Memory Compression May Reduce AI Energy Use


AI is the latest and most demanding market for high-performance computing, and system architects are working around the clock to squeeze every bit of performance out of every watt. Swedish startup ZeroPoint, fortified with €5 million ($5.5M USD) in new funding, wants to help with a novel, nanosecond-scale memory compression technique, which is every bit as complex as it sounds.

The idea is simple: compress data losslessly before it goes into RAM and decompress it afterward, effectively expanding the memory channel by 50% or more just by integrating a small component into the chip.
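The round-trip idea can be sketched in a few lines. To be clear, this is not ZeroPoint's algorithm (theirs is proprietary, hardware-based, and nanosecond-scale); it is a toy software illustration using Python's `zlib`, with hypothetical `write_to_memory`/`read_from_memory` helpers standing in for the hardware path, just to show the transparent, lossless round trip:

```python
import zlib

def write_to_memory(store: dict, addr: int, data: bytes) -> None:
    # Compress losslessly on the way in (stand-in for the hardware step).
    store[addr] = zlib.compress(data)

def read_from_memory(store: dict, addr: int) -> bytes:
    # Decompress on the way out; the CPU sees the original bytes unchanged.
    return zlib.decompress(store[addr])

store = {}
original = b"status=OK;" * 50  # repetitive data compresses well

write_to_memory(store, 0x1000, original)
assert read_from_memory(store, 0x1000) == original  # lossless round trip
print(len(original), "->", len(store[0x1000]), "bytes actually stored")
```

The catch, as the article goes on to explain, is that software compression like this takes far too long to sit on the memory path; the round trip has to happen in hardware, in a handful of nanoseconds.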

Compression is a fundamental computing technology; as ZeroPoint CEO Klas Moreau noted, “We wouldn’t store data on the hard drive today without compressing it. Research suggests 70% of data in memory is unnecessary. So why don’t we compress in memory?”

The challenge is time. Compressing a large file for storage can take anywhere from seconds to hours. However, data passes through memory in fractions of a second, as fast as the CPU can manage. A microsecond’s delay in compressing data entering the memory system could be devastating to performance.

Memory speeds haven't advanced at the same pace as CPU speeds, though the two are tightly coupled, along with the chip's other components. If the processor is too slow, data backs up in memory; if memory is too slow, the processor wastes cycles waiting for the next chunk of data. The whole system only moves as fast as its slowest link.

Ultra-fast memory compression does exist, but it introduces a second problem: the data must be decompressed just as quickly, or the rest of the system can't use it. And unless you overhaul your entire architecture to accommodate compressed memory, the method is useless.

ZeroPoint claims to have overcome these obstacles with a rapid, low-level memory compression technique that requires no significant changes to the rest of the computing system. Incorporate the company's technology into your chip, it says, and you effectively double your memory.

The intricate details may only make sense to experts, but Moreau simplified the basics for the uninitiated.

“We take a small data segment — a cache line, sometimes 512 bits — and identify patterns within it,” he explained. “Data typically contains inefficiently dispersed information. It depends on the data: The more random it is, the less compressible. But for most data loads, we see a throughput increase of 2-4 times.”
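A much-simplified sketch can show what pattern-based cache-line compression looks like in principle. The scheme below is a toy version of academic frequent-pattern compression, not ZeroPoint's proprietary method: each 32-bit word in a 64-byte (512-bit) line gets a 2-bit tag saying whether it is zero, fits in one byte, or must be stored whole. Real-world data is full of zeros and small values, so the line shrinks:

```python
def compress_line(words):
    """Toy frequent-pattern coder for a 64-byte line of 16 x 32-bit words.
    Each word becomes a 2-bit tag plus 0, 1, or 4 payload bytes:
      tag 0: word == 0           (no payload)
      tag 1: word fits in 1 byte (1-byte payload)
      tag 2: uncompressed        (4-byte payload)
    """
    out = []
    for w in words:
        if w == 0:
            out.append((0, b""))
        elif w < 256:
            out.append((1, w.to_bytes(1, "little")))
        else:
            out.append((2, w.to_bytes(4, "little")))
    return out

def decompress_line(encoded):
    # Reverse the coding exactly; nothing is lost.
    return [0 if tag == 0 else int.from_bytes(p, "little")
            for tag, p in encoded]

# A plausible sparse cache line: mostly zeros and small integers.
line = [0, 0, 7, 0, 1024, 0, 0, 3, 0, 0, 0, 42, 0, 0, 99, 0]
enc = compress_line(line)
assert decompress_line(enc) == line  # lossless

# Payload bytes plus 4 bytes of tags (16 tags x 2 bits).
compressed_bytes = sum(len(p) for _, p in enc) + 4
print(f"64 -> {compressed_bytes} bytes")  # prints: 64 -> 12 bytes
```

As the quote notes, the ratio depends entirely on the data: a line of random bytes would take tag 2 everywhere and compress not at all, while sparse, patterned data shrinks severalfold.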

[Image: This isn't how memory actually looks, but you get the idea. Image Credits: ZeroPoint]

It’s well-known that memory can be compressed. Moreau mentioned that large-scale computing experts are aware of this (he showed a paper from 2012 about it) but dismissed it as impractical for large-scale implementation. ZeroPoint, he added, has tackled the issues of data compaction and transparent integration, allowing the technology to work seamlessly in current systems. And it all happens within a few nanoseconds.

“Most compression technologies, both software and hardware, operate over thousands of nanoseconds. CXL [compute express link, a high-speed interconnect standard] reduces this to hundreds,” Moreau stated. “We reduce it to 3 or 4.”

Here's CTO Angelos Arelakis explaining it in his own words:

[Video: ZeroPoint CTO Angelos Arelakis on the compression technique]

ZeroPoint's entry is timely, as companies worldwide seek faster and cheaper compute for training new AI models. Many hyperscalers are keen on technology that offers more performance per watt or otherwise trims the power bill.

One key caveat is that this technology must be integrated into the chip from the start — no easy add-ons here. Therefore, the company collaborates with chipmakers and system integrators to license the technique and hardware design for standard high-performance computing chips.

Their partners include major players like Nvidia and Intel, as well as companies like Meta, Google, and Apple, which design custom hardware for AI and high-cost tasks. ZeroPoint positions its technology as a cost-saving measure that effectively doubles memory, making it a self-amortizing investment.

The €5 million A round, led by Matterwave Ventures and supported locally by Industrifonden with contributions from Climentum Capital and Chalmers Ventures, has just closed.

Moreau said the funding would enable them to expand into U.S. markets and further strengthen their presence in Sweden.

Devin Coldewey
Devin Coldewey is a Seattle-based writer and photographer. He first wrote for TechCrunch in 2007. Devin covers many topics in technology, science, and space. In the past, he has written for NBC News, DPReview, and others. He has also appeared on radio, television, and in print.
