Multiverse Computing raises €189M to shrink AI models by 95%

Spanish AI firm Multiverse Computing has secured a €189 million ($215 million) Series B funding round led by Bullhound Capital to scale its groundbreaking CompactifAI technology, which can reduce the size of large language models by up to 95% while maintaining performance and cutting inference costs by 50-80%.

Curated by cdteliot · 3 min read
CompactifAI Technology Explained

CompactifAI leverages quantum-inspired Tensor Networks to compress LLMs in a way that goes beyond traditional methods like quantization and pruning. Rather than simply removing neurons, the technology compresses the "correlation space" within models by decomposing weight matrices into Matrix Product Operators (MPOs)[1]. This approach allows for more controlled compression while maintaining model integrity, resulting in up to 95% size reduction with only 2-3% precision loss[2][3].
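The core idea of an MPO decomposition can be sketched with a single truncated SVD: a weight matrix is reshaped into a higher-order tensor, its indices are grouped per site, and the bond between sites is truncated. CompactifAI's actual pipeline is proprietary, so the shapes, bond dimension, and the `mpo_decompose` helper below are purely illustrative assumptions.

```python
import numpy as np

def mpo_decompose(W, d_in=(8, 8), d_out=(8, 8), bond_dim=4):
    """Illustrative 2-site MPO factorization of a weight matrix via
    one truncated SVD. Not CompactifAI's actual algorithm."""
    # Reshape the (in, out) matrix into a 4-index tensor, then group
    # the input/output indices that belong to each MPO site.
    T = W.reshape(d_in[0], d_in[1], d_out[0], d_out[1])
    T = T.transpose(0, 2, 1, 3)                       # (i1, o1, i2, o2)
    M = T.reshape(d_in[0] * d_out[0], d_in[1] * d_out[1])
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    k = min(bond_dim, len(s))                         # truncate the bond dimension
    A = U[:, :k] * s[:k]                              # site 1: (i1*o1, k)
    B = Vh[:k, :]                                     # site 2: (k, i2*o2)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A, B = mpo_decompose(W)
print(W.size, A.size + B.size)   # 4096 vs 512: 87.5% fewer parameters
```

The compression ratio is set by the bond dimension: a small bond keeps only the strongest correlations between the two index groups, which is what makes the truncation controlled rather than ad hoc.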

The technology works through a multi-step process: layer sensitivity profiling first identifies which layers can be compressed more aggressively, then tensorization replaces trainable weights with MPOs[1]. This not only makes models 4-12x faster but also reduces energy consumption and enables deployment across diverse hardware environments, from cloud infrastructure to edge devices like phones, PCs, and even Raspberry Pi[4][5]. Compressed versions of leading open-source LLMs, including Llama, DeepSeek, and Mistral, are available through AWS Marketplace, with pricing based on input and output tokens[6][7].
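Layer sensitivity profiling can be pictured on a toy network: compress one layer at a time and measure how far the output drifts from the uncompressed baseline. The `profile_sensitivity` helper and the tiny tanh MLP here are hypothetical stand-ins, not Multiverse's method.

```python
import numpy as np

def low_rank(W, rank):
    """Truncated-SVD approximation of one weight matrix."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vh[:rank, :]

def profile_sensitivity(layers, x, rank=2):
    """Hypothetical sensitivity profile: compress each layer alone and
    score how much the network output moves. Layers with small scores
    tolerate more aggressive compression."""
    def forward(ws):
        h = x
        for W in ws:
            h = np.tanh(h @ W)
        return h
    baseline = forward(layers)
    scores = []
    for i, W in enumerate(layers):
        trial = list(layers)
        trial[i] = low_rank(W, rank)          # compress only layer i
        scores.append(float(np.linalg.norm(forward(trial) - baseline)))
    return scores

rng = np.random.default_rng(1)
layers = [rng.standard_normal((16, 16)) * 0.3 for _ in range(3)]
x = rng.standard_normal((4, 16))
print(profile_sensitivity(layers, x))
```

In a real pipeline the scores would be computed against task metrics on calibration data, and the per-layer compression budget would be allocated inversely to sensitivity.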

95% Compression With 2-3% Precision Loss

While traditional lossless compression methods like ZipNN can reduce model sizes by 33-50% without any accuracy loss[1], Multiverse's CompactifAI technology pushes the boundaries with its 95% compression rate while maintaining remarkable performance. This minimal 2-3% precision loss[2] represents an exceptional trade-off that makes AI deployment significantly more accessible and cost-effective. For context, in machine learning, precision refers to the percentage of model predictions that are correct[3], so this minimal degradation ensures the compressed models remain highly reliable for real-world applications.
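The arithmetic of this trade-off is worth making concrete. The model size and prediction data below are invented for illustration; only the 95% and 2-3% figures come from the article.

```python
# Toy illustration of the trade-off: a 95% size reduction paired with
# a small drop in the fraction of correct predictions. All concrete
# numbers here are made up for demonstration.
original_size_gb = 140.0                        # hypothetical large LLM
compressed_size_gb = original_size_gb * (1 - 0.95)

def precision(preds, labels):
    """Fraction of predictions that are correct (as defined above)."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

labels     = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
full_model = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]     # all 10 correct
compressed = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]     # one answer flipped

drop = precision(full_model, labels) - precision(compressed, labels)
print(compressed_size_gb)       # 140 GB shrinks to 7 GB
print(round(drop, 2))           # 10-point drop here; CompactifAI reports 2-3
```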

The implications of this compression-to-performance ratio are substantial for enterprise adoption. While corporations typically expect 99%+ accuracy from human employees[4], CompactifAI's approach demonstrates that slightly reduced precision can deliver massive efficiency gains without compromising essential functionality. This balance is achieved through the quantum-inspired tensor network approach that specifically targets the model's correlation space rather than simply reducing parameters through conventional techniques like pruning or quantization[5]. The result is a breakthrough that addresses the fundamental challenge of deploying large AI models in resource-constrained environments while maintaining their core capabilities.

Quantum-Inspired AI Compression

Multiverse Computing's breakthrough in AI compression stems from quantum principles applied to classical computing problems. Their quantum-inspired approach leverages tensor networks, mathematical structures originally developed for quantum physics, to identify and preserve essential correlations within AI models while eliminating redundancies[1][2]. Unlike traditional compression methods that simply reduce parameters or lower numerical precision, this technique reconstructs the model's internal architecture to maintain performance with significantly fewer resources[3].

The quantum-inspired methodology has applications beyond LLMs, showing promising results in computer vision as well. Frameworks like QIANets demonstrate how quantum-inspired pruning, tensor decomposition, and annealing-based matrix factorization can reduce CNN inference times by 50-70% while maintaining comparable accuracy to the original models[3][4]. This versatility makes quantum-inspired compression particularly valuable across the AI ecosystem, enabling deployment in resource-constrained environments from edge devices to industrial settings where computational efficiency is critical[5].
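The inference speed-ups quoted for such frameworks come largely from the tensor-decomposition step: replacing one dense layer with two thin factors cuts both parameters and multiply-accumulate (MAC) work. The sketch below shows only that step, with invented layer sizes; QIANets additionally combines it with pruning and annealing-based factorization.

```python
import numpy as np

def factorize_layer(W, rank):
    """Replace a dense weight matrix with two thin SVD factors.
    A simplified stand-in for the tensor-decomposition step."""
    U, s, Vh = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vh[:rank, :]

m, n, rank = 256, 256, 32                  # hypothetical layer sizes
W = np.random.default_rng(2).standard_normal((m, n))
A, B = factorize_layer(W, rank)            # y = (x @ A) @ B instead of x @ W

dense_macs    = m * n                      # one multiply-add per weight
factored_macs = rank * (m + n)             # x @ A, then the result @ B
print(factored_macs / dense_macs)          # 0.25 -> 75% fewer MACs
```

The rank controls the speed/accuracy dial: the lower the rank, the fewer MACs, but the more of the layer's correlation structure is discarded.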
