2024 was a pivotal year for our company: We won numerous awards, established impactful partnerships across industries, and made significant improvements in our products, including CompactifAI and Singularity.
Looking back at this year’s accomplishments, perhaps the biggest news was our oversubscribed Series A round of $27 million and the opening of our first US office. Longtime tech executive Chris Zaharias has joined our team as VP of Sales in the US and will lead our US expansion from San Francisco, CA.
It’s been a great year for us, and we will keep that momentum going into 2025.
Here are the highlights of 2024.
Industry and Competitive Awards
In addition to the vote of confidence from our investors, we received significant industry recognition for our work from noted organizations around the globe.
- 2024 Future Unicorn Award: DIGITALEUROPE named us the startup most likely to achieve a $1 billion valuation, citing our positive impact on society and our commitment to green values and gender diversity.
- LinkedIn’s Top Startups 2024 of Spain: LinkedIn recognized us as one of Spain’s Top Startups for our rapid growth, our ability to attract investment, and our innovative solutions that are redefining sectors.
- The Physics, Innovation and Technology Award: The Spanish Royal Society of Physics honored our co-founder Román Orús for his expertise in quantum technology and recognized Multiverse Computing as one of Europe’s leading quantum software companies.
- The EIC Scaling Club: We were recognized for making a positive impact as one of the highest-potential European deep-tech scale-ups.
- National SME of the Year Award: Presented by His Majesty the King of Spain, this award recognizes small- and medium-sized companies for creating employment opportunities and wealth. We won in the innovation and digitalization category.
- Large AI Grand Challenge: Our team won an AI BOOST award and 800,000 hours of compute time to build and train an LLM from the ground up while using quantum technology. Winning teams have a year to develop a large-scale AI model and train it on one of Europe’s supercomputers.
- Generative AI Accelerator: Amazon Web Services selected us from 4,000 applicants for a spot in the second class of 80 startups. The program gives us access to AWS credits, mentorship, and cloud infrastructure to expand CompactifAI.
AI and Quantum-Inspired Projects
Throughout the year, we continued to develop use cases and collect data to quantify the strength of our solutions. A few of our most impactful projects this year include:
- EPIIC European Defence Fund initiative: Airbus Defence and Space selected us to help develop a gesture recognition system for fighter pilots using quantum technology. This work will change the way pilots interact with aircraft systems and can reduce the need for manual controls.
- Singularity ML and IBM’s Qiskit Functions Catalog: Developers, researchers, and computational and data scientists using Qiskit now have access to Singularity ML. This integration will speed up the development of use-case-specific quantum solutions and integrates with standard ML libraries and workflows (see the sketch after this list).
- Iberdrola and Grid Battery Placement: We worked with the second largest utility in the world to optimize battery placement in an electricity grid using quantum software. Over 10 months, the algorithm was tested on grids of varying sizes to find reliable, cost-efficient battery configurations, and our solution outperformed existing benchmarks.
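For developers, access to Singularity ML goes through IBM’s Qiskit Functions Catalog client. The snippet below is a minimal sketch of that general pattern; the catalog identifier "multiverse/singularity", the run() argument names, and the toy data are assumptions for illustration, so check the Singularity ML catalog entry for the actual interface.

```python
# Minimal sketch: loading a third-party function from IBM's Qiskit Functions Catalog.
# The catalog identifier and the run() argument names below are placeholders,
# not the confirmed Singularity ML interface.
import numpy as np
from qiskit_ibm_catalog import QiskitFunctionsCatalog

# Authenticate with an IBM Quantum API token (or a previously saved account).
catalog = QiskitFunctionsCatalog(token="<IBM_QUANTUM_API_TOKEN>")

# Load the function by its catalog name (placeholder identifier).
singularity = catalog.load("multiverse/singularity")

# Toy data standing in for a real classification workload.
features = np.random.default_rng(0).normal(size=(100, 8))
labels = (features[:, 0] > 0).astype(int)

# Submit a job; argument names are hypothetical examples, not the real schema.
job = singularity.run(train_data=features.tolist(), train_labels=labels.tolist())
print(job.status())
result = job.result()  # blocks until the remote function finishes
```

The appeal of the catalog model is that the quantum-ML workflow runs as a managed remote function, so it can slot into an existing Python pipeline without the user managing backends directly.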
Each year since the founding of our company, we have seen the world of quantum computing grow and transform more industries. Over the last year, we expanded our scope to include AI, and we see a bright future in making large language models (LLMs) more efficient and reducing the associated environmental impact.
In 2025, we will expand use cases for LLMs across even more industries and support AI on the edge and in IoT devices.
We have the data to show that CompactifAI can shrink LLMs, and the compute cost that goes along with those enormous models, without compromising the quality of results. As our Chief Scientific Officer Román Orús said, “Our results imply that standard LLMs are, in fact, heavily overparametrized, and do not need to be large at all.”
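To make the overparametrization point concrete: CompactifAI’s published approach compresses LLM weight matrices with quantum-inspired tensor networks. As a much simpler stand-in for that idea (not the actual CompactifAI method), truncating a single weight matrix to a low-rank factorization already removes most of its parameters:

```python
# Illustrative only: truncated SVD of one weight matrix as a stand-in for the idea
# that large layers carry redundant parameters. CompactifAI itself uses
# quantum-inspired tensor-network decompositions, not this exact procedure.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 1024, 1024, 128          # hypothetical layer size and target rank

W = rng.standard_normal((d_out, d_in))       # stand-in for a trained weight matrix

# Keep only the top-`rank` singular directions: W ~= A @ B.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]                   # shape (d_out, rank)
B = Vt[:rank, :]                             # shape (rank, d_in)

original_params = W.size
compressed_params = A.size + B.size
print(f"parameters: {original_params:,} -> {compressed_params:,} "
      f"({compressed_params / original_params:.1%} of original)")

# The compressed layer computes x @ B.T @ A.T instead of x @ W.T; in practice the
# rank is tuned per layer and the model is typically briefly re-trained to recover accuracy.
```

On a real trained model the interesting question is how aggressively each layer can be truncated before quality drops, which is exactly where the “heavily overparametrized” observation comes from.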