More on our Medium
Here we introduce an improved approach to Variational Quantum Attack Algorithms (VQAA) on cryptographic protocols. Our methods provide robust quantum attacks on well-known cryptographic algorithms, more efficiently and with remarkably fewer qubits than previous approaches. We implement simulations of our attacks for symmetric-key protocols such as S-DES, S-AES and Blowfish. For instance, we show how our attack allows a classical simulation of a small 8-qubit quantum computer to find the secret key of a 32-bit Blowfish instance with 24 times fewer iterations than a brute-force attack. Our work also shows improvements in attack success rates for lightweight ciphers such as S-DES and S-AES. Further applications beyond symmetric-key cryptography are also discussed, including asymmetric-key protocols and hash functions. In addition, we comment on potential future improvements of our methods. Our results bring us one step closer to assessing the vulnerability of large-size classical cryptographic protocols with Noisy Intermediate-Scale Quantum (NISQ) devices, and set the stage for future research in quantum cybersecurity.
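To make the idea concrete, below is a minimal, purely classical sketch of the cost function behind a variational key-recovery attack: each key bit is carried by one rotation angle of a product ansatz, candidate keys are sampled from the resulting bit probabilities, and a classical optimizer drives the expected Hamming distance to a known ciphertext towards zero. The XOR "cipher", the 8-bit key size and the COBYLA settings are illustrative assumptions, not the S-DES/S-AES/Blowfish attacks of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of a variational key-recovery cost function.
# Each key bit i is carried by one angle theta_i; P(bit_i = 1) = sin^2(theta_i / 2).
# The "cipher" is a plain XOR stand-in (assumption), NOT S-DES/S-AES/Blowfish.
N = 8                                        # key length in bits (illustrative)
rng = np.random.default_rng(1)
secret_key = rng.integers(0, 2, N)
plaintext = rng.integers(0, 2, N)

def encrypt(key, pt):
    return np.bitwise_xor(key, pt)           # toy cipher (assumption)

ciphertext = encrypt(secret_key, plaintext)

def cost(angles, shots=200):
    """Expected Hamming distance between the target ciphertext and the
    ciphertexts produced by candidate keys sampled from the ansatz."""
    p1 = np.sin(angles / 2) ** 2
    keys = (rng.random((shots, N)) < p1).astype(int)
    return np.abs(encrypt(keys, plaintext) - ciphertext).sum(axis=1).mean()

res = minimize(cost, x0=np.full(N, np.pi / 2), method="COBYLA",
               options={"maxiter": 300})
recovered = (np.sin(res.x / 2) ** 2 > 0.5).astype(int)
print("recovered:", recovered, " secret:", secret_key)
```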
We efficiently simulate IBM's largest quantum processors, Eagle, Osprey, and Condor, using graph-based Projected Entangled Pair States, achieving unprecedented accuracy with simple tensor updates.
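As a rough illustration of what a "simple update" does, the numpy sketch below applies one two-qubit gate across the bond shared by two PEPS-like site tensors and truncates the result with an SVD. It ignores the bond weights on the neighbouring links that a full graph-based PEPS simple update would also track, and the bond dimension and CZ gate are arbitrary choices for the example.

```python
import numpy as np

# One local "simple update" step on the bond shared by two site tensors.
# A[p, l, b] and B[p, b, r]: p = physical leg (dim 2), b = shared bond,
# l / r = remaining virtual legs. Neighbouring bond weights are omitted here.
chi = 4                                           # bond-dimension cap (illustrative)
A = np.random.rand(2, chi, chi)
B = np.random.rand(2, chi, chi)

CZ = np.diag([1, 1, 1, -1]).reshape(2, 2, 2, 2)   # two-qubit gate (P, Q, p, q)

theta = np.einsum('plb,qbr->plqr', A, B)          # contract over the shared bond
theta = np.einsum('PQpq,plqr->PlQr', CZ, theta)   # apply the gate to both physical legs
m = theta.reshape(2 * chi, 2 * chi)               # group (P, l) x (Q, r)
u, s, vh = np.linalg.svd(m, full_matrices=False)
keep = min(chi, len(s))                           # truncate back to chi
A = (u[:, :keep] * np.sqrt(s[:keep])).reshape(2, chi, keep)
B = (np.sqrt(s[:keep])[:, None] * vh[:keep]).reshape(keep, 2, chi).transpose(1, 0, 2)
```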
Quantum error correction through surface codes, critical for reliable quantum computing, demands efficient decoding algorithms balancing speed, complexity, and accuracy.
Paper by Gianni del Bimbo, Daniel García Guijo and Esperanza Cuenca Gómez.
Case Study by Gianni del Bimbo, Rodrigo Hernández Cifuentes, Esperanza Cuenca Gómez, Daniel García Guijo and Angus Dunnett.
The Cheyette model is a quasi-Gaussian volatility interest rate model widely used to price interest rate derivatives such as European and Bermudan Swaptions, for which Monte Carlo simulation has become the industry standard.
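For readers unfamiliar with the model, a minimal Euler Monte Carlo scheme for the one-factor quasi-Gaussian (Cheyette) state variables, dx = (y - kappa*x) dt + sigma dW and dy = (sigma^2 - 2*kappa*y) dt, might look like the sketch below; the constant kappa and sigma, the flat initial forward curve and the path counts are assumed values chosen only for illustration.

```python
import numpy as np

# Euler Monte Carlo for the one-factor quasi-Gaussian (Cheyette) state variables:
#   dx = (y - kappa * x) dt + sigma dW,   dy = (sigma^2 - 2 * kappa * y) dt
# kappa, sigma and the flat forward curve below are assumed example values.
kappa, sigma = 0.03, 0.01
T, n_steps, n_paths = 5.0, 500, 100_000
dt = T / n_steps

rng = np.random.default_rng(0)
x = np.zeros(n_paths)
y = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x, y = x + (y - kappa * x) * dt + sigma * dW, y + (sigma**2 - 2 * kappa * y) * dt

# Short rate at T: r_T = f(0, T) + x_T, with f(0, T) the initial forward curve.
f0T = 0.02                                   # flat initial curve (assumption)
r_T = f0T + x
print(r_T.mean(), r_T.std())
```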
How quantum-inspired algorithms solve the most complex PDE and machine learning problems to achieve real business advantage now.
Machine learning algorithms, both in their classical and quantum versions, rely heavily on gradient-based optimization methods such as gradient descent and the like.
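For reference, the update rule these methods build on is the plain gradient-descent step w <- w - lr * grad f(w); a minimal numpy example on a least-squares loss:

```python
import numpy as np

# Plain gradient descent on a least-squares loss f(w) = ||Xw - y||^2 / (2n).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
w, lr = np.zeros(3), 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the loss at the current w
    w -= lr * grad                      # step along the negative gradient
print(w)
```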
A Practical Approach by Esperanza Cuenca Gómez and Pablo Martín Ramiro
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches has made it possible to solve high-dimensional PDEs, opening the door to a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory.
Machine learning models capable of handling the large datasets collected in the financial world can often become black boxes that are expensive to run. The quantum computing paradigm suggests new optimization techniques that, combined with classical algorithms, may deliver competitive, faster and more interpretable models.
Deep neural networks (NN) suffer from scaling issues when considering a large number of neurons, which in turn also limits the accessible number of layers. To overcome this, here we propose the integration of tensor networks (TN) into NNs, in combination with variational DMRG-like optimization. This results in a scalable tensor neural network (TNN) architecture that can be efficiently trained for a large number of neurons and layers.
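A hedged sketch of the underlying idea: the dense weight matrix of a layer is replaced by the contraction of small tensor-train (MPO) cores, so the parameter count scales with the bond dimension rather than with the full matrix size. The 64-unit layer, the 8x8 factorisation and the bond dimension below are arbitrary example choices, and training (e.g. the DMRG-like sweeps of the paper) is not shown.

```python
import numpy as np

# A dense 64x64 weight is replaced by two tensor-train (MPO) cores:
# W[(a,c),(i,j)] = sum_k G1[a,i,k] * G2[k,c,j], with bond dimension r.
n1, n2, m1, m2, r = 8, 8, 8, 8, 4            # 64 = 8*8 in, 64 = 8*8 out (example)
G1 = 0.1 * np.random.randn(m1, n1, r)
G2 = 0.1 * np.random.randn(r, m2, n2)
# parameters: 2 * 8*8*4 = 512 instead of 64*64 = 4096 for the dense matrix

def tnn_layer(x):
    """Apply the factorised weight to a batch of inputs of shape (batch, 64)."""
    xb = x.reshape(-1, n1, n2)
    t = np.einsum('bij,aik->bajk', xb, G1)   # (batch, m1, n2, r)
    y = np.einsum('bajk,kcj->bac', t, G2)    # (batch, m1, m2)
    return y.reshape(-1, m1 * m2)

x = np.random.randn(32, n1 * n2)
print(tnn_layer(x).shape)                    # (32, 64)
```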
Naively trained AI models can be heavily biased. This can be particularly problematic when the biases involve legally or morally protected attributes such as ethnic background, age or gender. Existing solutions to this problem come at the cost of extra computation or unstable adversarial optimisation, or impose losses on the feature-space structure that are disconnected from fairness measures and generalise only loosely to fairness.
Current universal quantum computers have a limited number of noisy qubits. Because of this, it is difficult to use them to solve large-scale complex optimization problems. In this paper we tackle this issue by proposing a quantum optimization scheme where discrete classical variables are encoded in non-orthogonal states of the quantum system.
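As a toy illustration of the encoding idea (not the paper's full scheme), a discrete variable with k values can be carried by k non-orthogonal single-qubit states spread over a circle of the Bloch sphere and recovered by maximum fidelity; k = 5 below is an arbitrary choice.

```python
import numpy as np

# A k-valued discrete variable carried by k non-orthogonal single-qubit states
# |psi_j> = cos(t_j/2)|0> + sin(t_j/2)|1>, with t_j spread over a Bloch circle.
k = 5                                          # number of values (example)
thetas = 2 * np.pi * np.arange(k) / k
states = np.stack([[np.cos(t / 2), np.sin(t / 2)] for t in thetas])

def decode(psi):
    """Assign a state to the closest codeword by fidelity |<psi_j|psi>|^2."""
    return int(np.argmax(np.abs(states @ psi) ** 2))

print(decode(states[3]))                       # encodes value 3, decodes to 3
```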
Here we show how universal quantum computers based on the quantum circuit model can handle mathematical analysis calculations for functions with continuous domains, without any digitalization, and with remarkably few qubits. The basic building block of our approach is a variational quantum circuit where each qubit encodes up to three continuous variables (two angles and one radius in the Bloch sphere).
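A small numpy sketch of this kind of encoding, under the assumption that the three values are written into the Bloch vector of a (generally mixed) single-qubit state and read back from Pauli expectation values; the variational circuit itself is not shown.

```python
import numpy as np

# One qubit carries up to three continuous values (theta, phi, r) in its
# Bloch vector; encode them in a density matrix and read them back from
# Pauli expectation values <sigma_i> = Tr(rho sigma_i).
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def encode(theta, phi, r):
    """Map two angles and one radius (0 <= r <= 1) to a single-qubit state."""
    bloch = r * np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    return 0.5 * (I2 + bloch[0] * sx + bloch[1] * sy + bloch[2] * sz)

def decode(rho):
    """Recover the Bloch vector from the three Pauli expectation values."""
    return np.real([np.trace(rho @ p) for p in (sx, sy, sz)])

rho = encode(theta=1.1, phi=0.4, r=0.8)
print(decode(rho))   # approx r * (sin t cos p, sin t sin p, cos t)
```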
In this work, we demonstrate how to apply non-linear cardinality constraints, important for real-world asset management, to quantum portfolio optimization.
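One common way to impose such a constraint, shown here only as a toy sketch rather than the paper's formulation, is to add a quadratic penalty lam * (sum_i x_i - K)^2 to a QUBO objective over binary holding variables x_i; the returns, covariance and penalty strength below are made-up example values.

```python
import numpy as np

# Cardinality via a quadratic penalty: minimise x^T Q x over x in {0,1}^N with
#   Q = Cov - diag(mu) + lam * (J - 2*K*I),   J = all-ones matrix,
# which adds lam * (sum_i x_i - K)^2 up to a constant. All numbers are made up.
N, K, lam = 6, 3, 10.0
rng = np.random.default_rng(0)
mu = rng.uniform(0.01, 0.1, N)                 # expected returns (assumed)
cov = np.diag(rng.uniform(0.01, 0.05, N))      # toy diagonal covariance
Q = cov - np.diag(mu) + lam * (np.ones((N, N)) - 2 * K * np.eye(N))

def bits(b):
    return np.array([(b >> i) & 1 for i in range(N)])

best = min(range(2 ** N), key=lambda b: bits(b) @ Q @ bits(b))   # brute force (toy size)
x_best = bits(best)
print(x_best, int(x_best.sum()))               # exactly K assets selected
```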
In this work, we develop a new quantum algorithm to solve a combinatorial problem with significant practical relevance occurring in clutch manufacturing. We demonstrate how quantum optimization can play a role in real industrial applications in the manufacturing sector.
In this paper we consider several algorithms for quantum computer vision using Noisy Intermediate-Scale Quantum (NISQ) devices, and benchmark them for a real problem against their classical counterparts.
Partial Differential Equations (PDEs) are used to model a variety of dynamical systems in science and engineering. Recent advances in deep learning have enabled us to solve them in higher dimensions by addressing the curse of dimensionality in new ways. However, deep learning methods are constrained by training time and memory.
Here we present a quantum algorithm for clustering data based on a variational quantum circuit. The algorithm makes it possible to classify data into many clusters, and can easily be implemented on few-qubit Noisy Intermediate-Scale Quantum (NISQ) devices.
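Purely as a toy of the general idea (not the paper's circuit), the sketch below encodes 1-D data points as Bloch-sphere angles, keeps one trainable "centroid" state per cluster, assigns points by state fidelity and refines the centroids classically; the data, the two clusters and the initial angles are illustrative assumptions.

```python
import numpy as np

# Toy "variational" clustering on one qubit: points -> Bloch angles, one
# trainable centroid state per cluster, assignment by fidelity, classical updates.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.5, 0.2, 50), rng.normal(2.5, 0.2, 50)])
angles = np.clip(data, 0, np.pi)               # feature -> rotation angle

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

centroids = np.array([0.2, 3.0])               # initial centroid angles (assumed)
for _ in range(10):
    fid = np.array([[abs(state(a) @ state(c)) ** 2 for c in centroids]
                    for a in angles])
    labels = fid.argmax(axis=1)                # highest-fidelity centroid wins
    centroids = np.array([angles[labels == j].mean() for j in range(2)])

print(centroids)                               # settles near the two clusters
```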