The variational algorithm relies on a local gradient-descent technique, with tensor gradients being computable either manually or by automatic differentiation, in turn allowing for hybrid TNN models combining dense and tensor layers. Our training algorithm provides insight into the entanglement structure of the tensorized trainable weights and clarifies their expressive power as a quantum neural state. We benchmark the accuracy and efficiency of our algorithm by designing TNN models for regression and classification on different datasets. In addition, we discuss the expressive power of our algorithm based on the entanglement structure of the neural network.
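As a rough illustration of the hybrid dense-plus-tensor idea, here is a minimal sketch in JAX (not code from the paper): a dense layer feeds a layer whose weight matrix is replaced by a two-core tensor-train (MPO-like) factorization, and the cores are trained by plain gradient descent with automatically differentiated gradients. All shapes, the bond dimension, and the toy regression data are illustrative assumptions.

```python
# Hedged sketch: hybrid model with a dense layer and a tensorized (tensor-train)
# layer, trained by gradient descent via automatic differentiation.
import jax
import jax.numpy as jnp

k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)

# Hypothetical dimensions: 16 = 4*4 input/hidden features, bond dimension 3.
d_in, d_hid, bond = (4, 4), (4, 4), 3

params = {
    "dense_W": 0.1 * jax.random.normal(k1, (8, d_in[0] * d_in[1])),
    "dense_b": jnp.zeros(d_in[0] * d_in[1]),
    # Two tensor-train cores standing in for a dense (16 x 16) weight matrix.
    "core1": 0.1 * jax.random.normal(k2, (d_in[0], d_hid[0], bond)),
    "core2": 0.1 * jax.random.normal(k3, (bond, d_in[1], d_hid[1])),
    "out_w": 0.1 * jax.random.normal(k4, (d_hid[0] * d_hid[1],)),
}

def forward(params, x):
    # Dense layer.
    h = jnp.tanh(params["dense_W"].T @ x + params["dense_b"])
    # Tensorized layer: reshape the activation and contract it with the cores.
    h = h.reshape(d_in)                                    # shape (4, 4)
    h = jnp.einsum("ij,iak,kjb->ab", h, params["core1"], params["core2"])
    h = jnp.tanh(h.reshape(-1))
    return params["out_w"] @ h                             # scalar output

def loss(params, xs, ys):
    preds = jax.vmap(lambda x: forward(params, x))(xs)
    return jnp.mean((preds - ys) ** 2)

# Toy regression data, fabricated purely for illustration.
xs = jax.random.normal(jax.random.PRNGKey(1), (64, 8))
ys = jnp.sin(xs.sum(axis=1))

# Gradients of dense weights and tensor cores come from autodiff alike.
grad_fn = jax.jit(jax.grad(loss))
lr = 0.05
for step in range(200):
    grads = grad_fn(params, xs, ys)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

Because the cores sit in the same parameter tree as the dense weights, a single `jax.grad` call covers both, which is what makes this kind of hybrid model straightforward to train; the manual-gradient route mentioned above would instead contract the tensor network environments by hand.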
Full paper here.