Quantum neurons
Yudong Cao, with Gian Giacomo Guerreschi and Alán Aspuru-Guzik
Quantum Techniques in Machine Learning 2017, Verona, Italy
The quest for quantum neural nets
• A parametrized quantum system that can be trained to accomplish tasks such as classification
• In many cases it is not easy to identify the fundamental building block with which one could describe the quantum system as a learning algorithm
• This work can be seen as a conceptual attempt at addressing this issue
A neural network is a machine that is designed to mimic the way in which the brain performs a particular task or function of interest: it is nonlinear and parallel, and builds up its own rules through experience.
Basic requirements for a quantum NN
1. The initial state encodes any N-bit binary string, e.g. |01001⟩
2. It reflects one or more basic neural computing mechanisms, e.g. attractor dynamics, synaptic connections, integrate & fire, training rules, the structure of a NN
3. The evolution is based on quantum effects: superposition and entanglement
Schuld, M., Sinayskiy, I. & Petruccione, F., Quantum Inf Process (2014) 13: 2567
(Artificial) neuron
Inputs $x_j$, weights $w_j$, and a bias $b$ are combined into the weighted sum $\theta = \sum_j w_j x_j + b$; the output is $y = \sigma(\theta)$ for a sigmoid or step activation function $\sigma$.
Can we realize artificial neurons on a quantum computer?
QM + NN: an unlikely match?
• Quantum Mechanics (QM): unitary evolution, i.e. rotations in Hilbert space
• Neural Networks (NN): lossy transformations such as clustering, classification, and compression
Challenges
• Sigmoid / step activation functions: how can they be realized on quantum computers, whose dynamics is linear? Reversible circuits? Dissipative dynamics?
• Measurement? Open systems? Cost scaling? Measurement may collapse the state or reduce the scheme to a classical probabilistic algorithm (compare the story of quantum error correction)
Our proposal
• Neuron ↔ qubit: $|0\rangle$ = resting, $|1\rangle$ = active
• Activation ↔ rotation angle: information from the previous layer, inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$, enters through the weighted sum $\theta = \sum_j w_j x_j + b$, which is mapped to the angle $\varphi = \gamma\theta + \pi/4$ of a rotation $R_y(\varphi)$ on the neuron's qubit
• Activation: $y = \sigma(\theta)$
Introducing nonlinearity: repeat-until-success (RUS) circuits
Given the ability to realize $R_y(2x)$, one can use RUS to realize $R_y(2q(x))$, where $q(x) = \arctan(\tan^2 x)$. Nonlinear!
• Measure 0: success; $R_y(2q(x))$ has been applied to $|\psi\rangle$
• Measure 1: fail, but easily correctable; $|\psi\rangle$ has only been rotated by $\pi/2$ about the y axis, which can be undone
• Repeat until success
Convention: $R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$
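The two measurement branches can be verified with a few lines of linear algebra. Below is a minimal numpy sketch (not the authors' code) of one way to wire the gadget: $R_y(2x)$ on an ancilla, a controlled $R_y(\pi)$ (equal to $-iY$) onto the output, $R_y(-2x)$ on the ancilla, then a measurement. The exact ordering and signs in the original circuit may differ, but up to such conventions the success branch reproduces $R_y(2q(x))$.

```python
import numpy as np

def Ry(t):
    # Rotation about y with the half-angle convention from the slide
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def q(x):
    # The nonlinear function realized by the RUS gadget
    return np.arctan(np.tan(x) ** 2)

x = 0.7                    # example input angle
c, s = np.cos(x), np.sin(x)
mi_Y = Ry(np.pi)           # = -iY, the controlled target gate in this sketch

# Kraus operators on the output qubit for the two ancilla outcomes
K0 = c**2 * np.eye(2) + s**2 * mi_Y   # outcome 0 ("success")
K1 = c * s * (mi_Y - np.eye(2))       # outcome 1 ("failure")

# Success branch is Ry(2 q(x)); failure branch is Ry(-pi/2) up to a
# global phase of -1, hence easily undone before repeating.
print(np.allclose(K0 / np.linalg.norm(K0[:, 0]), Ry(2 * q(x))))       # True
print(np.allclose(K1 / np.linalg.norm(K1[:, 0]), -Ry(-np.pi / 2)))    # True
```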
Nesting RUS circuits: feeding the achieved rotation back in as the input takes $R_y(2x) \to R_y(2q(x)) \to \cdots \to R_y(2q^{\circ k}(x))$, where $q^{\circ k}(x) = q(q(\cdots q(x)))$, iterated $k$ times.
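A tiny numerical check (illustrative, not from the talk) shows why the nesting helps: $\pi/4$ is a fixed point of $q$, and iterates flow to $0$ below it and to $\pi/2$ above it, so $q^{\circ k}$ approaches a step function.

```python
import numpy as np

def q(x):
    return np.arctan(np.tan(x) ** 2)

def q_iter(x, k):
    # k-fold composition q(q(...q(x)))
    for _ in range(k):
        x = q(x)
    return x

# Angles below / above the fixed point pi/4 ~ 0.785
for theta in [0.55, 0.75, 0.95]:
    print([round(q_iter(theta, k), 3) for k in range(4)])
# Values below pi/4 flow toward 0; values above flow toward pi/2.
```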
The quantum neuron circuit
[Circuit schematic] Qubits from the previous layer (e.g. $|x_1 x_2 x_3 \cdots\rangle = |010\cdots\rangle$) control rotations by the angles $w_j$ and $b$, so that a rotation $R_y(\theta)$ encodes the weighted sum $\theta = \sum_j w_j x_j + b$. Applying the RUS construction $k$ times (RUS ×$k$) then realizes $R_y(2q^{\circ k}(\theta))$ on the output qubit, which ends up close to either $|0\rangle$ or $|1\rangle$ due to the nonlinear function $q$. This mirrors the classical pipeline: previous layer, weighted sum $\theta = \sum_j w_j x_j + b$, nonlinear output $y = \sigma(\theta)$.
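A classical emulation of a single quantum neuron on a basis-state input is then a one-liner. The sketch below combines this slide with the angle map $\varphi = \gamma\theta + \pi/4$ from the proposal slide; the weights, bias, and $\gamma$ are hypothetical placeholders, not values from the talk.

```python
import numpy as np

def q(x):
    return np.arctan(np.tan(x) ** 2)

def quantum_neuron_p1(xs, ws, b, gamma=0.5, k=2):
    """P(output qubit = 1) for a basis-state input, emulated classically.
    Assumes the angle map phi = gamma*theta + pi/4 and k nested RUS rounds,
    after which the output qubit is (on success) Ry(2 q^k(phi)) |0>."""
    theta = np.dot(ws, xs) + b
    phi = gamma * theta + np.pi / 4
    for _ in range(k):
        phi = q(phi)
    return np.sin(phi) ** 2   # |<1| Ry(2 phi') |0>|^2 = sin^2(phi')

# e.g. input |010> with hypothetical weights and bias
print(quantum_neuron_p1([0, 1, 0], [1.0, -0.8, 0.5], 0.2))
```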
Design choices for a network of quantum neurons:
• Size
• Neuron type
• Connectivity
• Activation function
• Weight/bias setting
• Training method
• …
Feedforward network
[Illustration: a feedforward network classifying an image as "cat"]
XOR network
Two input neurons $x_1, x_2$ feed an output $t$. Train the network such that $t = x_1 \oplus x_2$:

x1  x2  t
0   0   1
0   1   0
1   0   0
1   1   1

Input: the superposition of inputs paired with their correct outputs,
$\frac{1}{2}\left( |00\rangle|1\rangle + |01\rangle|0\rangle + |10\rangle|0\rangle + |11\rangle|1\rangle \right)$
Accuracy: $\frac{1 + \langle zz \rangle}{2}$
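As a concrete check, the training superposition can be written out as an ordinary state vector. A minimal sketch, assuming the qubit ordering $(x_1, x_2, t)$, which the slides do not specify:

```python
import numpy as np

# Training state (1/2) * sum over (x1,x2) of |x1 x2>|t>, as an 8-dim vector,
# with labels t as printed on the slide.
data = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
psi = np.zeros(8)
for (x1, x2), t in data.items():
    psi[(x1 << 2) | (x2 << 1) | t] = 0.5   # basis index for |x1 x2 t>
print(np.linalg.norm(psi))                 # 1.0 -- a valid normalized state
```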
[Plot] Solid: training on $\frac{1}{2}\left( |00\rangle|1\rangle + |01\rangle|0\rangle + |10\rangle|0\rangle + |11\rangle|1\rangle \right)$. Dashed: testing on the average over the four basis inputs 00, 01, 10, 11.
8-bit parity network
Eight input neurons $x_1, \dots, x_8$ feed a single output $t$. Train the network such that $t = x_1 \oplus \dots \oplus x_8$.
Accuracy: $\frac{1 + \langle zz \rangle}{2}$
[Plot] Solid: training on $\frac{1}{\sqrt{2^8}} \sum_{j=0}^{2^8 - 1} |j\rangle |\mathrm{Parity}(j)\rangle$. Dashed: testing on the average over all $2^8 = 256$ basis states, 00000000 through 11111111.
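The parity training state is built the same way as the XOR one; a sketch, again assuming the data register precedes the target qubit:

```python
import numpy as np

n = 8
# (1/sqrt(2^n)) * sum_j |j>|Parity(j)>, a 2^(n+1)-dimensional vector
psi = np.zeros(2 ** (n + 1))
for j in range(2 ** n):
    parity = bin(j).count("1") % 2         # parity of the bit string j
    psi[(j << 1) | parity] = 2 ** (-n / 2)
print(np.linalg.norm(psi))                 # 1.0
```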
Hopfield network
Neurons $s_j \in \{+1, -1\}$ with symmetric weights $w_{jk}$. Starting from an initial state, repeatedly update
$\theta_j = \sum_{k \neq j} w_{jk} s_k + b_j, \qquad s_j = \begin{cases} +1 & \theta_j > 0 \\ -1 & \theta_j < 0 \end{cases}$
until the final state (an attractor) is reached.
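The classical update rule translates directly into code. A small sketch with Hebbian weights for one stored pattern; the pattern and sweep count here are illustrative, not from the talk:

```python
import numpy as np

def hopfield_update(s, W, b, sweeps=3):
    """Asynchronous threshold updates s_j = sign(sum_k W[j,k] s_k + b_j).
    Entries of s are +/-1; W is symmetric with zero diagonal, so the
    sum effectively runs over k != j."""
    s = np.array(s, dtype=float)
    for _ in range(sweeps):
        for j in range(len(s)):
            theta = W[j] @ s + b[j]
            if theta != 0:
                s[j] = 1.0 if theta > 0 else -1.0
    return s

p = np.array([1, -1, 1, -1], dtype=float)   # a stored pattern
W = np.outer(p, p) - np.eye(4)              # Hebbian weights, zero diagonal
print(hopfield_update([1, -1, -1, -1], W, np.zeros(4)))  # recovers p
```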
Hopfield net of quantum neurons
Four neurons $s_1, \dots, s_4$ with the same update rule: $\theta_j = \sum_{k \neq j} w_{jk} s_k + b_j$ and $s_j = +1$ if $\theta_j > 0$, $-1$ if $\theta_j < 0$. Each update is carried out by a nested RUS circuit (RUS ×$k$) that writes the new state of the updated neuron onto a fresh qubit, $q_1^{(0)}, q_2^{(0)}, q_3^{(0)}, q_4^{(0)}, q_3^{(1)}, \dots$, so a Hopfield network of $n$ neurons run for $t$ updates uses $n + t + k$ qubits.
Numerical example
[Figure] A 3×3-grid Hopfield network with the letters C and Y as attractors; a corrupted initial input converges to an attractor over three updates (initial input; after 1 update; after 2 updates; after 3 updates).
Summary
• A building block for quantum neural networks satisfying the three requirements:
  • Initial state encoding n-bit strings: neuron ↔ qubit
  • One or more neural computing mechanisms: sigmoid/step activation, attractor dynamics
  • Evolution based on quantum effects: training with superpositions of examples
• Applications and extensions:
  • Superpositions of weights (networks)?
  • Different forms of networks
  • Different activation functions
Acknowledgements
Postdocs: Peter Johnson, Jonathan Olson
Graduate students: Jhonathan Romero Fontalvo, Hannah (Sukin) Sim, Tim Menke, Florian Hase
Gian Giacomo Guerreschi, Alán Aspuru-Guzik
Thanks!