Quantum Coherence and Quantum Entanglement: Quantum Classification

My goal here was to build a quantum deep neural network for classification tasks, but all the effort involved in calculating errors, updating weights, training a model, and so forth turned out to be completely unnecessary. The circuit above is much simpler than it may already look, and I am going to fully break it down for you.

Disclaimer

This circuit is intentionally not optimized. Rather, it is intended to be comprehensible. I intend to address optimization as I add complexity to future circuits, which will have their own associated articles.

Background

The origin of this classification task is a very simple neural network that had been written in Python. Long ago, I rewrote this neural network in C to force myself to better understand how it worked. Without the use of NumPy, in particular, I had to write all the functions from scratch (I avoided potentially helpful C libraries as well). Armed with this relatively deep understanding, I selected this same neural network to translate further from C into OpenQASM.

Registers

This circuit uses four registers. The “a” register consists of two ancilla qubits, each paired up with one qubit from the two-qubit “data” register. The “train” register consists of the training data from the original neural network in Python; the data is mapped to 11 qubits. And, of course, there is a classical register for taking measurements.

The reason for the two ancilla qubits and the two data qubits is that the original neural network had only two classifications, represented numerically by 0 and 1. One ancilla-data pair is used to compare the test state to the training data that is classified as 0, and the other ancilla-data pair is used to compare the test state to the training data that is classified as 1.
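
As a rough sketch, the four registers described above might be declared like this in OpenQASM 2.0 (the register names follow the description in this section; the author's actual circuit layout may differ):

    OPENQASM 2.0;
    include "qelib1.inc";

    qreg a[2];       // ancilla qubits, one per class (0 and 1)
    qreg data[2];    // test-state qubits, one paired with each ancilla
    qreg train[11];  // training data mapped onto 11 qubits
    creg c[2];       // classical bits for the ancilla measurements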

Initial States

The ancilla qubits are initialized with Hadamard gates, the first operation when performing SWAP Tests, which are used to compare quantum states.

Read more about SWAP Tests:

  • Comparing Quantum States

  • Basis-Specific SWAP Test

  • Simplified Quantum Machine Learning (QML) Classification

  • Comparing Entangled States

The data qubits are prepared identically with simple rotations around the y axis. The training data is also mapped with y rotations, except for one qubit which remains in its ground state and one which has a Pauli-X (NOT) gate applied to it.
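
Continuing the sketch, the state preparation could look like the following; the angles and qubit indices here are placeholders, since the actual normalized training and test values are not listed in this article:

    // SWAP Tests begin with Hadamards on the ancilla qubits
    h a[0];
    h a[1];

    // test state: both data qubits receive the same y rotation (placeholder angle)
    ry(0.7*pi) data[0];
    ry(0.7*pi) data[1];

    // training data: y rotations with per-sample angles (placeholders)...
    ry(0.3*pi) train[0];
    ry(0.5*pi) train[1];
    // ...except one qubit left in its ground state (no gate needed)
    // and one flipped all the way with a Pauli-X
    x train[10];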

Normalization

The reason the data qubits and training qubits can be mapped with y rotations is that the original data contained integer values that had to be normalized between zero and one. If you have values ranging from 0 to 360, then 360 would be normalized to 1, 180 would be normalized to 0.5, 90 would be normalized to 0.25, and so forth. I took the normalized values from my C language implementation and converted them to y-axis rotations.
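
In other words, each raw value is simply divided by the maximum of its range. For the 0-to-360 example above:

    \[ x_{\text{norm}} = \frac{x}{360}, \qquad 360 \mapsto 1,\quad 180 \mapsto 0.5,\quad 90 \mapsto 0.25 \]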

Calculating Theta

Calculating the angle of rotation around the y axis is normally a matter of trigonometry, but not in this case. I did not want the normalized values to be converted into probabilities of measuring |1> because that would cause states close to |0> and |1> to seem closer together than states near the equator of the Bloch Sphere. For purposes of SWAP Testing, the distance between 0 and 1 has to be the same as the distance between 49 and 50. Therefore, each qubit’s rotation around the y axis is merely the classical normalized value multiplied by pi.
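
So, under this scheme (rather than the usual arcsin-based probability encoding), the rotation for each normalized value is simply

    \[ \theta = x_{\text{norm}} \cdot \pi , \]

e.g. a normalized value of 0.5 becomes ry(0.5*pi). Because the Bloch-sphere angle between two encoded states is then proportional to the difference of the classical values, the SWAP-Test distance between raw values 0 and 1 matches the distance between 49 and 50, as intended.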

Controlled-SWAPs

SWAP Tests begin by applying Hadamard gates to the ancilla qubits. These are followed by Fredkin gates, which are controlled-SWAP gates. The ancilla qubits are the control qubits. For additional detail, I refer again to the links I provided earlier.

I went in simple numerical order for readability. If the training data is classified as 0, the Fredkin gate takes a[0] as its control and compares data[0] to that training qubit. If the training data is classified as 1, the Fredkin gate takes a[1] as its control and compares data[1] to that training qubit. In other words, a[0] and data[0] are used to compare the test state to all the training data that is classified as 0, and a[1] and data[1] are used to compare the test state to all the training data that is classified as 1.
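
A sketch of that wiring, assuming for illustration that train[0] carries a sample labeled 0 and train[1] carries a sample labeled 1; the Fredkin gate is written out in its standard cx/ccx decomposition so the snippet does not depend on a particular library definition:

    // Fredkin (controlled-SWAP): swap t1 and t2 when ctl is |1>
    gate fredkin ctl, t1, t2 { cx t2, t1; ccx ctl, t1, t2; cx t2, t1; }

    // class-0 comparison: a[0] controls a swap of data[0] with a 0-labeled training qubit
    fredkin a[0], data[0], train[0];

    // class-1 comparison: a[1] controls a swap of data[1] with a 1-labeled training qubit
    fredkin a[1], data[1], train[1];

    // ...repeated for the remaining training qubits, each routed to the
    // a[0]/data[0] or a[1]/data[1] pair according to its classical label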

Finalizing the SWAP Tests

SWAP Tests are finalized by taking x measurements of the ancilla qubits. The x measurements are distinguishable from the usual z measurements by the presence of Hadamard gates that are applied immediately preceding the measurements.
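
Continuing the sketch, the end of the circuit would then be:

    // closing Hadamards turn the z measurements into x measurements
    h a[0];
    h a[1];

    // one classical bit per SWAP Test
    measure a[0] -> c[0];
    measure a[1] -> c[1];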

Measurements

Measuring the ancilla qubits provides the distance between the test data and the training data. You measure |0> with a probability of 1 when states are identical and you measure |0> with a probability of 0.5 when states are maximally different. The a[0] qubit measures the distance between the test data and the training data that is classified as 0, and the a[1] qubit measures the distance between the test data and the training data that is classified as 1.
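
For reference, this is the standard SWAP Test statistic: for a test state |psi> and a training state |phi>, the ancilla is measured as |0> with probability

    \[ P(0) = \tfrac{1}{2} + \tfrac{1}{2}\,\lvert\langle\psi\vert\phi\rangle\rvert^{2} , \]

which equals 1 when the two states are identical and 0.5 when they are orthogonal (maximally different).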

Classification

For this article, I selected a value for the test data that should result in it being classified as a 1. That is to say, the same value is determined to be a probable 1 when you run it through the classical model. And, according to the histogram, the ancilla qubit representing the 1 classification did, in fact, have a higher probability of being measured as |0> than the one representing the 0 classification. This means that the test data is closer to the training data that is classified as 1 than it is to the training data that is classified as 0.
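
The decision rule implied here is just a comparison of the two ancilla statistics, with each probability estimated from the histogram counts:

    \[ \text{class} = \begin{cases} 1, & P_{a[1]}(0) > P_{a[0]}(0) \\ 0, & \text{otherwise} \end{cases} \]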

Future Work

The original neural network was slightly more complex. It actually used three features to distinguish the two classes, but I only used one of those features here. Therefore, a logical next step would be to perform quantum classification using multiple features. Beyond that, another logical step would be to allow more than just two classes; however, that would require changing the classical model that the circuit is based on. At this stage, it is important to know that the quantum result is aligned with the classical result, especially as the quantum circuit grows in complexity.

Acknowledgment

This circuit was written in OpenQASM using the IBM Q Experience circuit editor, and it ran on the provided 32-qubit simulator.

Translated from: https://medium.com/swlh/quantum-classification-cecbc7831be
