### 数学代写|信息论代写information theory代考|ELEN90030

statistics-lab™ supports you throughout your studies abroad. We have built a solid reputation for information theory assignment help and guarantee reliable, high-quality, original statistics writing. Our experts have extensive experience with information theory, and assignments of every related kind go without saying.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## 数学代写|信息论代写information theory代考|Density Matrix Treatment of Logical Entropy

This section facilitates the transition from the 'classical' antecedents to quantum logical information theory. It was previously noted that a binary relation $R \subseteq U \times U$ on $U=\left\{u_{1}, \ldots, u_{n}\right\}$ can be represented by an $n \times n$ incidence matrix $\operatorname{In}(R)$ where
$$\operatorname{In}(R)_{ij}=\begin{cases} 1 & \text{if } \left(u_{i}, u_{j}\right) \in R \\ 0 & \text{if } \left(u_{i}, u_{j}\right) \notin R. \end{cases}$$

Then taking $R$ as the equivalence relation $\operatorname{indit}(\pi)$ associated with a partition $\pi=\left\{B_{1}, \ldots, B_{m}\right\}$, the density matrix $\rho(\pi)$ of the partition $\pi$ (with equiprobable points) is just the incidence matrix $\operatorname{In}(\operatorname{indit}(\pi))$ rescaled to have trace 1 (i.e., the sum of diagonal entries is 1):
$$\rho(\pi)=\frac{1}{|U|} \operatorname{In}(\text { indit }(\pi)) .$$
The more general density matrix for $\pi$ with point probabilities $p=\left(p_{1}, \ldots, p_{n}\right)$ is constructed block by block. For any block $B_{i}$, consider the unit column vector $\left|B_{i}\right\rangle$ with entries $\sqrt{\frac{p_{j}}{\operatorname{Pr}\left(B_{i}\right)}}$ for $u_{j} \in B_{i}$ and zero otherwise. Then $\rho\left(B_{i}\right)$ is defined as that unit column vector times its transpose, which yields the $n \times n$ matrix $\rho\left(B_{i}\right)=\left|B_{i}\right\rangle\left\langle B_{i}\right|$ with:
$$\rho\left(B_{i}\right)_{jk}=\begin{cases} \sqrt{\frac{p_{j}}{\operatorname{Pr}\left(B_{i}\right)}} \sqrt{\frac{p_{k}}{\operatorname{Pr}\left(B_{i}\right)}}=\frac{\sqrt{p_{j} p_{k}}}{\operatorname{Pr}\left(B_{i}\right)} & \text{if } \left(u_{j}, u_{k}\right) \in B_{i} \times B_{i} \\ 0 & \text{otherwise.} \end{cases}$$
Then the density matrix for the partition $\pi$ is the probability-weighted sum $\rho(\pi)=\sum_{i=1}^{m} \operatorname{Pr}\left(B_{i}\right) \rho\left(B_{i}\right)$, so that:
$$\rho(\pi)_{jk}=\begin{cases} \sqrt{p_{j} p_{k}} & \text{if } \left(u_{j}, u_{k}\right) \in \operatorname{indit}(\pi) \\ 0 & \text{otherwise.} \end{cases}$$
Since the self-pairs $(u, u)$ are always in the indit sets, the diagonal elements are just the point probabilities $p=\left(p_{1}, \ldots, p_{n}\right)$, and the trace (sum of diagonal elements) is always 1. For instance, if $U=\left\{u_{1}, \ldots, u_{4}\right\}$ and $\pi=\left\{\left\{u_{1}\right\},\left\{u_{2}, u_{4}\right\},\left\{u_{3}\right\}\right\}$ with the point probabilities $p=\left(p_{1}, \ldots, p_{4}\right)$, then the density matrix is:
$$\rho(\pi)=\left[\begin{array}{cccc} p_{1} & 0 & 0 & 0 \\ 0 & p_{2} & 0 & \sqrt{p_{2} p_{4}} \\ 0 & 0 & p_{3} & 0 \\ 0 & \sqrt{p_{4} p_{2}} & 0 & p_{4} \end{array}\right]$$
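The block-by-block construction above can be sketched in a few lines of NumPy; the function name is illustrative, and the partition and probabilities are the four-point example from the text (with 0-indexed points):

```python
import numpy as np

# Sketch of rho(pi) = sum_i Pr(B_i) |B_i><B_i| for the example partition
# pi = {{u1}, {u2, u4}, {u3}} with point probabilities p.
def partition_density_matrix(blocks, p):
    n = len(p)
    rho = np.zeros((n, n))
    for block in blocks:
        pr_b = sum(p[j] for j in block)        # Pr(B_i)
        ket = np.zeros(n)
        for j in block:
            ket[j] = np.sqrt(p[j] / pr_b)      # entries sqrt(p_j / Pr(B_i))
        rho += pr_b * np.outer(ket, ket)       # Pr(B_i) * |B_i><B_i|
    return rho

p = np.array([0.1, 0.2, 0.3, 0.4])
blocks = [[0], [1, 3], [2]]                    # u1 | u2, u4 | u3 (0-indexed)
rho = partition_density_matrix(blocks, p)
```

As stated above, the diagonal reproduces $p$, the only nonzero off-diagonal entries are $\sqrt{p_{2} p_{4}}$, and the trace is 1.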

## 数学代写|信息论代写information theory代考|Linearizing Logical Entropy to Quantum Logical Entropy

As noted by Charles Bennett, one of the founders of quantum information theory, the idea of information-as-distinctions carries over to quantum mechanics.
[Information] is the notion of distinguishability abstracted away from what we are distinguishing, or from the carrier of information. …And we ought to develop a theory of information which generalizes the theory of distinguishability to include these quantum properties… [2, pp. 155-157]
Given a normalized vector $|\psi\rangle$ in an $n$-dimensional Hilbert space $V$, a pure state density matrix is formed as $\rho(\psi)=|\psi\rangle\langle\psi|$, and a mixed state density matrix is some probability mixture $\rho=\sum_{i} p_{i} \rho\left(\psi_{i}\right)$ of pure state density matrices. Any such density matrix has a spectral decomposition of the form $\rho=\sum_{i} p_{i} \rho\left(\psi_{i}\right)$ where the vectors $\psi_{i}$ and $\psi_{j}$ are orthogonal for $i \neq j$. The general definition of the quantum logical entropy of a density matrix is $h(\rho)=1-\operatorname{tr}\left[\rho^{2}\right]$: $\rho$ is a pure state if and only if $\operatorname{tr}\left[\rho^{2}\right]=1$, so $h(\rho)=0$, while $\operatorname{tr}\left[\rho^{2}\right]<1$ for mixed states, so $h(\rho)>0$.

The formula $h(\rho)=1-\operatorname{tr}\left[\rho^{2}\right]$ is hardly new. Indeed, $\operatorname{tr}\left[\rho^{2}\right]$ is usually called the purity of the density matrix, since a state $\rho$ is pure if and only if $\operatorname{tr}\left[\rho^{2}\right]=1$, so $h(\rho)=0$; otherwise $\operatorname{tr}\left[\rho^{2}\right]<1$, so $h(\rho)>0$, and the state is said to be mixed. Hence, the complement $1-\operatorname{tr}\left[\rho^{2}\right]$ has been called the "mixedness" [9, p. 5] or "impurity" of the state $\rho$. The seminal paper of Manfredi and Feix [10] approaches the same formula $1-\operatorname{tr}\left[\rho^{2}\right]$ (which they denote as $S_{2}$) from the advanced viewpoint of Wigner functions, and they present strong arguments for this notion of quantum entropy.
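The pure/mixed dichotomy can be illustrated numerically; the two example states below are assumptions chosen for the sketch, not from the text:

```python
import numpy as np

# Minimal check of h(rho) = 1 - tr[rho^2] (the "mixedness" of rho).
def logical_entropy(rho):
    return 1.0 - np.trace(rho @ rho).real

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a normalized pure state
pure = np.outer(psi, psi.conj())           # |psi><psi|, so tr[rho^2] = 1
mixed = np.diag([0.5, 0.5])                # maximally mixed, tr[rho^2] = 1/2

assert np.isclose(logical_entropy(pure), 0.0)   # pure state:  h(rho) = 0
assert np.isclose(logical_entropy(mixed), 0.5)  # mixed state: h(rho) > 0
```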

Our goal is to develop quantum logical entropy in a manner that brings out the analogy with classical logical entropy and relates it closely to quantum measurement as the process of creating distinctions in QM.

Let $F: V \rightarrow V$ be a self-adjoint operator (observable) on an $n$-dimensional Hilbert space $V$ with the real eigenvalues $\phi_{1}, \ldots, \phi_{I}$, and let $U=\left\{u_{1}, \ldots, u_{n}\right\}$ be an orthonormal (ON) basis of eigenvectors of $F$. The quantum version of a "dit" is a "qudit." A qudit is relativized to an observable, just as classically a distinction is a distinction of a partition. Then there is a set partition $\pi=\left\{B_{i}\right\}_{i=1, \ldots, I}$ on the ON basis $U$ so that $B_{i}$ is a basis for the eigenspace of the eigenvalue $\phi_{i}$ and $\left|B_{i}\right|$ is the "multiplicity" (dimension of the eigenspace) of the eigenvalue $\phi_{i}$ for $i=1, \ldots, I$. Note that the real-valued function $f: U \rightarrow \mathbb{R}$ taking each eigenvector $u_{j} \in B_{i} \subseteq U$ to its eigenvalue $\phi_{i}$, so that $f^{-1}\left(\phi_{i}\right)=B_{i}$, contains all the information in the self-adjoint operator $F: V \rightarrow V$, since $F$ can be reconstructed by defining it on the basis $U$ as $F u_{j}=f\left(u_{j}\right) u_{j}$.
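The reconstruction $F u_{j}=f\left(u_{j}\right) u_{j}$ can be sketched numerically. The standard basis of $\mathbb{R}^{3}$ and the eigenvalue assignment below are assumptions for illustration, with $\phi_{1}=2$ of multiplicity 2 (block $B_{1}=\left\{u_{1}, u_{2}\right\}$) and $\phi_{2}=5$ (block $B_{2}=\left\{u_{3}\right\}$):

```python
import numpy as np

# U: ON basis of eigenvectors (columns); f: eigenvalue function on U.
U = np.eye(3)
f = [2.0, 2.0, 5.0]                      # f(u_j) for j = 1, 2, 3

# F u_j = f(u_j) u_j  <=>  F = sum_j f(u_j) |u_j><u_j|
F = sum(f[j] * np.outer(U[:, j], U[:, j]) for j in range(3))

# Each basis vector is an eigenvector with its assigned eigenvalue.
assert np.allclose(F @ U[:, 0], 2.0 * U[:, 0])
assert np.allclose(F @ U[:, 2], 5.0 * U[:, 2])
```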

## 数学代写|信息论代写information theory代考|Theorems About Quantum Logical Entropy

Classically, a pair of elements $\left(u_{j}, u_{k}\right)$ either "cohere" together in the same block of a partition on $U$, i.e., are an indistinction of the partition, or they do not, i.e., they are a distinction of the partition. In the quantum case, the nonzero off-diagonal entries $\alpha_{j} \alpha_{k}^{*}$ in the pure state density matrix $\rho(\psi)=|\psi\rangle\langle\psi|$ are called quantum "coherences" ([4, p. 303]; [1, p. 177]) because they give the amplitude of the eigenstates $\left|u_{j}\right\rangle$ and $\left|u_{k}\right\rangle$ "cohering" together in the coherent superposition state vector $|\psi\rangle=\sum_{j=1}^{n}\left\langle u_{j} \mid \psi\right\rangle\left|u_{j}\right\rangle=\sum_{j} \alpha_{j}\left|u_{j}\right\rangle$. The coherences are classically modeled by the nonzero off-diagonal entries $\sqrt{p_{j} p_{k}}$ for the indistinctions $\left(u_{j}, u_{k}\right) \in B_{i} \times B_{i}$, i.e., coherences $\approx$ indistinctions.

For an observable $F$, let $\phi: U \rightarrow \mathbb{R}$ be the $F$-eigenvalue function assigning the eigenvalue $\phi\left(u_{i}\right)=\phi_{i}$ to each $u_{i}$ in the ON basis $U=\left\{u_{1}, \ldots, u_{n}\right\}$ of $F$-eigenvectors. The range of $\phi$ is the set of $F$-eigenvalues $\left\{\phi_{1}, \ldots, \phi_{I}\right\}$. Let $P_{\phi_{i}}: V \rightarrow V$ be the projection matrix in the $U$-basis to the eigenspace of $\phi_{i}$. The projective $F$-measurement of the state $\psi$ transforms the pure state density matrix $\rho(\psi)$ (represented in the ON basis $U$ of $F$-eigenvectors) to yield the Lüders mixture density matrix $\hat{\rho}(\psi)=\sum_{i=1}^{I} P_{\phi_{i}} \rho(\psi) P_{\phi_{i}}$ [1, p. 279]. The off-diagonal elements of $\rho(\psi)$ that are zeroed in $\hat{\rho}(\psi)$ are the coherences (quantum indistinctions or quindits) that are turned into "decoherences" (quantum distinctions or qudits of the observable being measured). ${ }^{2}$
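The Lüders mixture $\hat{\rho}(\psi)=\sum_{i} P_{\phi_{i}} \rho(\psi) P_{\phi_{i}}$ and the zeroing of cross-block coherences can be sketched as follows; the state and the two eigenspace projections (blocks $\left\{u_{1}, u_{2}\right\}$ and $\left\{u_{3}\right\}$) are assumed for the example:

```python
import numpy as np

psi = np.array([0.6, 0.0, 0.8])          # normalized state, real for simplicity
rho = np.outer(psi, psi)                 # pure state rho(psi)

P1 = np.diag([1.0, 1.0, 0.0])            # projection to eigenspace of phi_1
P2 = np.diag([0.0, 0.0, 1.0])            # projection to eigenspace of phi_2

rho_hat = P1 @ rho @ P1 + P2 @ rho @ P2  # Lüders mixture

# The cross-block coherence rho[0, 2] = 0.48 is zeroed in rho_hat,
# while the diagonal and the trace are unchanged.
assert rho_hat[0, 2] == 0.0
assert np.isclose(np.trace(rho_hat), 1.0)
```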

For any observable $F$ and a pure state $\psi$, a quantum logical entropy was defined as $h(F: \psi)=\operatorname{tr}\left[P_{[\operatorname{qudit}(F)]} \rho(\psi) \otimes \rho(\psi)\right]$. That definition was the quantum generalization of the "classical" logical entropy defined as $h(\pi)=(p \times p)(\operatorname{dit}(\pi))$.
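For a pure state, this quantity is standardly computed in this framework as the logical entropy of the measurement-outcome distribution, $h(F: \psi)=1-\sum_{i} \operatorname{Pr}\left(\phi_{i}\right)^{2}$ with $\operatorname{Pr}\left(\phi_{i}\right)=\left\langle\psi\left|P_{\phi_{i}}\right| \psi\right\rangle$, which also equals $1-\operatorname{tr}\left[\hat{\rho}(\psi)^{2}\right]$. A sketch under that identity (state and projections assumed as before):

```python
import numpy as np

psi = np.array([0.6, 0.0, 0.8])
projections = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]

probs = [psi @ P @ psi for P in projections]     # Pr(phi_i) = <psi|P_i|psi>
h = 1.0 - sum(pr**2 for pr in probs)             # quantum logical entropy

# Cross-check against 1 - tr[rho_hat^2] for the Lüders mixture rho_hat.
rho = np.outer(psi, psi)
rho_hat = sum(P @ rho @ P for P in projections)
assert np.isclose(h, 1.0 - np.trace(rho_hat @ rho_hat))
```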

