## Mathematics | Information Theory | FEO3350

statistics-lab™ supports your study-abroad career. It has established a solid reputation for information theory coursework, guaranteeing reliable, high-quality, and original statistics writing services. Our experts are extremely experienced with information theory, and can handle all kinds of information theory-related assignments.

## The SMI of a System of Interacting Particles in Pairs Only

In this section we consider a special case of a system of interacting particles. We start with an ideal gas, i.e., a system for which we can neglect all intermolecular interactions. Strictly speaking, such a system does not exist. However, if the gas is very dilute, so that the average intermolecular distance is very large, the system behaves as if there were no interactions among the particles.

Next, we increase the density of the particles. At first we find that pair interactions affect the thermodynamics of the system. Increasing the density further, triplet, quadruplet, and higher-order interactions will also affect the behavior of the system. In the following we provide a very brief description of the first-order deviation from an ideal gas: systems for which one must take into account pair interactions but may neglect triplet and higher-order interactions. The reader who is not interested in the details of the derivation can go directly to the result in Eq. (2.51) and the following analysis of the MI.

We start with the general configurational PF of the system, Eq. (2.31) which we rewrite in the form:
$$Z_N=\int d R^N \prod_{i<j} \exp \left[-\beta U_{i j}\right]$$
where $U_{i j}$ is the pair potential between particles $i$ and $j$. It is assumed that the total potential energy is pairwise additive.
Define the so-called Mayer $f$-function, by:
$$f_{i j}=\exp \left(-\beta U_{i j}\right)-1$$
We can rewrite $Z_N$ as:
$$Z_N=\int d R^N \prod_{i<j}\left(f_{i j}+1\right)=\int d R^N\left[1+\sum_{i<j} f_{i j}+\sum f_{i j} f_{j k}+\cdots\right]$$
Neglecting all terms beyond the first sum, we obtain:
$$Z_N=V^N+\frac{N(N-1)}{2} \int f_{12} d R^N=V^N+\frac{N(N-1)}{2} V^{N-2} \int f_{12} d R_1 d R_2$$
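The pair-interaction integral above can be evaluated numerically. Below is a minimal Python sketch (not from the text; the square-well potential and its parameters are illustrative assumptions) that computes $\int f(r)\, 4\pi r^2\, dr$ and from it the second virial coefficient $B_2=-\frac{1}{2}\int f(r)\, 4\pi r^2\, dr$, checking the result against the known closed form for a square well:

```python
import numpy as np

# Illustrative square-well pair potential (assumed parameters; reduced units, beta = 1):
# u(r) = +inf for r < sigma (hard core), -eps for sigma <= r < lam*sigma, 0 beyond.
sigma, lam, eps = 1.0, 1.5, 0.5

def mayer_f(r):
    """Mayer f-function f(r) = exp(-beta*u(r)) - 1, with beta = 1."""
    u = np.where(r < sigma, np.inf, np.where(r < lam * sigma, -eps, 0.0))
    return np.exp(-u) - 1.0

# numerical integral of f(r) 4*pi*r^2 dr over the relative coordinate (trapezoid rule)
r = np.linspace(1e-6, 5 * sigma, 200001)
y = mayer_f(r) * 4 * np.pi * r**2
integral = float(np.sum((y[1:] + y[:-1]) * np.diff(r)) / 2)

# second virial coefficient B2 = -(1/2) * integral
B2 = -0.5 * integral

# closed form for the square well, used as a sanity check
B2_exact = (2 * np.pi / 3) * sigma**3 * (1 - (np.exp(eps) - 1) * (lam**3 - 1))
```

Since the well is attractive here, $B_2$ comes out negative, corresponding to a pressure below the ideal-gas value at this temperature.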

## Entropy-Change in Phase Transition

In this section, we shall discuss the entropy-changes associated with phase transitions. Here, by entropy we mean thermodynamic entropy, the units of which are cal/(deg $\mathrm{mol}$). However, as we have seen in Chap. 5 of Ben-Naim [1], the entropy is, up to a multiplicative constant, an SMI defined on the distribution of locations and velocities (or momenta) of all particles in the system at equilibrium. To convert from entropy to SMI one has to divide the entropy by the factor $k_B \log _e 2$, where $k_B$ is the Boltzmann constant and $\log _e 2$ is the natural logarithm of 2, which we denote by $\ln 2$. Once we do this conversion from entropy to SMI, we obtain the SMI in units of bits. In this section we shall discuss mainly the transitions between gases, liquids and solids. Figure 2.9 shows a typical phase diagram of a one-component system. For more details on phase diagrams, see Ben-Naim and Casadei [8].
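As a quick numerical illustration of this conversion (the entropy value used below is an arbitrary example, not from the text), dividing an entropy in J/K by $k_B \ln 2$ gives the corresponding SMI in bits:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy_to_bits(S):
    """Convert a thermodynamic entropy S (in J/K) to an SMI in bits
    by dividing by the factor k_B * ln 2."""
    return S / (k_B * math.log(2))

# illustrative: an entropy-change of 1 cal/(deg mol) = 4.184 J/(K mol)
bits = entropy_to_bits(4.184)
```

The huge number obtained (on the order of $10^{23}$ bits per mole) reflects the fact that the SMI counts missing information about every particle in the system.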

It is well known that a solid has a lower entropy than a liquid, and a liquid has a lower entropy than a gas. These facts are usually interpreted in terms of order-disorder. This interpretation of entropy is invalid; more on this in Ben-Naim [6]. Although it is true that a solid may be viewed as more ordered than a liquid, it is difficult to argue that a liquid is more ordered or less ordered than a gas.

In the following we shall interpret entropy as an SMI, and different entropies in terms of different MI due to different intermolecular interactions. We shall discuss changes of phase at constant temperature. Therefore, all changes in the SMI (hence, in the entropy) will be due to the locational distributions; there are no changes in the momentum distribution.

The line SG in Fig. 2.9 is the line along which solid and gas coexist. The slope of this curve is given by:
$$\left(\frac{d P}{d T}\right)_{e q}=\frac{\Delta S_s}{\Delta V_s}$$
In the process of sublimation ($s$), both the entropy-change and the volume-change are always positive. We denote by $\Delta V_s$ the change in the volume of one mole of the substance when it is transferred from the solid to the gaseous phase. This volume-change is always positive; the reason is that a mole of the substance occupies a much larger volume in the gaseous phase than in the solid phase (at the same temperature and pressure).

The entropy-change $\Delta S_s$ is also positive. This entropy-change is traditionally interpreted in terms of a transition from an ordered phase (solid) to a disordered (gaseous) phase. However, the more correct interpretation is that the entropy-change is due to two factors: the huge increase in the accessible volume available to each particle, and the decrease in the extent of the intermolecular interactions. Note that the slope of the SG curve is quite small (but positive) due to the large $\Delta V_s$.



## Mathematics | Information Theory | EE430


## The Fourth Step: The SMI of Locations and Momenta of N Independent Particles in a Box of Volume V. Adding a Correction Due to Indistinguishability of the Particles

The final step is to proceed from a single particle in a box, to $N$ independent particles in a box of volume $V$, Fig. 2.4.

We say that we know the microstate of a particle when we know the location $(x, y, z)$ and the momentum $\left(p_x, p_y, p_z\right)$ of that particle within the box. For a system of $N$ independent particles in a box, we can write the SMI of the system as $N$ times the SMI of one particle, i.e., we write:
$$\mathrm{SMI}(N \text { independent particles })=N \times \mathrm{SMI} \text { (one particle) }$$
This is the SMI for $N$ independent particles. In reality, there could be correlations among the microstates of all the particles. We shall mention here correlations due to the indistinguishability of the particles, and correlations due to intermolecular interactions among all the particles. We shall discuss these two sources of correlation separately. Recall that the microstate of a single particle includes the location and the momentum of that particle. Let us focus on the location of one particle in a box of volume $V$. We write the locational SMI as:
$$H_{\max }(\text { location })=\log V$$
For $N$ independent particles, we write the locational SMI as:
$$H_{\max }(\text { locations of } N \text { particles })=\sum_{i=1}^N H_{\max }(\text { one particle })$$
Since in reality, the particles are indistinguishable, we must correct Eq. (2.22). We define the mutual information corresponding to the correlation between the particles as:

$$I(1 ; 2 ; \ldots ; N)=\ln N !$$
Hence, instead of (2.22), for the SMI of $N$ indistinguishable particles, we write:
$$H(N \text { particles })=\sum_{i=1}^N H(\text { one particle })-\ln N !$$
A detailed justification for introducing $\ln N!$ as a correction due to the indistinguishability of the particles is discussed in Sect. 5.2 of Ben-Naim [1]. Here we write the final result for the SMI of $N$ indistinguishable (but non-interacting) particles as:
$$H(N \text { indistinguishable particles })=N \log V\left(\frac{2 \pi m e k_B T}{h^2}\right)^{3 / 2}-\log N !$$
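A small Python sketch (an illustration, not part of the text) evaluates this expression per particle, using Stirling's approximation $\ln N! \approx N \ln N - N$ and base-2 logarithms so that the SMI comes out in bits; the mass, temperature, volume, and particle number below are assumed values roughly corresponding to argon near room temperature:

```python
import math

# physical constants (SI)
k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J*s

def smi_per_particle_bits(m, T, V, N):
    """SMI (in bits) per particle for N indistinguishable, non-interacting
    particles in a box of volume V, using ln N! ~ N ln N - N (Stirling)."""
    arg = (2 * math.pi * m * math.e * k_B * T / h**2) ** 1.5
    # H = N log2(V * arg) - log2(N!)  =>  H/N = log2((V/N) * arg) + log2(e)
    return math.log2((V / N) * arg) + math.log2(math.e)

# illustrative: ~argon (m = 6.63e-26 kg) in a 1-liter box at 298 K
bits = smi_per_particle_bits(6.63e-26, 298.0, 1e-3, 10**23)
```

The result is a few tens of bits per particle, which is the order of magnitude one expects from the Sackur-Tetrode-like form of the expression.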

## The Entropy of a System of Interacting Particles. Correlations Due to Intermolecular Interactions

In this section we derive the most general relationship between the SMI (or the entropy) of a system of interacting particles and the corresponding mutual information (MI). Later on in this chapter we shall apply this general result to some specific cases. The implication of this result is very important in interpreting the concept of entropy in terms of SMI. In other words, the “informational interpretation” of entropy is effectively extended to all systems of interacting particles at equilibrium.
We start with some basic concepts from classical statistical mechanics [7]. The classical canonical partition function (PF) of a system characterized by the variables $T, V, N$ is:
$$Q(T, V, N)=\frac{Z_N}{N ! \Lambda^{3 N}}$$
where $\Lambda^3$ is called the momentum partition function ($\Lambda$ is the thermal de Broglie wavelength), and $Z_N$ is the configurational PF of the system:
$$Z_N=\int \cdots \int d R^N \exp \left[-\beta U_N\left(R^N\right)\right]$$
Here, $U_N\left(R^N\right)$ is the total interaction energy among the $N$ particles at a configuration $R^N=R_1, \cdots, R_N$. Statistical thermodynamics provides the probability density for finding the particles at a specific configuration $R^N=R_1, \cdots, R_N$, which is:
$$P\left(R^N\right)=\frac{\exp \left[-\beta U_N\left(R^N\right)\right]}{Z_N}$$
where $\beta=\left(k_B T\right)^{-1}$ and $T$ is the absolute temperature. In the following we choose $k_B=1$. This will facilitate the connection between the entropy-change and the change in the SMI. When there are no intermolecular interactions (ideal gas), the configurational $\mathrm{PF}$ is $Z_N=V^N$, and the corresponding partition function reduces to:
$$Q^{i g}(T, V, N)=\frac{V^N}{N ! \Lambda^{3 N}}$$
Next we define the change in the Helmholtz energy $(A)$ due to the interactions as:
$$\Delta A=A-A^{i g}=-T \ln \frac{Q(T, V, N)}{Q^{i g}(T, V, N)}=-T \ln \frac{Z_N}{V^N}$$
This change in Helmholtz energy corresponds to the process of “turning-on” the interaction among all the particles at constant $(T, V, N)$, Fig. 2.5.
The corresponding change in the entropy is:
\begin{aligned} \Delta S & =-\frac{\partial \Delta A}{\partial T}=\ln \frac{Z_N}{V^N}+T \frac{1}{Z_N} \frac{\partial Z_N}{\partial T} \\ & =\ln Z_N-N \ln V+\frac{1}{T} \int d R^N P\left(R^N\right) U_N\left(R^N\right) \end{aligned}
We now substitute $U_N\left(R^N\right)$ from (2.36) into (2.35) to obtain the expression for the change in entropy corresponding to “turning on” the interactions:
$$\Delta S=-N \ln V-\int P\left(R^N\right) \ln P\left(R^N\right) d R^N$$
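The equality between this expression and the preceding one can be checked numerically on a discrete toy model, where sums over lattice configurations replace the configurational integrals (the lattice size, the random pair potential, and all parameters below are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 8, 2, 1.0   # M lattice sites play the role of V; N = 2 particles

# assumed toy pair potential: a random symmetric energy for each configuration (r1, r2)
U = rng.normal(size=(M, M))
U = (U + U.T) / 2

w = np.exp(-U / T)
Z = w.sum()            # configurational PF: a sum over configurations replaces the integral
P = w / Z              # Boltzmann configurational distribution, Eq. (2.33)-style
U_avg = (P * U).sum()

# route 1: Delta S = ln Z - N ln V + <U>/T   (the derivative route above)
dS_1 = np.log(Z) - N * np.log(M) + U_avg / T
# route 2: Delta S = -N ln V - sum P ln P    (the SMI route)
dS_2 = -N * np.log(M) - (P * np.log(P)).sum()
```

The two routes agree to machine precision, because substituting $\ln P = -U/T - \ln Z$ into $-\sum P \ln P$ reproduces the derivative expression identically.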



## Mathematics | Information Theory | COMP2610


## Third Step: Combining the SMI for the Location and Momentum of a Particle in a $1D$ System. Addition of a Correction Due to Uncertainty

If the location and the momentum (or velocity) of the particles were independent events, then the joint SMI of location and momentum would be the sum of the two SMIs in Eqs. (2.4) and (2.12). Therefore, for this case we write:
\begin{aligned} H_{\max }(\text { location and momentum }) & =H_{\max }(\text { location })+H_{\max }(\text { momentum }) \\ & =\log \left[\frac{L \sqrt{2 \pi e m k_B T}}{h_x h_p}\right] \end{aligned}
It should be noted that in the very writing of Eq. (2.14), the assumption is made that the location and the momentum of the particle are independent. However, quantum mechanics imposes a restriction on the accuracy with which we can determine both the location $x$ and the corresponding momentum $p_x$. Originally, the two quantities $h_x$ and $h_p$ that we defined above were introduced because we did not care to determine the location and the momentum with an accuracy better than $h_x$ and $h_p$, respectively. Now, we must acknowledge that quantum mechanics imposes upon us the uncertainty condition on the accuracy with which we can determine simultaneously both the location and the corresponding momentum of a particle. This means that in Eq. (2.14), $h_x$ and $h_p$ cannot both be arbitrarily small; their product must be of the order of the Planck constant $h=6.626 \times 10^{-34} \mathrm{~Js}$. Therefore, we introduce a new parameter $h$, which replaces the product:
$$h_x h_p \approx h$$
Accordingly, we modify Eq. (2.14) to:
$$H_{\max }(\text { location and momentum })=\log \left[\frac{L \sqrt{2 \pi e m k_B T}}{h}\right]$$

## The SMI of One Particle in a Box of Volume $\mathrm{V}$

Figure 2.3 shows a single particle in a cubic box of volume $V$.
To proceed from the 1D to the 3D system, we assume that the locations of the particle along the three axes $x, y$, and $z$ are independent. With this assumption, we can write the SMI of the location of the particle in a cube of edge $L$ as a sum of the SMIs along $x, y$, and $z$, i.e.
$$H_{\max }(\text { location in } 3 \mathrm{D})=3 H_{\max }(\text { location in 1D })$$
We can do the same for the momentum of the particle if we assume that the momentum (or the velocity) along the three axes $x, y$ and $z$ are independent. Hence, we can write the SMI of the momentum as:
$$H_{\max }(\text { momentum in } 3 \mathrm{D})=3 H_{\max }(\text { momentum in 1D })$$
We can now combine the SMI of the locations and momenta of one particle in a box of volume $V$, taking into account the uncertainty principle, to obtain the result:
$$H_{\max }(\text { location and momentum in } 3 \mathrm{D})=3 \log \left[\frac{L \sqrt{2 \pi e m k_B T}}{h}\right]$$
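Putting in numbers makes the scale of this quantity concrete. The Python sketch below is an illustration, not from the text; the particle mass, temperature, and box edge are assumed values, roughly an argon atom in a 10 cm box at 298 K:

```python
import math

# physical constants (SI)
k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J*s

def smi_one_particle_bits(m, T, L):
    """Maximum SMI (in bits) of the location and momentum of one classical
    particle in a cubic box of edge L, with h set by the uncertainty principle."""
    one_dim = math.log2(L * math.sqrt(2 * math.pi * math.e * m * k_B * T) / h)
    return 3 * one_dim  # the three axes are assumed independent

# illustrative: ~argon atom (m = 6.63e-26 kg), box edge L = 0.1 m, T = 298 K
bits = smi_one_particle_bits(6.63e-26, 298.0, 0.1)
```

The result is on the order of a hundred bits, reflecting the enormous number of distinguishable location-momentum cells of size $h$ available to a single atom in a macroscopic box.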



## Mathematics | Information Theory | ELEN90030


## A coin hidden in one of eight boxes

Bob placed a coin in one of eight boxes, Fig. 1.41. Bob tells Linda that the box in which the coin lies was chosen at random, i.e. with equal probability of $1 / 8$. To eliminate any traces of subjectivity, a random integer between one and eight was chosen, and the coin was placed in the box with that number. Linda was also told that there are exactly eight boxes, and that the coin is in one of them. Linda does not know where the coin is, and she has to ask binary questions in order to find out where it is.
I tell you, the reader, that the SMI for this game is:
$$\text { SMI(coin in eight boxes) }=\log _2 8$$
I also tell you that this number may be interpreted as a measure of information associated with the distribution $\left(\frac{1}{8}, \frac{1}{8}, \cdots, \frac{1}{8}\right)$ in the following sense: If you know only the distribution, you can find out the missing information on where the coin is, by asking binary questions, and if you are smart enough you are guaranteed to obtain this information with just three questions.
Now, pause and answer the following questions:
(i) Is the SMI for this game a subjective quantity?
(ii) Does the SMI for this game depend on who plays the game?
(iii) Does Bob calculate a different SMI for this game than Linda?

The answer to each of these three questions is No! This seems strange to someone who has not read carefully the description and rules of the game. In this description, we used the word “information” that Bob knows but Linda doesn’t. We also used the word “smart,” which might suggest to some that if the person who plays the game is not smart, he or she might calculate a different SMI for this game. All these “words” do not change the fact that the number $\log _2 8=3$ is not a subjective number. In the description of the game I told you that Bob placed the coin in one of the boxes, so he must know the information on the location of the coin, while Linda doesn’t. However, when I ask you about the SMI that Bob will calculate for this game, the answer is $\log _2 8=3$, independently of what Bob knows or doesn’t know. When Bob plays the game, it means that all he knows is that there are eight equally probable possibilities. With that information he still has to ask three questions.
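The halving strategy behind the “three questions” claim can be sketched in a few lines of Python (an illustration, not from the text): each question asks whether the coin is in the lower half of the remaining candidate boxes, so eight equally probable boxes always require exactly $\log_2 8 = 3$ questions, wherever the coin happens to be:

```python
import math

def questions_needed(n_boxes, coin):
    """Halving strategy: each binary question asks whether the coin lies in
    the lower half of the remaining candidate boxes; returns the question count."""
    lo, hi, q = 0, n_boxes, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        q += 1
        if coin < mid:          # "Is the coin in boxes lo .. mid-1?"
            hi = mid
        else:
            lo = mid
    return q

smi = math.log2(8)                                   # SMI of the uniform distribution: 3 bits
counts = [questions_needed(8, c) for c in range(8)]  # 3 questions for every coin position
```

Note that the count is the same for all eight positions, which is exactly the sense in which the SMI does not depend on who plays the game or on where the coin actually is.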

## A dart hit a board divided into eight regions of unequal areas

This game is a little more difficult since it involves a non-uniform distribution.
It is known that a dart was thrown at a board of unit area. The board is divided into eight regions with areas $p_1, p_2, \cdots, p_8$. It is also known that the dart is in one of those regions, and the probability of its being in any one of those regions is proportional to the ratio of the area of that region to the total area of the board (which was chosen as unity). Thus, we know that:
$$\sum p_i=1$$
And we define the SMI for this distribution as:
$$\text { SMI(dart on eight regions) }=-\sum p_i \log p_i$$
The sum is over all $i=1,2, \ldots, 8$. Now, we play the same game as before. Bob threw the dart, and Linda has to ask binary questions in order to find out where the dart is.

Read questions (i) to (iii) asked in connection with the previous game and answer them. Again, the answers to all those questions are No! Clearly, if the distribution is not uniform, the average number of questions one needs to ask in order to obtain the missing information is smaller than $\log _2 8$. This was proven in Chap. 2 of Ben-Naim [1]. However, whatever the distribution is, it determines the value of the SMI as defined in Eq. (1.49), and this value is independent of who plays the game, who knows or does not know where the dart is, and whether or not the game is played at all. The value of the SMI is determined once you are given the distribution, and this number has no element of subjectivity. The game we built upon this distribution, and the identification of specific persons involved in this game, are parts of the interpretation of the SMI; they do not affect the value of the SMI.
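This can be made concrete with a short Python sketch (the eight area values below are assumed for illustration): the SMI of a non-uniform distribution is below $\log_2 8 = 3$ bits, and an optimal questioning strategy, equivalent to Huffman coding, achieves an average number of questions between the SMI and the SMI plus one:

```python
import heapq
import math

def huffman_avg_questions(p):
    """Expected number of binary questions under an optimal (Huffman) strategy.
    The sum of the merged node probabilities equals the expected codeword length."""
    heap = [(pi, i) for i, pi in enumerate(p)]   # (probability, tie-breaking id)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, _ = heapq.heappop(heap)
        b, _ = heapq.heappop(heap)
        total += a + b                           # each merge adds one question-level
        heapq.heappush(heap, (a + b, len(p) + len(heap)))
    return total

p = [0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]   # assumed illustrative areas
smi = -sum(pi * math.log2(pi) for pi in p)
avg = huffman_avg_questions(p)
```

For these areas the SMI is about 2.71 bits and the optimal average is 2.75 questions, both below the 3 questions needed in the uniform case.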



## Mathematics | Information Theory | CSYS5030


## Three Coins with Magnets

The first example is an extension of the example discussed in Sect. 1.4.1. Instead of two coins, we have three coins, each having a magnet, or a spin, at its center, so there are interactions between the magnets. The centers of the three coins form an equilateral triangle with edge $R$. We assume that the interaction energy between the three magnets has the form:
$$U\left(x_1, x_2, x_3\right)=\left(x_1 x_2+x_1 x_3+x_2 x_3\right) / R$$
where each $x_i$ can have the values 1 and $-1$, corresponding to the states of the magnet: “up” and “down,” respectively. Clearly, whenever two magnets have the same sign, their pair contributes a positive interaction energy, and when they have different signs the contribution is negative. All the probabilities in this system are derived from the equation:

$$P\left(x_1, x_2, x_3\right)=\frac{\exp \left[-U\left(x_1, x_2, x_3\right)\right]}{\sum_{x_1, x_2, x_3} \exp \left[-U\left(x_1, x_2, x_3\right)\right]}$$
Note that we use here the Boltzmann distribution, with $k_B T=1$. This is very similar to the three-spin system we have discussed in Chap. 4 of Ben-Naim [1], and also in Chap. 3 of this book. The only difference is that here we are not interested in the temperature dependence of the various quantities, but only in the extent of the interaction, hence the extent of dependence between the spins, which varies with the distance $R$.

There are altogether eight possible configurations of the three spins, as shown in Fig. 1.11. For any finite distance $R$, the two configurations in which all three spins are aligned (all “up” or all “down”) have interaction energy $3/R$, while the six mixed configurations (one “up” and two “down,” or one “down” and two “up”) have interaction energy $-1/R$ and hence higher probability.

In Fig. 1.12b, we show the pair-probabilities for this system as a function of the distance $R$. One should compare this figure with Fig. 1.5, which is reproduced in Fig. 1.12a, for two coins. Note that $P(1,1)$ in Fig. 1.12b is the probability of finding a pair of coins in the state “up-up” in the presence of the third coin (we sometimes denote this probability by $P(1,1,\ast)$, which means “up-up-unspecified”).
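These pair-probabilities are easy to reproduce numerically. The short Python sketch below (an illustration, not from the text) sums the Boltzmann weights over the unspecified third spin; as $R$ grows the interaction vanishes and the pair-probability approaches the independent-spin value $1/4$:

```python
import itertools
import math

def pair_prob_up_up(R):
    """P(1,1,*): probability that spins 1 and 2 are both 'up', third unspecified,
    for U = (x1*x2 + x1*x3 + x2*x3)/R with k_B*T = 1."""
    configs = list(itertools.product([1, -1], repeat=3))
    w = {c: math.exp(-(c[0]*c[1] + c[0]*c[2] + c[1]*c[2]) / R) for c in configs}
    Z = sum(w.values())   # normalization: sum of Boltzmann weights
    return sum(w[c] for c in configs if c[0] == 1 and c[1] == 1) / Z

# short distance: strong interaction; large distance: nearly independent spins
p_close, p_far = pair_prob_up_up(1.0), pair_prob_up_up(100.0)
```

At $R = 1$ the up-up pair-probability falls well below $1/4$ (with this sign of the interaction, aligned pairs are energetically penalized), while at $R = 100$ it is within about one percent of $1/4$.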

## Three Regions on a Board

In the example of Sect. 1.5.1 we had three coins, or three spins, each of which could be in one of two states, “up” or “down.” We saw that there is no way of representing either the SMI or the MI in a Venn diagram.

In the next example we replace the three coins by three regions on a board. We throw a dart on the board of unit area. We know that the dart hit the board. The events are: “the dart is in region A” (or B, or C). We shall treat this system in two languages. First, as events having probabilities and represented in a Venn diagram. Second, as random variables, having SMIs and MIs which cannot be represented by a Venn diagram.
The system discussed in this section is shown in Fig. 1.17.
It is an extension of the system discussed in Sect. 1.4.2. Instead of two overlapping regions, we have here three overlapping regions, which overlap only in pairs, not in triplets. We assume that a dart was thrown at a board of unit area. Each of the regions A, B, and $\mathrm{C}$ has the same area, chosen as $q=0.1$; hence, the probability of finding the dart in any one of these regions is 0.1.

We denote by $d$ the overlapping area between A and B, and between A and C. We denote by $x$ the overlapping area between B and C. We start by listing the triplet probabilities, which can be read from Fig. 1.17. These are:
\begin{aligned} & P(1,1,1)=0 \text { (no triple overlapping) } \\ & P(0,0,0)=1-3 q+2 d+x \end{aligned}
(this is the area of the whole board minus the area of $A \cup B \cup C$ )
\begin{aligned} & P(1,0,0)=q-2 d \\ & P(0,1,0)=q-d-x \\ & P(0,0,1)=q-d-x \\ & P(1,1,0)=d \\ & P(1,0,1)=d \\ & P(0,1,1)=x \end{aligned}
Clearly, the sum of all these is one:
$$\sum_{x_1, x_2, x_3} P\left(x_1, x_2, x_3\right)=1$$
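The bookkeeping above is easy to verify in code. The following Python sketch (with assumed illustrative values for the overlaps $d$ and $x$; $q = 0.1$ as in the text) checks that the eight triplet probabilities sum to one and that the marginal probability of region A comes out as $q$:

```python
def triplet_probs(q=0.1, d=0.02, x=0.03):
    """Triplet probabilities for the three-region board (pairwise overlaps only,
    no triple overlap). The values of d and x are assumed, for illustration."""
    return {
        (1, 1, 1): 0.0,                  # no triple overlapping
        (0, 0, 0): 1 - 3*q + 2*d + x,    # board minus the union of A, B, C
        (1, 0, 0): q - 2*d,
        (0, 1, 0): q - d - x,
        (0, 0, 1): q - d - x,
        (1, 1, 0): d,
        (1, 0, 1): d,
        (0, 1, 1): x,
    }

P = triplet_probs()
total = sum(P.values())
# marginal of region A: summing over the states of B and C should give q
pA = sum(v for k, v in P.items() if k[0] == 1)
```

The same check works for the marginals of B and C, which equal $q$ as well by the symmetry of the construction.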



## Mathematics | Information Theory | TELE9754


## $p_i$ is not a measure of information, and $-\log p_i$ is not measured in bits

In numerous textbooks on IT, as well as in popular science books, one can find a description of $-\log p_i$ as a measure of the information associated with the event $i$; hence, the SMI $=-\sum p_i \log p_i$ is interpreted as an average information. This misinterpretation of the SMI is discussed further in Ben-Naim [1]. Here, we focus only on the single term $-\log p_i$, which is sometimes referred to as “self-information,” or the amount of information you get when you know that the event $i$ occurred. Some even assign to this term a value in units of bits.
Here is how “self-information” is introduced in Wikipedia:
Definition: Claude Shannon’s definition of self-information was chosen to meet several axioms:

If two independent events are measured separately, the total amount of information is the sum of the self-information of the individual events…given an event $\mathrm{x}$ with probability $\mathrm{P}$, the information content is defined as follows:
$$I_X(x)=-\log \left(P_X(x)\right)$$
This whole quotation is not only untrue; it is misleading as well. First of all, Shannon never defined self-information (neither in the original article, Shannon [2], nor in Shannon and Weaver [4]), and, of course, it was never chosen to meet “several axioms.”

Shannon searched for a measure of information based on the whole distribution, not on a single event. His conditions (as in Shannon [2]: “it is reasonable to require of it the following properties”) were entirely different from the conditions or requirements stated in the abovementioned quotation.

If an event with probability 1 occurs, it is not surprising; it is very much expected. But it is not true that it yields no information. When I hear that an event $x$ with probability $100 \%$ occurred, I obtained the information that “$x$ occurred.”

If an event with lower probability occurred, I am more surprised. This is true. But it is not true that I obtained more information!
Suppose that we have four dice with different probability distributions, say
\begin{aligned} & \text { die } \mathrm{A}: p_1=1, p_2=p_3=p_4=p_5=p_6=0 \\ & \text { die } \mathrm{B}: p_1=0.9, p_2=0.1, p_3=p_4=p_5=p_6=0 \\ & \text { die } \mathrm{C}: p_1=0.8, p_2=0.2, p_3=p_4=p_5=p_6=0 \\ & \text { die } \mathrm{D}: p_1=0.7, p_2=0.3, p_3=p_4=p_5=p_6=0 \end{aligned}
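To see what the SMI (as opposed to the single term $-\log p_i$) says about these four dice, here is a minimal Python sketch (the helper name `smi` is ours) computing $-\sum p_i \log_2 p_i$ for each distribution:

```python
import math

# Compute the SMI (Shannon measure of information) of each die, in bits.
def smi(probs):
    """Return -sum p*log2(p), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

dice = {
    "A": [1.0, 0, 0, 0, 0, 0],
    "B": [0.9, 0.1, 0, 0, 0, 0],
    "C": [0.8, 0.2, 0, 0, 0, 0],
    "D": [0.7, 0.3, 0, 0, 0, 0],
}
for name, p in dice.items():
    print(f"die {name}: SMI = {smi(p):.4f} bits")
```

The SMI is 0 for die A (the outcome is certain) and increases from B to D; it is a property of the whole distribution, not of any single event.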

## 数学代写|信息论代写information theory代考|SMI is not a probability

At the beginning of this section we claimed that probability, in general, may not be interpreted as SMI. It is true that in the special case when all $p_i=p_0=\frac{1}{n}$, the quantity $-\log p_0$ may be interpreted as an SMI. However, in general $-\log p_i$ is not an SMI. From this particular case, one cannot conclude that SMI is, in general, a probability.
The association of SMI with probability is probably due to Brillouin [6]. On page 120 of his book “Science and Information Theory,” we find:
The probability has a natural tendency to increase, and so does entropy. The exact relation is given by the famous Boltzmann-Planck formula:
$$S=k \ln P$$
It is difficult to overestimate the amount of misinformation that is packed in these two sentences. Probability has no natural tendency to increase! Probability does not behave as entropy! There is no exact relationship between entropy and probability! The quoted formula is not the Boltzmann-Planck formula.

The correct Boltzmann-Planck relationship for the entropy is $S=k \ln W$, where $W$ is the total number of accessible microstates of the system. This relationship is a special case of the SMI for the case when all the events have equal probabilities. As we showed above, in general, probability is not SMI (except when $p_i=p_0=\frac{1}{n}$).

Here, we claim that entropy (being a special case of SMI) is never related to probability by an equation $S=k \ln P$.

The simplest reason for my claim is that probability is a number between 0 and 1. Therefore, $\ln P$ varies from minus infinity to 0. Entropy, as well as the SMI, is always greater than or equal to 0. More on this in Ben-Naim [7].
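The sign argument can be checked numerically (a small sketch; the function name `smi` is ours):

```python
import math

# Numerical check of the sign argument: ln P is never positive for a
# probability P in (0, 1], while the SMI of any distribution is never negative.
def smi(probs):
    """SMI in bits: -sum p*log2(p), skipping zero-probability events."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

for P in (0.001, 0.25, 0.5, 1.0):
    assert math.log(P) <= 0          # ln P ranges from minus infinity up to 0

for dist in ([1.0], [0.5, 0.5], [0.9, 0.1], [0.25] * 4):
    assert smi(dist) >= 0            # SMI is >= 0 for every distribution
print("ln P <= 0 on (0, 1]; SMI >= 0 always")
```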

## 有限元方法代写

statistics-lab作为专业的留学生服务机构，多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务，包括但不限于Essay代写，Assignment代写，Dissertation代写，Report代写，小组作业代写，Proposal代写，Paper代写，Presentation代写，计算机作业代写，论文修改和润色，网课代做，exam代考等等。写作范围涵盖高中，本科，研究生等海外留学全阶段，辐射金融，经济学，会计学，审计学，管理学等全球99%专业科目。写作团队既有专业英语母语作者，也有海外名校硕博留学生，每位写作老师都拥有过硬的语言能力，专业的学科背景和学术写作经验。我们承诺100%原创，100%专业，100%准时，100%满意。

## 数学代写|信息论代写information theory代考|ECET602

## 数学代写|信息论代写information theory代考|KOLMOGOROV SUFFICIENT STATISTIC

Suppose that we are given a sample sequence from a $\operatorname{Bernoulli}(\theta)$ process. What are the regularities or deviations from randomness in this sequence? One way to address the question is to find the Kolmogorov complexity $K\left(x^n \mid n\right)$, which we discover to be roughly $n H_0(\theta)+\log n+c$. Since, for $\theta \neq \frac{1}{2}$, this is much less than $n$, we conclude that $x^n$ has structure and is not randomly drawn from a $\operatorname{Bernoulli}\left(\frac{1}{2}\right)$ distribution. But what is the structure? The first attempt to find the structure is to investigate the shortest program $p^*$ for $x^n$. But the shortest description of $p^*$ is about as long as $p^*$ itself; otherwise, we could further compress the description of $x^n$, contradicting the minimality of $p^*$. So this attempt is fruitless.

A hint at a good approach comes from an examination of the way in which $p^*$ describes $x^n$. The program “The sequence has $k$ 1’s; of such sequences, it is the $i$th” is optimal to first order for $\operatorname{Bernoulli}(\theta)$ sequences. We note that it is a two-stage description, and all of the structure of the sequence is captured in the first stage. Moreover, $x^n$ is maximally complex given the first stage of the description. The first stage, the description of $k$, requires $\log (n+1)$ bits and defines a set $S=\left\{x \in\{0,1\}^n: \sum x_i=k\right\}$. The second stage requires $\log |S|=\log \binom{n}{k} \approx n H_0\left(\bar{x}_n\right) \approx n H_0(\theta)$ bits and reveals nothing extraordinary about $x^n$.

We mimic this process for general sequences by looking for a simple set $S$ that contains $x^n$. We then follow it with a brute-force description of $x^n$ in $S$ using $\log |S|$ bits. We begin with a definition of the smallest set containing $x^n$ that is describable in no more than $k$ bits.
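The two-stage count described above can be sketched numerically (a hypothetical example; the function name `two_stage_length` is ours):

```python
import math

# Sketch of the two-stage description: stage 1 gives k in log(n+1) bits,
# stage 2 indexes x^n within S = {x in {0,1}^n : sum x_i = k} in log C(n,k) bits.
def two_stage_length(n, k):
    first = math.log2(n + 1)                # describe k
    second = math.log2(math.comb(n, k))     # index within S
    return first + second

n, k = 1000, 300                            # e.g. a Bernoulli(0.3)-like sequence
theta = k / n
nH0 = n * (-theta * math.log2(theta) - (1 - theta) * math.log2(1 - theta))
print(f"two-stage length: {two_stage_length(n, k):.1f} bits")
print(f"n * H0(k/n):      {nH0:.1f} bits")  # log|S| is close to n*H0
```

For $n = 1000$ and $k = 300$ the second-stage term $\log \binom{1000}{300}$ is within a few bits of $n H_0(0.3)$, consistent with the approximation above.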

Definition The Kolmogorov structure function $K_k\left(x^n \mid n\right)$ of a binary string $x^n \in\{0,1\}^n$ is defined as
$$K_k\left(x^n \mid n\right)=\min _{\substack{p: l(p) \leq k \\ \mathcal{U}(p, n)=S \\ x^n \in S}} \log |S| .$$

The set $S$ is the smallest set that can be described with no more than $k$ bits and which includes $x^n$. By $\mathcal{U}(p, n)=S$, we mean that running the program $p$ with data $n$ on the universal computer $\mathcal{U}$ will print out the indicator function of the set $S$.

## 数学代写|信息论代写information theory代考|MINIMUM DESCRIPTION LENGTH PRINCIPLE

A natural extension of Occam’s razor occurs when we need to describe data drawn from an unknown distribution. Let $X_1, X_2, \ldots, X_n$ be drawn i.i.d. according to probability mass function $p(x)$. We assume that we do not know $p(x)$, but know that $p(x) \in \mathcal{P}$, a class of probability mass functions. Given the data, we can estimate the probability mass function in $\mathcal{P}$ that best fits the data. For simple classes $\mathcal{P}$ (e.g., if $\mathcal{P}$ has only finitely many distributions), the problem is straightforward, and the maximum likelihood procedure [i.e., find $\hat{p} \in \mathcal{P}$ that maximizes $\hat{p}\left(X_1, X_2, \ldots, X_n\right)$ ] works well. However, if the class $\mathcal{P}$ is rich enough, there is a problem of overfitting the data. For example, if $X_1, X_2, \ldots, X_n$ are continuous random variables, and if $\mathcal{P}$ is the set of all probability distributions, the maximum likelihood estimator given $X_1, X_2, \ldots, X_n$ is a distribution that places a single mass point of weight $\frac{1}{n}$ at each observed value. Clearly, this estimate is too closely tied to actual observed data and does not capture any of the structure of the underlying distribution.

To get around this problem, various methods have been applied. In the simplest case, the data are assumed to come from some parametric distribution (e.g., the normal distribution), and the parameters of the distribution are estimated from the data. To validate this method, the data should be tested to check whether the distribution “looks” normal, and if the data pass the test, we could use this description of the data. A more general procedure is to take the maximum likelihood estimate and smooth it out to obtain a smooth density. With enough data, and appropriate smoothness conditions, it is possible to make good estimates of the original density. This process is called kernel density estimation.
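As a rough illustration of the smoothing idea, here is a minimal hand-rolled Gaussian kernel density estimate (a sketch only; the bandwidth value is an arbitrary choice):

```python
import math
import random

# Minimal Gaussian kernel density estimate: smooth the empirical point
# masses at the observations into a continuous density.
def kde(samples, x, bandwidth):
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
# The estimate at 0 should be near the true N(0,1) density, 1/sqrt(2*pi).
print(f"kde(0) = {kde(data, 0.0, bandwidth=0.3):.3f}")
```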

However, the theory of Kolmogorov complexity (or the Kolmogorov sufficient statistic) suggests a different procedure: Find the $p \in \mathcal{P}$ that minimizes
$$L_p\left(X_1, X_2, \ldots, X_n\right)=K(p)+\log \frac{1}{p\left(X_1, X_2, \ldots, X_n\right)} .$$
This is the length of a two-stage description of the data, where we first describe the distribution $p$ and then, given the distribution, construct the Shannon code and describe the data using $\log \frac{1}{p\left(X_1, X_2, \ldots, X_n\right)}$ bits. This procedure is a special case of what is termed the minimum description length (MDL) principle: Given data and a choice of models, choose the model such that the description of the model plus the conditional description of the data is as short as possible.
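Since $K(p)$ is not computable, any concrete use of this principle must approximate it. The following toy sketch selects over a small finite class $\mathcal{P}$ of Bernoulli models, replacing $K(p)$ by the bits needed to name the model in the class (all names and values here are hypothetical):

```python
import math

# Toy MDL selection over a small class P of Bernoulli(theta) models.
# K(p) is approximated by the bits needed to index the model in the class
# (a crude stand-in for Kolmogorov complexity, which is not computable).
models = [0.1, 0.3, 0.5, 0.7, 0.9]          # candidate Bernoulli parameters
data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]       # observed bits (7 ones, 3 zeros)

def description_length(theta, xs):
    k_p = math.log2(len(models))             # bits to name the model
    neg_log_lik = -sum(math.log2(theta if x else 1 - theta) for x in xs)
    return k_p + neg_log_lik                 # model bits + Shannon-code bits

best = min(models, key=lambda t: description_length(t, data))
print(f"MDL picks theta = {best}")           # -> 0.7, matching the 7/10 ones
```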

## 数学代写|信息论代写information theory代考|ELEN90030

## 数学代写|信息论代写information theory代考|THE HALTING PROBLEM AND THE NONCOMPUTABILITY OF KOLMOGOROV COMPLEXITY

Consider the following paradox:

This statement is false.

This paradox is sometimes stated in a two-statement form:

The next statement is false.
The preceding statement is true.

These paradoxes are versions of what is called the Epimenides liar paradox, and they illustrate the pitfalls involved in self-reference. In 1931, Gödel used this idea of self-reference to show that any interesting system of mathematics is incomplete; there are statements in the system that are true but that cannot be proved within the system. To accomplish this, he translated theorems and proofs into integers and constructed a statement of the above form, which therefore cannot be proved true or false.

The halting problem in computer science is very closely connected with Gödel’s incompleteness theorem. In essence, it states that for any computational model, there is no general algorithm to decide whether a program will halt or not (go on forever). Note that it is not a statement about any specific program. Quite clearly, there are many programs that can easily be shown to halt or go on forever. The halting problem says that we cannot answer this question for all programs. The reason for this is again the idea of self-reference.

To a practical person, the halting problem may not be of any immediate significance, but it has great theoretical importance as the dividing line between things that can be done on a computer (given unbounded memory and time) and things that cannot be done at all (such as proving all true statements in number theory). Gödel’s incompleteness theorem is one of the most important mathematical results of the twentieth century, and its consequences are still being explored. The halting problem is an essential example of Gödel’s incompleteness theorem.

One of the consequences of the nonexistence of an algorithm for the halting problem is the noncomputability of Kolmogorov complexity. The only way to find the shortest program in general is to try all short programs and see which of them can do the job. However, at any time some of the short programs may not have halted and there is no effective (finite mechanical) way to tell whether or not they will halt and what they will print out. Hence, there is no effective way to find the shortest program to print a given string.

## 数学代写|信息论代写information theory代考|UNIVERSAL GAMBLING

Suppose that a gambler is asked to gamble sequentially on sequences $x \in\{0,1\}^*$. He has no idea of the origin of the sequence. He is given fair odds (2-for-1) on each bit. How should he gamble? If he knew the distribution of the elements of the string, he might use proportional betting because of its optimal growth-rate properties, as shown in Chapter 6. If he believes that the string occurred naturally, it seems intuitive that simpler strings are more likely than complex ones. Hence, if he were to extend the idea of proportional betting, he might bet according to the universal probability of the string. For reference, note that if the gambler knows the string $x$ in advance, he can increase his wealth by a factor of $2^{l(x)}$ simply by betting all his wealth each time on the next symbol of $x$. Let the wealth $S(x)$ associated with betting scheme $b(x), \sum b(x)=1$, be given by
$$S(x)=2^{l(x)} b(x) .$$
Suppose that the gambler bets $b(x)=2^{-K(x)}$ on a string $x$. This betting strategy can be called universal gambling. We note that the sum of the bets
$$\sum_x b(x)=\sum_x 2^{-K(x)} \leq \sum_{p: p \text { halts }} 2^{-l(p)}=\Omega \leq 1,$$
and he will not have used all his money. For simplicity, let us assume that he throws the rest away. For example, the amount of wealth resulting from a bet $b(0110)$ on a sequence $x=0110$ is $2^{l(x)} b(x)=2^4 b(0110)$ plus the amount won on all bets $b(0110 \ldots)$ on sequences that extend $x$.
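A toy numerical check of the scheme, with made-up values standing in for the (uncomputable) $K(x)$:

```python
# Toy check of S(x) = 2^{l(x)} b(x) with hypothetical complexity values K(x)
# standing in for the true Kolmogorov complexity.
K = {"0000": 3, "0101": 5, "0110": 7, "1011": 8}   # made-up K(x) values
bets = {x: 2.0 ** -k for x, k in K.items()}        # b(x) = 2^{-K(x)}

assert sum(bets.values()) <= 1        # the gambler does not use all his money
for x, b in bets.items():
    wealth = 2 ** len(x) * b          # S(x) = 2^{l(x)} b(x)
    print(f"x={x}: b(x)={b:.5f}, S(x)={wealth:.3f}")
```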

## 数学代写|信息论代写information theory代考|CPSC530

## 数学代写|信息论代写information theory代考|MODELS OF COMPUTATION

To formalize the notions of algorithmic complexity, we first discuss acceptable models for computers. All but the most trivial computers are universal, in the sense that they can mimic the actions of other computers.

We touch briefly on a certain canonical universal computer, the universal Turing machine, the conceptually simplest universal computer.

In 1936, Turing was obsessed with the question of whether the thoughts in a living brain could be held equally well by a collection of inanimate parts. In short, could a machine think? By analyzing the human computational process, he posited some constraints on such a computer. Apparently, a human thinks, writes, thinks some more, writes, and so on. Consider a computer as a finite-state machine operating on a finite symbol set. (The symbols in an infinite symbol set cannot be distinguished in finite space.) A program tape, on which a binary program is written, is fed left to right into this finite-state machine. At each unit of time, the machine inspects the program tape, writes some symbols on a work tape, changes its state according to its transition table, and calls for more program. The operations of such a machine can be described by a finite list of transitions. Turing argued that this machine could mimic the computational ability of a human being.

After Turing’s work, it turned out that every new computational system could be reduced to a Turing machine, and conversely. In particular, the familiar digital computer with its CPU, memory, and input/output devices could be simulated by and could simulate a Turing machine. This led Church to state what is now known as Church’s thesis: all (sufficiently complex) computational models are equivalent in the sense that they can compute the same family of functions. The class of functions they can compute agrees with our intuitive notion of effectively computable functions, that is, functions for which there is a finite prescription or program that will lead, in a finite number of mechanically specified computational steps, to the desired computational result.

We shall have in mind throughout this chapter the computer illustrated in Figure 14.1. At each step of the computation, the computer reads a symbol from the input tape, changes state according to its state transition table, possibly writes something on the work tape or output tape, and moves the program read head to the next cell of the program read tape. This machine reads the program in one direction only, never going back, and therefore the programs form a prefix-free set. No program leading to a halting computation can be the prefix of another such program. The restriction to prefix-free programs leads immediately to a theory of Kolmogorov complexity that is formally analogous to information theory.
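The prefix-free property, and the Kraft-style bound it implies, can be checked mechanically (a small sketch; the helper name `is_prefix_free` is ours):

```python
# A set of programs is prefix-free if no program is a prefix of another.
# After sorting, any prefix relation appears between adjacent strings,
# so checking neighbors suffices.
def is_prefix_free(codes):
    codes = sorted(codes)
    return all(not b.startswith(a) for a, b in zip(codes, codes[1:]))

programs = ["0", "10", "110", "111"]         # a prefix-free set
assert is_prefix_free(programs)
assert not is_prefix_free(["0", "01"])       # "0" is a prefix of "01"

# Prefix-freeness implies the Kraft inequality: sum over p of 2^{-l(p)} <= 1.
assert sum(2.0 ** -len(p) for p in programs) <= 1
print("prefix-free set satisfies the Kraft inequality")
```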

We can view the Turing machine as a map from a set of finite-length binary strings to the set of finite- or infinite-length binary strings. In some cases, the computation does not halt, and in such cases the value of the function is said to be undefined. The set of functions $f:\{0,1\}^* \rightarrow\{0,1\}^* \cup\{0,1\}^{\infty}$ computable by Turing machines is called the set of partial recursive functions.

## 数学代写|信息论代写information theory代考|KOLMOGOROV COMPLEXITY: DEFINITIONS AND EXAMPLES

Let $x$ be a finite-length binary string and let $\mathcal{U}$ be a universal computer. Let $l(x)$ denote the length of the string $x$. Let $\mathcal{U}(p)$ denote the output of the computer $\mathcal{U}$ when presented with a program $p$.

We define the Kolmogorov (or algorithmic) complexity of a string $x$ as the minimal description length of $x$.

Definition The Kolmogorov complexity $K_{\mathcal{U}}(x)$ of a string $x$ with respect to a universal computer $\mathcal{U}$ is defined as
$$K_{\mathcal{U}}(x)=\min _{p: \mathcal{U}(p)=x} l(p),$$ the minimum length over all programs that print $x$ and halt. Thus, $K_{\mathcal{U}}(x)$ is the shortest description length of $x$ over all descriptions interpreted by computer $\mathcal{U}$.

A useful technique for thinking about Kolmogorov complexity is the following: if one person can describe a sequence to another person in such a manner as to lead unambiguously to a computation of that sequence in a finite amount of time, the number of bits in that communication is an upper bound on the Kolmogorov complexity. For example, one can say “Print out the first 1,239,875,981,825,931 bits of the square root of $e$.” Allowing 8 bits per character (ASCII), we see that the unambiguous 73-symbol program above demonstrates that the Kolmogorov complexity of this huge number is no greater than $(8)(73)=584$ bits. Most numbers of this length (more than a quadrillion bits) have a Kolmogorov complexity of nearly $1,239,875,981,825,931$ bits. The fact that there is a simple algorithm to calculate the square root of $e$ provides the saving in descriptive complexity.
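The arithmetic of this bound is easy to check (the exact 73-symbol phrasing is the book’s; the string below is a close approximation of it):

```python
# Checking the arithmetic of the description-length bound. The exact
# 73-symbol phrasing is the book's; this string approximates it.
program = "Print out the first 1,239,875,981,825,931 bits of the square root of e."
bits = 8 * len(program)               # 8 bits per ASCII character
assert 8 * 73 == 584                  # the bound quoted in the text
print(f"{len(program)} characters -> {bits} bits (within the 584-bit bound)")
```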

In the definition above, we have not mentioned anything about the length of $x$. If we assume that the computer already knows the length of $x$, we can define the conditional Kolmogorov complexity knowing $l(x)$ as
$$K_{\mathcal{U}}(x \mid l(x))=\min _{p: \mathcal{U}(p, l(x))=x} l(p) .$$
