### Logical Mutual Information


## Logical Mutual Information

Intuitively, the mutual logical information $m(X, Y)$ in the joint distribution $\{p(x, y)\}$ would be the probability that a sampled pair of pairs $(x, y)$ and $\left(x^{\prime}, y^{\prime}\right)$ would be distinguished in both coordinates, i.e., a distinction $x \neq x^{\prime}$ of $p(x)$ and a distinction $y \neq y^{\prime}$ of $p(y)$. In terms of subsets, the subset for the mutual information is the intersection of the infosets for $X$ and $Y$:
$$S_{X \wedge Y}=S_{X} \cap S_{Y} \text { so } m(X, Y)=\mu\left(S_{X \wedge Y}\right)=\mu\left(S_{X} \cap S_{Y}\right) \text {. }$$
In terms of disjoint unions of subsets:
$$S_{X \vee Y}=S_{X \wedge \neg Y} \uplus S_{Y \wedge \neg X} \uplus S_{X \wedge Y}$$
so
$$\begin{gathered} h(X, Y)=\mu\left(S_{X \vee Y}\right)=\mu\left(S_{X \wedge \neg Y}\right)+\mu\left(S_{Y \wedge \neg X}\right)+\mu\left(S_{X \wedge Y}\right) \\ =h(X \mid Y)+h(Y \mid X)+m(X, Y) \end{gathered}$$
or:
$$m(X, Y)=h(X)+h(Y)-h(X, Y)$$
as illustrated in Fig. 3.3.
Expanding $m(X, Y)=h(X)+h(Y)-h(X, Y)$ in terms of probability averages gives:
$$m(X, Y)=\sum_{x, y} p(x, y)[[1-p(x)]+[1-p(y)]-[1-p(x, y)]]$$
Logical mutual information in a joint probability distribution.
Since $S_{Y}=S_{Y \wedge \neg X} \cup S_{Y \wedge X}=\left(S_{Y}-S_{X}\right) \cup\left(S_{Y} \cap S_{X}\right)$ and the union is disjoint, we have the formula:
$$h(Y)=h(Y \mid X)+m(X, Y)$$

which can be taken as the basis for a logical analysis of variation (ANOVA) for categorical data. The total variation in $Y$, $h(Y)$, is equal to the variation in $Y$ “within” $X$ (i.e., with no variation in $X$), $h(Y \mid X)$, plus the variation “between” $Y$ and $X$ (i.e., variation in both $X$ and $Y$), $m(X, Y)$.
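The Venn-diagram relations above are easy to check numerically. Here is a minimal Python sketch using an illustrative $2 \times 2$ joint distribution (the probability values are assumptions chosen only for demonstration, not from the text):

```python
# Illustrative 2x2 joint distribution p(x, y); the values are
# assumptions for demonstration, not taken from the text.
p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions p(x) and p(y).
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1)}

def h(dist):
    """Logical entropy: the probability that two independent draws differ."""
    return 1 - sum(v ** 2 for v in dist.values())

h_X, h_Y, h_XY = h(px), h(py), h(p)

# Mutual logical information via the Venn formula m = h(X) + h(Y) - h(X,Y).
m_XY = h_X + h_Y - h_XY

# The same quantity as a probability average over the joint distribution.
m_direct = sum(v * ((1 - px[x]) + (1 - py[y]) - (1 - v))
               for (x, y), v in p.items())
assert abs(m_XY - m_direct) < 1e-12

# Conditional logical entropy h(Y|X) = h(X,Y) - h(X), and the
# logical "analysis of variation": h(Y) = h(Y|X) + m(X,Y).
h_Y_given_X = h_XY - h_X
assert abs(h_Y - (h_Y_given_X + m_XY)) < 1e-12
```

With these particular numbers, $h(X)=0.5$, $h(Y)=0.48$, and $h(X, Y)=0.7$, giving $m(X, Y)=0.28$.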

The Common Dits Theorem (see the Appendix) shows that two nonempty partition ditsets always intersect. The same holds for the positive supports of the basic infosets $S_{X}$ and $S_{Y}$.

## Shannon Mutual Information

Applying the dit-bit transform $1-p \rightsquigarrow \log \left(\frac{1}{p}\right)$ to the logical mutual information formula
$$m(X, Y)=\sum_{x, y} p(x, y)[[1-p(x)]+[1-p(y)]-[1-p(x, y)]]$$
expressed in terms of probability averages gives the corresponding Shannon notion:
$$\begin{gathered} I(X, Y)=\sum_{x, y} p(x, y)\left[\left[\log \left(\frac{1}{p(x)}\right)\right]+\left[\log \left(\frac{1}{p(y)}\right)\right]-\left[\log \left(\frac{1}{p(x, y)}\right)\right]\right] \\ =\sum_{x, y} p(x, y) \log \left(\frac{p(x, y)}{p(x) p(y)}\right) \end{gathered}$$
Shannon mutual information in a joint probability distribution.
Since the dit-bit transform preserves sums and differences, the logical formulas carry over to give the mnemonic for the Shannon entropies (Fig. 3.5):
$$I(X, Y)=H(X)+H(Y)-H(X, Y)=H(X, Y)-H(X \mid Y)-H(Y \mid X)$$
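Both forms of the mnemonic can be verified numerically. A sketch using the same illustrative joint distribution as before (the values are assumptions, not from the text):

```python
from math import log2

# Illustrative joint distribution (demonstration values only).
p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1)}

def H(dist):
    """Shannon entropy in bits."""
    return sum(v * log2(1 / v) for v in dist.values() if v > 0)

# Direct formula: I(X,Y) = sum_{x,y} p(x,y) log(p(x,y) / (p(x) p(y))).
I_direct = sum(v * log2(v / (px[x] * py[y]))
               for (x, y), v in p.items() if v > 0)

# Venn mnemonic: I(X,Y) = H(X) + H(Y) - H(X,Y).
I_venn = H(px) + H(py) - H(p)
assert abs(I_direct - I_venn) < 1e-12

# Equivalently I(X,Y) = H(X,Y) - H(X|Y) - H(Y|X), where
# H(X|Y) = H(X,Y) - H(Y) and H(Y|X) = H(X,Y) - H(X).
I_alt = H(p) - (H(p) - H(py)) - (H(p) - H(px))
assert abs(I_direct - I_alt) < 1e-12
```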

## Independent Joint Distributions

A joint probability distribution $\{p(x, y)\}$ on $X \times Y$ is independent if each value is the product of the marginals: $p(x, y)=p(x) p(y)$.
For an independent distribution, the Shannon mutual information
$$I(X, Y)=\sum_{x \in X, y \in Y} p(x, y) \log \left(\frac{p(x, y)}{p(x) p(y)}\right)$$

is immediately seen to be zero (each log ratio is $\log(1)=0$), so we have:
$$H(X, Y)=H(X)+H(Y)$$
Shannon entropies for independent $\{p(x, y)\}$.
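As a quick numerical check, here is a Python sketch with an illustrative product distribution (the marginal values are assumptions for demonstration):

```python
from math import log2

# Illustrative independent joint distribution: p(x, y) = p(x) * p(y).
# The marginal values are assumptions chosen only for demonstration.
px = {0: 0.5, 1: 0.5}
py = {0: 0.4, 1: 0.6}
p = {(x, y): px[x] * py[y] for x in px for y in py}

def H(dist):
    """Shannon entropy in bits."""
    return sum(v * log2(1 / v) for v in dist.values() if v > 0)

# Every log ratio log(p(x,y)/(p(x)p(y))) = log(1) = 0, so I(X,Y) = 0.
I = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)
assert abs(I) < 1e-12

# Hence the joint Shannon entropy is additive: H(X,Y) = H(X) + H(Y).
assert abs(H(p) - (H(px) + H(py))) < 1e-12
```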
For the logical mutual information $m(X, Y)$, independence gives:
$$\begin{aligned} m(X, Y) &=\sum_{x, y} p(x, y)[1-p(x)-p(y)+p(x, y)] \\ &=\sum_{x, y} p(x) p(y)[1-p(x)-p(y)+p(x) p(y)] \\ &=\sum_{x, y} p(x)[1-p(x)]\, p(y)[1-p(y)] \\ &=\sum_{x} p(x)[1-p(x)] \sum_{y} p(y)[1-p(y)] \\ &=h(X) h(Y) \end{aligned}$$
Logical entropies for independent $\{p(x, y)\}$.
Independence means the joint probability $p(x, y)$ can always be separated into $p(x)$ times $p(y)$, and this carries over to the standard two-draw probability interpretation of logical entropy: under independence, the probability $m(X, Y)$ of getting distinctions in both $X$ and $Y$ in two draws is equal to the probability $h(X)$ of getting an $X$-distinction times the probability $h(Y)$ of getting a $Y$-distinction. Similarly, Table 3.1 shows that, under independence, the four atomic areas in the Venn diagram for the logical entropies can each be expressed as one of the four possible products of the one-variable areas $\{h(X), 1-h(X)\}$ and $\{h(Y), 1-h(Y)\}$.

The nonempty-supports-always-intersect proposition shows that $h(X) h(Y)>0$ implies $m(X, Y)>0$; thus the logical mutual information $m(X, Y)$ is still positive for independent distributions whenever $h(X) h(Y)>0$, in which case $m(X, Y)=h(X) h(Y)$. This is a striking difference between the average bit-count Shannon entropy and the dit-count logical entropy: aside from the degenerate case where $h(X) h(Y)=0$, there is always a positive probability of mutual distinctions for $X$ and $Y$.
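The contrast can be seen on the same illustrative product distribution (values are assumptions): the Shannon mutual information vanishes under independence, while the logical mutual information equals $h(X) h(Y) > 0$.

```python
# Illustrative independent joint distribution (demonstration values).
px = {0: 0.5, 1: 0.5}
py = {0: 0.4, 1: 0.6}
p = {(x, y): px[x] * py[y] for x in px for y in py}

def h(dist):
    """Logical entropy: 1 minus the sum of squared probabilities."""
    return 1 - sum(v ** 2 for v in dist.values())

# Logical mutual information via m = h(X) + h(Y) - h(X,Y).
m = h(px) + h(py) - h(p)

# Under independence, m(X,Y) = h(X) h(Y), which is strictly positive
# whenever both marginals have at least two positive-probability values.
assert abs(m - h(px) * h(py)) < 1e-12
assert m > 0
```

Here $h(X)=0.5$ and $h(Y)=0.48$, so $m(X, Y)=0.24>0$ even though $I(X, Y)=0$.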
