### Information Measures for Continuous Random Variables

statistics-lab™ supports students throughout their studies abroad. We have established a solid reputation for information theory assignment help, guaranteeing reliable, high-quality, and original statistics writing services. Our experts are extremely experienced in information theory, so assignments of every kind on the subject are well within their reach.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Information Measures for Continuous Random Variables

The definitions of mutual information for discrete random variables can be directly extended to continuous random variables. Let $X$ and $Y$ be random variables with joint probability density function (pdf) $p(x, y)$ and marginal pdfs $p(x)$ and $p(y)$. The average mutual information between $X$ and $Y$ is defined as follows.

Definition 1.8 The Average Mutual Information between two continuous random variables $X$ and $Y$ is defined as
$$I(X ; Y)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x) p(y \mid x) \log \frac{p(y \mid x) p(x)}{p(x) p(y)} d x d y$$
It should be pointed out that while the definition of average mutual information carries over from discrete random variables to continuous random variables, the concept and its physical interpretation do not. The reason is that the information content of a continuous random variable is actually infinite: an infinite number of bits would be required to represent a continuous random variable precisely. The self-information, and hence the entropy, is infinite. To get around this problem we define a quantity called the differential entropy.
Definition 1.9 The Differential Entropy of a continuous random variable $X$ is defined as
$$h(X)=-\int_{-\infty}^{\infty} p(x) \log p(x) d x$$
Again, it should be understood that there is no physical meaning attached to the above quantity. We carry on with extending our definitions further.
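As a quick numerical sanity check on Definition 1.9, the integral can be approximated directly. The sketch below (plain Python; the helper name `diff_entropy` is ours, not from the text) recovers the known closed form $h = \frac{1}{2}\ln(2\pi e)$ for a standard normal, and shows that a uniform density on $[0, \frac{1}{2}]$ has $h = \ln\frac{1}{2} < 0$ — unlike discrete entropy, differential entropy can be negative, which is one reason it carries no direct physical meaning.

```python
from math import exp, log, pi, sqrt, e

def diff_entropy(pdf, lo, hi, n=200000):
    """Approximate h(X) = -∫ p(x) ln p(x) dx by the midpoint rule (in nats)."""
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        p = pdf(lo + (i + 0.5) * step)
        if p > 0.0:
            total -= p * log(p) * step
    return total

# Standard normal: closed form is 0.5*ln(2*pi*e) ≈ 1.4189 nats.
h_gauss = diff_entropy(lambda x: exp(-x * x / 2) / sqrt(2 * pi), -12.0, 12.0)

# Uniform on [0, 1/2] with density 2: closed form is ln(1/2) = -ln 2 < 0.
h_uniform = diff_entropy(lambda x: 2.0, 0.0, 0.5)
```

Here the integration limits stand in for the infinite range; the Gaussian tails beyond $\pm 12$ contribute negligibly.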

Definition 1.10 The Average Conditional Entropy of a continuous random variable $X$ given $Y$ is defined as
$$h(X \mid Y)=-\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} p(x, y) \log p(x \mid y) d x d y$$
The average mutual information can be expressed as
$$I(X ; Y)=h(X)-h(X \mid Y)=h(Y)-h(Y \mid X)$$
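For jointly Gaussian $X$ and $Y$ with unit variances and correlation $\rho$, the identities above collapse to the closed form $I(X;Y) = -\frac{1}{2}\ln(1-\rho^2)$, which a direct 2-D integration of the defining integral should reproduce. The sketch below (function name ours) checks this numerically:

```python
from math import exp, log, pi, sqrt

def mutual_info_gauss(rho, lim=6.0, n=240):
    """I(X;Y) for zero-mean, unit-variance jointly Gaussian X, Y with
    correlation rho, by 2-D midpoint integration of the defining integral (nats)."""
    step = 2.0 * lim / n
    det = 1.0 - rho * rho
    total = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * step
        px = exp(-x * x / 2) / sqrt(2 * pi)          # marginal p(x)
        for j in range(n):
            y = -lim + (j + 0.5) * step
            py = exp(-y * y / 2) / sqrt(2 * pi)      # marginal p(y)
            # bivariate Gaussian joint density p(x, y)
            pxy = exp(-(x * x - 2 * rho * x * y + y * y) / (2 * det)) / (2 * pi * sqrt(det))
            total += pxy * log(pxy / (px * py)) * step * step
    return total

rho = 0.6
closed_form = -0.5 * log(1.0 - rho * rho)  # = h(Y) - h(Y|X) for this pair
```

For $\rho = 0$ the joint density factors, the log term vanishes, and the integral is exactly zero, matching the intuition that independent variables share no information.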
The following are some properties of differential entropy:

1. $h(a X)=h(X)+\log |a|$ for any constant $a \neq 0$.
2. If $X$ and $Y$ are independent, then $h(X+Y) \geq h(X)$. This is because $h(X+Y) \geq h(X+Y \mid Y)=$ $h(X \mid Y)=h(X)$.
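Both properties can be verified numerically for Gaussian densities, since $X \sim N(0, \sigma^2)$ has the closed-form entropy $\frac{1}{2}\ln(2\pi e \sigma^2)$. The sketch below (helper name ours) integrates $-p\ln p$ directly and checks property 1 with $a = 3$ and property 2 with $X + Y \sim N(0, 2)$ for independent standard normals:

```python
from math import exp, log, pi, sqrt

def gauss_entropy_numeric(var, n=100000):
    """h(X) for X ~ N(0, var), by midpoint integration of -p ln p (nats)."""
    s = sqrt(var)
    lo, hi = -10.0 * s, 10.0 * s
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * step
        p = exp(-x * x / (2 * var)) / sqrt(2 * pi * var)
        total -= p * log(p) * step
    return total

a = 3.0
h_X = gauss_entropy_numeric(1.0)      # X ~ N(0,1)
h_aX = gauss_entropy_numeric(a * a)   # aX ~ N(0, a^2); expect h_X + ln(a)
h_sum = gauss_entropy_numeric(2.0)    # X + Y, independent standard normals; expect >= h_X
```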

## 数学代写|信息论作业代写information theory代考|Relative Entropy

An interesting question to ask is how similar (or different) two probability distributions are. Relative entropy is used as a measure of the distance between two distributions.

Definition 1.11 The Relative Entropy or Kullback-Leibler (KL) Distance between two probability mass functions $p(x)$ and $q(x)$ is defined as
$$D(p \| q)=\sum_{x \in \mathcal{X}} p(x) \log \left(\frac{p(x)}{q(x)}\right)$$
It can be interpreted as the expected value of $\log \left(\frac{p(x)}{q(x)}\right)$.
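The definition translates almost line-for-line into code. The sketch below (function name ours) computes the sum in bits, using the standard convention that terms with $p(x)=0$ contribute zero:

```python
from math import log2

def kl_distance(p, q):
    """D(p||q) in bits: the expectation of log2(p(x)/q(x)) under p.
    Terms with p(x) = 0 contribute 0 (convention 0*log 0 = 0)."""
    return sum(px * log2(px / qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]
d_pq = kl_distance(p, q)  # ≈ 0.085 bits
d_qp = kl_distance(q, p)  # differs from d_pq: KL is not symmetric
```

Measuring a distribution against the uniform, as here, gives the number of bits wasted on average by a code designed for the wrong distribution.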
Example 1.9 Consider a Gaussian distribution $p(x)$ with mean and variance given by $\left(\mu_{1}, \sigma_{1}^{2}\right)$, and another Gaussian distribution $q(x)$ with mean and variance given by $\left(\mu_{2}, \sigma_{2}^{2}\right)$. Using (1.33), we can find the KL distance between two Gaussian distributions as

$$D(p \| q)=\frac{1}{2}\left[\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}+\left(\frac{\mu_{2}-\mu_{1}}{\sigma_{2}}\right)^{2}-1-\ln \left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)\right]$$
The distance becomes zero when the two distributions are identical, i.e., $\mu_{1}=\mu_{2}$ and $\sigma_{1}^{2}=\sigma_{2}^{2}$. It is interesting to note that when $\mu_{1} \neq \mu_{2}$, the distance is minimum for $\sigma_{1}^{2}=\sigma_{2}^{2}$. This minimum distance is given by
$$D_{\min }(p | q)=\frac{1}{2}\left(\frac{\mu_{2}-\mu_{1}}{\sigma_{2}}\right)^{2}$$
Also, the KL distance is infinite if either $\sigma_{1}^{2} \rightarrow 0$ or $\sigma_{2}^{2} \rightarrow 0$, that is, if either of the distributions tends to the Dirac delta.
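The Gaussian closed form can be cross-checked against a direct numerical integration of $\int p(x)\ln\frac{p(x)}{q(x)}\,dx$. The sketch below (helper names ours) does both; taking $p = N(0,1)$ and $q = N(1,2)$, the two agree, and identical parameters give a distance of exactly zero, as noted above:

```python
from math import exp, log, pi, sqrt

def gauss_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def kl_gauss_closed(mu1, var1, mu2, var2):
    """The closed form above, with the logarithm taken as a natural log (nats)."""
    return 0.5 * (var1 / var2 + (mu2 - mu1) ** 2 / var2 - 1 - log(var1 / var2))

def kl_gauss_numeric(mu1, var1, mu2, var2, lo=-15.0, hi=15.0, n=120000):
    """D(p||q) = ∫ p ln(p/q) dx by the midpoint rule over a wide finite range."""
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * step
        px = gauss_pdf(x, mu1, var1)
        total += px * log(px / gauss_pdf(x, mu2, var2)) * step
    return total
```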
The average mutual information can be seen as the relative entropy between the joint distribution, $p(x, y)$, and the product distribution, $p(x) p(y)$, i.e.,
$$I(X ; Y)=D(p(x, y) | p(x) p(y))$$
We note that, in general, $D(p \| q) \neq D(q \| p)$. Thus, although the relative entropy is used as a distance measure, it does not satisfy the symmetry property of a true distance metric. To overcome this, another measure, called the Jensen-Shannon distance, is sometimes used to quantify the similarity between two distributions.

Definition 1.12 The Jensen-Shannon Distance between two probability mass functions $p(x)$ and $q(x)$ is defined as
$$J S D(p | q)=\frac{1}{2} D(p | m)+\frac{1}{2} D(q | m)$$
where $m=\frac{1}{2}(p+q)$.
Did you know: if the base of the logarithm is 2, then $0 \leq J S D(p \| q) \leq 1$. The Jensen-Shannon distance is sometimes referred to as the Jensen-Shannon divergence or the Information Radius in the literature.
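Definition 1.12 is straightforward to implement on top of a KL routine. The sketch below (function names ours) uses base-2 logarithms and exhibits both properties: the measure is symmetric, and distributions with disjoint supports hit the upper bound of 1:

```python
from math import log2

def kl_bits(p, q):
    """D(p||q) in bits; terms with p(x) = 0 contribute 0."""
    return sum(px * log2(px / qx) for px, qx in zip(p, q) if px > 0)

def jsd(p, q):
    """Jensen-Shannon distance via the mixture m = (p + q)/2."""
    m = [(px + qx) / 2 for px, qx in zip(p, q)]
    return 0.5 * kl_bits(p, m) + 0.5 * kl_bits(q, m)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]
```

Note that $m$ always dominates both $p$ and $q$ (wherever $p(x) > 0$ we have $m(x) > 0$), so the JSD is finite even when the KL distance in either direction is not.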

## Huffman Coding

We will now study an algorithm for constructing efficient source codes for a DMS whose source symbols are not equally probable. A variable-length encoding algorithm was suggested by Huffman in 1952, based on the source symbol probabilities $P\left(x_{i}\right), i=1,2, \ldots, L$. The algorithm is optimal in the sense that the average number of bits required to represent the source symbols is minimized, subject to the prefix condition being met. The steps of the Huffman coding algorithm are as follows:
(i) Arrange the source symbols in decreasing order of their probabilities.
(ii) Take the bottom two symbols and tie them together as shown in Fig. 1.11. Add the probabilities of the two symbols and write the sum on the combined node. Label the two branches with a '1' and a '0' as depicted in Fig. 1.11.
(iii) Treat this sum of probabilities as a new probability associated with a new symbol. Again pick the two smallest probabilities and tie them together to form a new probability. Each time we combine two symbols we reduce the total number of symbols by one. Whenever we tie together two probabilities (nodes), we label the two branches with a '1' and a '0'.

(iv) Continue the procedure until only one probability is left (and it should be 1 if your addition is right!). This completes the construction of the Huffman tree.
(v) To find out the prefix codeword for any symbol, follow the branches from the final node back to the symbol. While tracing back the route, read out the labels on the branches. This is the codeword for the symbol.
The algorithm can be easily understood using the following example.
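The steps above can also be sketched in code. This is a minimal illustration, not the text's own example: instead of tracing branches back from the root (step v), it achieves the same effect by prepending the branch label to every codeword in a subtree each time two nodes are merged. A heap keeps the two least-probable nodes at hand:

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a prefix code from a {symbol: probability} dict by repeatedly
    merging the two least-probable nodes, as in steps (i)-(iv) above."""
    tiebreak = count()  # keeps heap comparisons away from the dict payloads on ties
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)  # two smallest probabilities
        p1, _, codes1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes0.items()}        # label one branch '0'
        merged.update({s: "1" + c for s, c in codes1.items()})  # and the other '1'
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]  # the single remaining node holds every codeword

codes = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```

For this dyadic source the codeword lengths come out as 1, 2, 3, 3 bits, so the average length is 1.75 bits per symbol, equal to the source entropy, and no codeword is a prefix of another.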
