### Math Assignment Help | Information Theory Exam Help | ECE4042

statistics-lab™ supports you throughout your studies abroad. We have built a solid reputation for information theory assignment help and guarantee reliable, high-quality, and original statistics writing services. Our experts have extensive experience with information theory assignments, so coursework of any kind in this area poses no difficulty.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Math Assignment Help | Information Theory Exam Help | Definition of entropy of a continuous random variable

Up to now we have assumed that a random variable $\xi$, with entropy $H_{\xi}$, takes values in some discrete space consisting of either a finite or a countable number of elements, for instance, messages, symbols, etc. However, continuous variables are also widespread in engineering, i.e. variables (scalar or vector) that take values in a continuous space $X$, most often the space of real numbers. Such a random variable $\xi$ is described by a probability density function $p(\xi)$, which assigns the probability
$$\Delta P=\int_{\xi \in \Delta X} p(\xi)\, d\xi \approx p(A)\, \Delta V \quad (A \in \Delta X)$$
of $\xi$ appearing in a region $\Delta X$ of the specified space $X$ with volume $\Delta V$ ($d\xi = dV$ is the volume differential).
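As a quick numeric sanity check of this approximation (a minimal sketch; the standard normal density, the point $A$, and the interval width are illustrative choices, not taken from the text):

```python
from scipy.stats import norm

# Check  ΔP = ∫_{ΔX} p(ξ) dξ ≈ p(A) ΔV  on a small interval ΔX = [A, A + ΔV].
A, dV = 0.5, 0.01                            # illustrative point and volume
dP_exact = norm.cdf(A + dV) - norm.cdf(A)    # the exact integral of the density
dP_approx = norm.pdf(A) * dV                 # the approximation p(A) ΔV
print(dP_exact, dP_approx)                   # ≈ 0.003512 vs 0.003521
```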

How can we define entropy $H_{\xi}$ for such a random variable? One of many possible formal ways is the following: In the formula
$$H_{\xi}=-\sum_{\xi} P(\xi) \ln P(\xi)=-\mathbb{E}[\ln P(\xi)] \tag{1.6.1}$$
appropriate for a discrete variable, we formally replace the probabilities $P(\xi)$ in the argument of the logarithm by the probability density and thereby consider the expression
$$H_{\xi}=-\mathbb{E}[\ln p(\xi)]=-\int_{X} p(\xi) \ln p(\xi)\, d\xi. \tag{1.6.2}$$
This way of defining entropy is not well justified. It remains unclear how to define entropy in the mixed case, when a continuous distribution on a continuous space coexists with concentrations of probability at individual points, i.e. when the probability density contains delta-shaped singularities. Entropy (1.6.2) also suffers from the drawback that it is not invariant: it changes under a non-degenerate transformation of variables $\eta=f(\xi)$, in contrast to entropy (1.6.1), which remains invariant under such transformations.
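The lack of invariance is easy to observe numerically. The following minimal Python sketch (the uniform densities and grids are illustrative choices) evaluates (1.6.2) for $\xi$ uniform on $[0,1]$ and for $\eta = 2\xi$, a non-degenerate transformation, and obtains two different values, $0$ and $\ln 2$:

```python
import numpy as np

def differential_entropy(p, x):
    """Numerically evaluate entropy (1.6.2), -∫ p(x) ln p(x) dx, on the grid x."""
    return -np.trapz(p * np.log(p), x)

# xi uniform on [0, 1]: p(xi) = 1, hence H_xi = 0.
x = np.linspace(0.0, 1.0, 10_001)
print(differential_entropy(np.ones_like(x), x))       # ≈ 0.0

# eta = 2 xi is uniform on [0, 2]: p(eta) = 1/2, hence H_eta = ln 2.
y = np.linspace(0.0, 2.0, 10_001)
print(differential_entropy(np.full_like(y, 0.5), y))  # ≈ 0.693 ≈ ln 2
```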

## Math Assignment Help | Information Theory Exam Help | Properties of entropy in the generalized version

Entropy (1.6.13), (1.6.16) defined in the previous section possesses a set of properties analogous to those of the entropy of a discrete random variable considered earlier. Such an analogy is quite natural if we take into account the interpretation of entropy (1.6.13) (given in Section 1.6) as an asymptotic case (for large $N$) of entropy (1.6.1) of a discrete random variable.

The non-negativity property of entropy, discussed in Theorem $1.1$, is not always satisfied for entropy (1.6.13), (1.6.16), but it does hold for sufficiently large $N$. The constraint
$$H_{\xi}^{P / Q} \leqslant \ln N$$
ensures the non-negativity of the entropy $H_{\xi}$.
Now we turn to Theorem $1.2$, which concerned the maximum value of entropy. In the case of entropy (1.6.13), when comparing different distributions $P$ we need to keep the measure $\nu$ fixed. As was mentioned, quantity (1.6.17) is non-negative, and thus (1.6.16) entails the inequality
$$H_{\xi} \leqslant \ln N .$$
At the same time, if we suppose $P=Q$, then, evidently, we will have
$$H_{\xi}=\ln N .$$
This proves the following statement that is an analog of Theorem $1.2$.
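A small numeric illustration of this statement (a sketch, not the text's derivation: it assumes the decomposition $H_{\xi} = \ln N - H_{\xi}^{P/Q}$ suggested by (1.6.16), with $H_{\xi}^{P/Q} = \sum_{\xi} P(\xi) \ln\bigl(P(\xi)/Q(\xi)\bigr)$; the distribution $P$ below is an arbitrary choice):

```python
import numpy as np

# Relative entropy H^{P/Q} = Σ P ln(P/Q); it is non-negative and vanishes
# only at P = Q, so H = ln N - H^{P/Q} satisfies H <= ln N, with equality at P = Q.
def relative_entropy(P, Q):
    return float(np.sum(P * np.log(P / Q)))

N = 4
Q = np.full(N, 1.0 / N)                      # the measure, kept fixed (uniform here)
P = np.array([0.4, 0.3, 0.2, 0.1])           # an arbitrary comparison distribution

print(np.log(N) - relative_entropy(P, Q))    # ≈ 1.280 < ln 4 ≈ 1.386
print(np.log(N) - relative_entropy(Q, Q))    # = ln 4 exactly, since P = Q
```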

## Math Assignment Help | Information Theory Exam Help | Encoding of discrete information

The definition of the amount of information given in Chapter 1 is justified when we deal with the transformation of information from one kind into another, i.e. when considering the encoding of information. It is essential that a law of conservation of the amount of information holds under such a transformation. It is very useful to draw an analogy with the law of conservation of energy: the latter is the main argument for introducing the notion of energy. Of course, the law of conservation of information is more complex than the law of conservation of energy in two respects. The law of conservation of energy establishes an exact equality of energies when one type of energy is transformed into another. In transforming information, however, we have a weaker relation, namely 'not greater' ($\leqslant$): the amount of information cannot increase. The equality sign corresponds to optimal encoding. Thus, when formulating the law of conservation of information, we have to point out that there possibly exists an encoding for which equality of the amounts of information occurs.

The second complication is that the equality is not exact: it is approximate, asymptotic, and valid for complex (large) messages and for composite random variables. The larger the system of messages, the more exact the relation becomes; the exact equality sign holds only in the limiting case. In this respect there is an analogy with the laws of statistical thermodynamics, which are valid for large thermodynamic systems consisting of a very large number of molecules (of the order of Avogadro's number).

When performing encoding, we assume that a long sequence of messages $\xi_1, \xi_2, \ldots$ is given together with their probabilities, i.e. a sequence of random variables. Therefore, the amount of information (the entropy $H$) corresponding to this sequence can be calculated. This information can be recorded and transmitted by different realizations of the sequence. If $M$ is the number of such realizations, then the law of conservation of information can be expressed by the equality $H=\ln M$, complicated by the two factors mentioned above (i.e., actually $H \leqslant \ln M$).
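The asymptotic character of this equality can be made concrete with a simple count (a hedged sketch: the Bernoulli source and the identification of $M$ with the number of typical realizations are illustrative assumptions, not the text's construction). For $n$ independent binary messages with $P(1) = p$, the entropy is $H = n\,h(p)$, and the number $M$ of typical realizations satisfies $\ln M \approx n\,h(p)$, the approximation improving as $n$ grows:

```python
from math import comb, log

def h(p):
    """Binary entropy per message, in natural units (matching the text's ln)."""
    return -p * log(p) - (1 - p) * log(1 - p)

p = 0.3
for n in (10, 100, 1_000, 10_000):
    k = round(n * p)               # a typical realization has about n*p ones
    ln_M = log(comb(n, k))         # ln of the number of typical realizations
    print(n, ln_M / n, h(p))       # ln M / n approaches h(p) as n grows
```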

Two different approaches may be used to solve the encoding problem. One can perform encoding of an infinite sequence of messages, i.e. online (or 'sliding') encoding. The inverse procedure, i.e. decoding, is performed analogously.

