Tag: COMP431

Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|CS499

If you are having trouble with problems in cryptography and network security, feel free to contact our 24/7 customer service via the link in the upper right corner at any time.

Cryptography is the study of secure communication techniques that allow only the sender and the intended recipient of a message to view its contents.

statistics-lab™ supports your study-abroad career. We have built a solid reputation for cryptography and network security assignment help, guaranteeing reliable, high-quality, and original statistics services. Our experts have extensive experience with cryptography and network security assignments of every kind.

We provide assignment help for cryptography and network security and its related subjects, covering a wide range of services, including but not limited to:

  • Statistical Inference
  • Statistical Computing
  • Advanced Probability Theory
  • Advanced Mathematical Statistics
  • (Generalized) Linear Models
  • Statistical Machine Learning
  • Longitudinal Data Analysis
  • Foundations of Data Science

Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|Prefix Codes

For a prefix code, no codeword is a prefix, or first part, of another codeword. Therefore, the code shown in Table $3.7$ is a prefix code. On the other hand, the code shown in Table $3.8$ is not a prefix code because the binary word 10, for instance, is a prefix of the codeword 100.

To decode a sequence of binary words produced by a prefix encoder, the decoder begins at the first binary digit of the sequence and decodes one codeword at a time. The procedure is similar to traversing a decision tree, which is a representation of the codewords of a given source code.

Figure $3.3$ illustrates the decision tree for the prefix code shown in Table 3.9.

The tree has one initial state and four final states, which correspond to the symbols $x_0, x_1, x_2$, and $x_3$. From the initial state, for each received bit, the decoder searches the tree until a final state is reached.

The decoder then emits the corresponding decoded symbol and returns to the initial state. Therefore, from the initial state, after receiving a 1, the source decoder decodes symbol $x_0$ and returns to the initial state. If it receives a 0, the decoder moves to the lower part of the tree; then, after receiving another 0, the decoder moves further down the tree and, after receiving a 1, it retrieves $x_2$ and returns to the initial state.
Considering the code from Table $3.9$, with the decoding tree from Figure 3.3, the binary sequence 011100010010100101 is decoded into the output sequence $x_1 x_0 x_0 x_3 x_0 x_2 x_1 x_2 x_1$.
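
To make the traversal concrete, the short sketch below decodes that bit sequence with a dictionary-based prefix decoder. Since Table 3.9 is not reproduced here, the codeword assignment $x_0 \to 1$, $x_1 \to 01$, $x_2 \to 001$, $x_3 \to 000$ is an assumption, chosen so that the sequence above decodes to the stated output.

```python
# A minimal prefix-code decoder sketch. The code table is an assumed
# reconstruction of Table 3.9 (the table itself is not reproduced here).
CODE = {"1": "x0", "01": "x1", "001": "x2", "000": "x3"}

def decode_prefix(bits, code):
    """Scan the bit string and emit a symbol whenever a codeword is completed."""
    symbols, current = [], ""
    for b in bits:
        current += b
        if current in code:           # reached a final state of the decision tree
            symbols.append(code[current])
            current = ""              # return to the initial state
    if current:
        raise ValueError("sequence ended in the middle of a codeword")
    return symbols

print(decode_prefix("011100010010100101", CODE))
# ['x1', 'x0', 'x0', 'x3', 'x0', 'x2', 'x1', 'x2', 'x1']
```

Because no codeword is a prefix of another, the decoder never needs to look ahead: the first match is always the correct one.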

By construction, a prefix code is always unequivocally decodable, which is important to avoid any confusion at the receiver.

Consider a code that has been constructed for a discrete source with alphabet $\left\{x_1, x_2, \ldots, x_K\right\}$. Let $\left\{p_1, p_2, \ldots, p_K\right\}$ be the source statistics and $l_k$ be the codeword length for symbol $x_k$, $k=1, \ldots, K$. If the binary code constructed for the source is a prefix code, then its codeword lengths satisfy the Kraft-McMillan inequality
$$
\sum_{k=1}^K 2^{-l_k} \leq 1,
$$
in which the factor 2 is the radix, or number of symbols, of the binary alphabet.
For a memoryless discrete source with entropy $H(X)$, the average codeword length of a prefix code is bounded as
$$
H(X) \leq \bar{L}<H(X)+1
$$
The left-hand side equality is obtained under the condition that symbol $x_k$ is emitted by the source with probability $p_k=2^{-l_k}$, in which $l_k$ is the length of the codeword assigned to symbol $x_k$.
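
As a quick numerical illustration of both the inequality and the bound on the average length, the sketch below uses a hypothetical four-symbol source; the probabilities and codeword lengths are assumed values chosen so that $p_k = 2^{-l_k}$, not data from the tables in the text.

```python
import math

# Hypothetical source statistics and prefix-code lengths (illustrative values only).
p = [0.5, 0.25, 0.125, 0.125]   # symbol probabilities p_k
l = [1, 2, 3, 3]                # codeword lengths l_k

kraft = sum(2 ** -lk for lk in l)                  # Kraft-McMillan sum
H = -sum(pk * math.log2(pk) for pk in p)           # source entropy H(X), in bits
L_bar = sum(pk * lk for pk, lk in zip(p, l))       # average codeword length

print(f"Kraft sum = {kraft}")                      # 1.0 <= 1, so a prefix code exists
print(f"H(X) = {H}, average length = {L_bar}")
assert kraft <= 1 and H <= L_bar < H + 1
```

Because $p_k=2^{-l_k}$ for every symbol in this example, the average length equals the entropy, which is exactly the case in which the left-hand side equality holds.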

Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|The Information Unit

There is some confusion between the binary digit, abbreviated as bit, and the information particle, also baptized as bit by John Tukey and Claude Shannon.
In a meeting of the Institute of Electrical and Electronics Engineers (IEEE), the largest scientific institution in the world, the author of this book proposed the shannon [Sh] as a unit of information transmission, equivalent to bit per second. It is instructive to note that the bit, as used today, is not a unit of information, because it has not been approved by the International System of Units (SI).

What is curious about that meeting was the misunderstanding that surrounded the units, in particular, regarding the difference between the concepts of information unit and digital logic unit (Alencar, 2007).

To make things clear, the binary digit is associated with a certain state of a digital system, and not to information. A binary digit “1” can refer to 5 volts, in TTL logic, or 12 volts, for CMOS logic.

The information bit exists independently of any association with a particular voltage level. It can be associated, for example, with discrete information or with the quantization of analog information.

For instance, the information bits recorded on the surface of a compact disk are stored as a series of depressions on the plastic material, which are read by an optical beam, generated by a semiconductor laser. But, obviously, the depressions are not the information. They represent a means for the transmission of information, a material substrate that carries the data.

In the same way, the information can exist, even if it is not associated with light or other electromagnetic radiation. It can be transported by several means, including paper, and materializes itself when it is processed by a computer or by a human being.



Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|COMP431


Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|Requirements for an Information Metric

A few fundamental properties are required of the entropy in order to obtain an axiomatic basis for the information measure (Reza, 1961).

  • If the event probabilities suffer a small change, the associated measure must change accordingly, in a continuous manner, which gives the metric a physical meaning:
    $H\left(p_1, p_2, \ldots, p_N\right)$ is continuous in $p_k$, $0 \leq p_k \leq 1$, $k=1,2, \ldots, N$. (3.7)
  • The information measure must be symmetric in relation to the probability set $P$. That is, the entropy is invariant to the order of the events:
    $$
    H\left(p_1, p_2, p_3, \ldots, p_N\right)=H\left(p_1, p_3, p_2, \ldots, p_N\right) .
    $$
  • The maximum of the entropy is obtained when the events are equally probable. That is, when nothing is known about the set of events, or about what message has been produced, the assumption of a uniform distribution gives the highest information quantity, which corresponds to the highest level of uncertainty:
    $$
    \text { Maximum of } H\left(p_1, p_2, \ldots, p_N\right)=H\left(\frac{1}{N}, \frac{1}{N}, \ldots, \frac{1}{N}\right) .
    $$
  • Example: Consider two sources that emit four symbols. The first source symbols, shown in Table 3.2, have equal probabilities, and the second source symbols, shown in Table 3.3, are produced with unequal probabilities, as illustrated by the sketch after this list.
  • The mentioned property indicates that the first source attains the highest level of uncertainty, regardless of the probability values of the second source, as long as they are different.
  • Consider that an adequate measure for the average uncertainty, $H\left(p_1, p_2, \ldots, p_N\right)$, associated with a set of events, has been found. Assume that event $\left\{x_N\right\}$ is divided into $M$ disjoint sets, with probabilities $q_k$, such that
    $$
    p_N=\sum_{k=1}^M q_k,
    $$
    and the probabilities associated with the new events can be normalized in such a way that
    $$
    \frac{q_1}{p_N}+\frac{q_2}{p_N}+\cdots+\frac{q_M}{p_N}=1 .
    $$
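
To illustrate the maximum-entropy property numerically for two four-symbol sources, the sketch below compares a uniform distribution with an unequal one. Since Tables 3.2 and 3.3 are not reproduced here, the unequal probabilities are assumed values used only for illustration.

```python
import math

def entropy(p):
    """H(p) = sum of p_k * log2(1/p_k), in bits."""
    return sum(pk * math.log2(1 / pk) for pk in p if pk > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # first source: equally probable symbols
skewed = [0.5, 0.25, 0.15, 0.10]     # second source: assumed unequal probabilities

print(entropy(uniform))  # 2.0 bits, the maximum for four symbols
print(entropy(skewed))   # about 1.74 bits, a lower level of uncertainty
```

Any choice of unequal probabilities for the second source gives an entropy strictly below 2 bits, in agreement with the property stated above.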

Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|Source Coding

The efficient representation of data produced by a discrete source is called source coding. For a source coder to obtain a good performance, it is necessary to take the symbol statistics into account. If the symbol probabilities are different, it is useful to assign short codewords to probable symbols and long ones to infrequent symbols. This produces a variable length code, such as the Morse code.
Two usual requirements to build an efficient code are:

  1. The codewords generated by the coder are binary.
  2. The codewords are unequivocally decodable, and the original message sequence can be reconstructed from the binary coded sequence.

Consider Figure 3.2, which shows a memoryless discrete source whose output $x_k$ is converted by the source coder into a sequence of 0s and 1s, denoted $b_k$. Assume that the source alphabet has $K$ different symbols and that the $k$-th symbol, $x_k$, occurs with probability $p_k$, $k=1, \ldots, K$.

Let $l_k$ be the average length, measured in bits, of the binary word assigned to symbol $x_k$. The average length of the words produced by the source coder is defined as (Haykin, 1988)
$$
\bar{L}=\sum_{k=1}^K p_k l_k .
$$
The parameter $\bar{L}$ represents the average number of bits per source symbol used in the source coding process. Let $L_{\min }$ be the smallest possible value of $\bar{L}$. The source coding efficiency is defined as (Haykin, 1988)
$$
\eta=\frac{L_{\min }}{\bar{L}} .
$$
Because $\bar{L} \geq L_{\min }$, it follows that $\eta \leq 1$. The source coding efficiency increases as $\eta$ approaches 1.

Shannon’s first theorem, or source coding theorem, provides a means to determine $L_{\min }$ (Haykin, 1988).

Given a memoryless discrete source with entropy $H(X)$, the average length of the codewords is limited by
$$
\bar{L} \geq H(X) .
$$
Entropy $H(X)$, therefore, represents a fundamental limit on the average number of bits per source symbol, $\bar{L}$, needed to represent a memoryless discrete source, and this number can be as small as, but never smaller than, the source entropy $H(X)$.
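
The sketch below ties these definitions together for a hypothetical four-symbol source: it computes the average codeword length, takes $L_{\min }=H(X)$ as given by the source coding theorem, and reports the efficiency $\eta$. The probabilities and codeword lengths are assumed values, not taken from Figure 3.2 or from the tables in the text.

```python
import math

# Hypothetical memoryless source and a prefix code for it (illustrative values only).
p = [0.4, 0.3, 0.2, 0.1]   # symbol probabilities p_k
l = [1, 2, 3, 3]           # codeword lengths l_k, e.g. for the code 0, 10, 110, 111

H = -sum(pk * math.log2(pk) for pk in p)       # entropy H(X), the limit L_min
L_bar = sum(pk * lk for pk, lk in zip(p, l))   # average codeword length
eta = H / L_bar                                # source coding efficiency

print(f"H(X) = {H:.3f} bit/symbol, L = {L_bar:.1f} bit/symbol, efficiency = {eta:.3f}")
```

For these assumed values, $\bar{L}$ exceeds $H(X)$ by a fraction of a bit, so the efficiency is close to, but below, 1, as the theorem requires.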



Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|CS388H


Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|Information Theory

Information theory is a branch of probability theory that has applications and correlations with many areas, including communication systems, communication theory, physics, language and meaning, cybernetics, psychology, art, and complexity theory (Pierce, 1980). The basis for the theory was established by Harry Theodor Nyquist (1889-1976) (Nyquist, 1924), also known as Harry Nyquist, and Ralph Vinton Lyon Hartley (1888-1970), who invented the Hartley oscillator (Hartley, 1928). They published the first articles on the subject, in which the factors that influence the transmission of information were discussed.

The seminal article by Claude E. Shannon (1916-2001) extended the theory to include new factors, such as the noise effect in the channel and the savings that could be obtained as a function of the statistical structure of the original message and the information receiver characteristics (Shannon, 1948b). Shannon defined the fundamental communication problem as the possibility of, exactly or approximately, reproducing, at a certain point, a message that has been chosen at another one.

The main semantic aspects of the communication, initially established by Charles Sanders Peirce (1839-1914), a philosopher and creator of semiotic theory, are not relevant for the development of the Shannon information theory. What is important is to consider that a particular message is selected from a set of possible messages.

Of course, as mentioned by John Robinson Pierce (1910-2002), quoting the philosopher Alfred Jules Ayer (1910-1989), it is possible to communicate not only information but also knowledge, errors, opinions, ideas, experiences, desires, commands, emotions, and feelings. Heat and movement can be communicated, as well as force, weakness, and disease (Pierce, 1980).

Hartley found several reasons why the logarithm is a natural measure of information:

  • It is a practical metric in engineering, considering that various parameters, such as time and bandwidth, are proportional to the logarithm of the number of possibilities.
  • From a mathematical point of view, it is an adequate measure because several limit operations are simply stated in terms of logarithms.
  • It has an intuitive appeal, as an adequate metric, because, for instance, two binary symbols have four possibilities of occurrence.

The choice of the logarithm base defines the information unit. If base 2 is used, the unit is the bit, an acronym suggested by John W. Tukey for binary digit, which is also a play on words, since it can also mean a small piece of information. The information transmission rate is informally given in bit/s, but a unit has been proposed to pay tribute to the scientist who developed the concept; it is called the shannon, or [Sh] for short. This has a direct correspondence with the unit of frequency, the hertz [Hz], for cycles per second, which was adopted by the International System of Units (SI)${ }^1$.

Computer Science Assignment Help|Cryptography and Network Security Assignment and Exam Help|Information Measurement

The objective of this section is to establish a measure for the information content of a discrete system, using probability theory. Consider a discrete random experiment, such as the occurrence of a symbol, and its associated sample space $\Omega$, in which $X$ is a real random variable (Reza, 1961).
The random variable $X$ can assume the following values:
$$
\begin{aligned}
&X=\left\{x_1, x_2, \ldots, x_N\right\}, \\
&\text { in which } \bigcup_{k=1}^N x_k=\Omega,
\end{aligned}
$$
with probabilities in the set $P$
$$
\begin{gathered}
P=\left\{p_1, p_2, \ldots, p_N\right\}, \\
\text { in which } \sum_{k=1}^N p_k=1 .
\end{gathered}
$$
The information associated with a particular event is given by
$$
I\left(x_i\right)=\log \left(\frac{1}{p_i}\right),
$$
which is meaningful because the sure event has probability one and zero information, by a property of the logarithm, and the impossible event has zero probability and infinite information.
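
A small numerical illustration of this measure, using the base-2 logarithm so that the result is expressed in bits; the probabilities are arbitrary values chosen for the example.

```python
import math

def self_information(p):
    """I(x) = log2(1/p), the information of an event of probability p, in bits."""
    return math.log2(1 / p)

for p in (1.0, 0.5, 0.25, 0.01):
    print(p, self_information(p))
# The sure event (p = 1) carries zero information; as p approaches 0,
# the information grows without bound.
```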

