计算机代写|密码学与网络安全代写cryptography and network security代考|CS499

statistics-lab™ supports your study-abroad career. We have established a solid reputation in cryptography and network security tutoring, guaranteeing reliable, high-quality, and original statistics support. Our experts are highly experienced in cryptography and network security, and handle all kinds of related assignments with ease.

• Statistical Inference 统计推断
• Statistical Computing 统计计算
• Advanced Probability Theory 高等概率论
• Advanced Mathematical Statistics 高等数理统计学
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础

计算机代写|密码学与网络安全代写cryptography and network security代考|Prefix Codes

For a prefix code, no codeword is a prefix of any other codeword. Therefore, the code shown in Table $3.7$ is a prefix code. On the other hand, the code shown in Table $3.8$ is not a prefix code because the binary word 10, for instance, is a prefix of the codeword 100.

To decode a sequence of binary words produced by a prefix encoder, the decoder begins at the first binary digit of the sequence and decodes one codeword at a time. The procedure is similar to traversing a decision tree, which is a representation of the codewords of a given source code.

Figure $3.3$ illustrates the decision tree for the prefix code presented in Table 3.9.

The tree has one initial state and four final states, which correspond to the symbols $x_0, x_1, x_2$, and $x_3$. From the initial state, for each received bit, the decoder searches the tree until a final state is found.

The decoder then emits the corresponding decoded symbol and returns to the initial state. Therefore, from the initial state, after receiving a 1, the source decoder decodes symbol $x_0$ and returns to the initial state. If it receives a 0, the decoder moves to the lower part of the tree; then, after receiving another 0, the decoder moves further down the tree and, after receiving a 1, the decoder retrieves $x_2$ and returns to the initial state.
Considering the code from Table $3.9$, with the decoding tree from Figure 3.3, the binary sequence 011100010010100101 is decoded into the output sequence $x_1 x_0 x_0 x_3 x_0 x_2 x_1 x_2 x_1$.
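The tree search can be sketched as a short greedy decoder. The codeword table below is an assumed reconstruction of Table 3.9, which is not reproduced here: it assigns $x_0 \mapsto 1$, $x_1 \mapsto 01$, $x_2 \mapsto 001$, $x_3 \mapsto 000$, an assignment consistent with the worked example above.

```python
# Assumed reconstruction of the prefix code of Table 3.9 (the table is
# not reproduced in the text); it matches the decoded example above.
CODEWORDS = {"1": "x0", "01": "x1", "001": "x2", "000": "x3"}

def decode(bits: str) -> list[str]:
    """Greedy prefix decoding: read bits until a codeword is recognized.

    Because no codeword is a prefix of another, the first match is the
    only possible one, so the decoder can emit the symbol and return to
    the initial state of the decision tree immediately.
    """
    symbols, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in CODEWORDS:           # reached a final state
            symbols.append(CODEWORDS[buffer])
            buffer = ""                   # back to the initial state
    if buffer:
        raise ValueError("sequence ended inside a codeword")
    return symbols

# The binary sequence from the text decodes to the expected symbols:
print(decode("011100010010100101"))
# ['x1', 'x0', 'x0', 'x3', 'x0', 'x2', 'x1', 'x2', 'x1']
```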

By construction, a prefix code is always unequivocally decodable, which is important to avoid any confusion at the receiver.

Consider a code that has been constructed for a discrete source with alphabet $\left\{x_1, x_2, \ldots, x_K\right\}$. Let $\left\{p_1, p_2, \ldots, p_K\right\}$ be the source statistics and $l_k$ be the codeword length for symbol $x_k, k=1, \ldots, K$. If the binary code constructed for the source is a prefix code, then the codeword lengths satisfy the Kraft-McMillan inequality
$$\sum_{k=1}^K 2^{-l_k} \leq 1,$$
in which the factor 2 is the radix, or number of symbols, of the binary alphabet.
For a memoryless discrete source with entropy $H(X)$, the average codeword length of a prefix code is bounded by
$$H(X) \leq \bar{L}<H(X)+1$$
The equality on the left-hand side holds under the condition that symbol $x_k$ is emitted from the source with probability $p_k=2^{-l_k}$, in which $l_k$ is the length of the codeword assigned to symbol $x_k$.
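As a numerical sketch, the inequality and the bound can be checked for an assumed set of codeword lengths $\{1, 2, 3, 3\}$ with matching dyadic probabilities $p_k = 2^{-l_k}$, the condition under which the left-hand side equality holds:

```python
import math

lengths = [1, 2, 3, 3]                    # assumed prefix-code lengths
probs = [2.0 ** -l for l in lengths]      # p_k = 2^{-l_k}

# Kraft-McMillan inequality: the sum must not exceed 1 (here it is
# exactly 1, so the code tree is full).
kraft_sum = sum(2.0 ** -l for l in lengths)
assert kraft_sum <= 1

# Entropy H(X) and average codeword length L-bar.
H = -sum(p * math.log2(p) for p in probs)
L_bar = sum(p * l for p, l in zip(probs, lengths))

print(kraft_sum, H, L_bar)  # 1.0 1.75 1.75: H(X) = L-bar, the lower bound
assert H <= L_bar < H + 1
```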

计算机代写|密码学与网络安全代写cryptography and network security代考|The Information Unit

There is some confusion between the binary digit, abbreviated as bit, and the unit of information, also baptized as bit by John Tukey and Claude Shannon.
In a meeting of the Institute of Electrical and Electronics Engineers (IEEE), the largest scientific institution in the world, the author of this book proposed the shannon [Sh] as a unit of information transmission, equivalent to one bit per second. It is worth noting that the bit, as used today, is not a unit of information, because it has not been approved by the International System of Units (SI).

What is curious about that meeting was the misunderstanding that surrounded the units, in particular, regarding the difference between the concepts of information unit and digital logic unit (Alencar, 2007).

To make things clear, the binary digit is associated with a certain state of a digital system, not with information. A binary digit “1” can refer to 5 volts, in TTL logic, or 12 volts, in CMOS logic.

The information bit exists independently of any association with a particular voltage level. It can be associated, for example, with discrete information or with the quantization of analog information.

For instance, the information bits recorded on the surface of a compact disk are stored as a series of depressions on the plastic material, which are read by an optical beam, generated by a semiconductor laser. But, obviously, the depressions are not the information. They represent a means for the transmission of information, a material substrate that carries the data.

In the same way, the information can exist, even if it is not associated with light or other electromagnetic radiation. It can be transported by several means, including paper, and materializes itself when it is processed by a computer or by a human being.



计算机代写|密码学与网络安全代写cryptography and network security代考|Requirements for an Information Metric

A few fundamental properties are required of the entropy, in order to obtain an axiomatic basis for the measurement of information (Reza, 1961).

• If the event probabilities suffer a small change, the associated measure must change accordingly, in a continuous manner, which gives the metric a physical meaning:
$$H\left(p_1, p_2, \ldots, p_N\right) \text { is continuous in } p_k, \quad 0 \leq p_k \leq 1, \quad k=1,2, \ldots, N. \tag{3.7}$$
• The information measure must be symmetric with respect to the probability set $P$. That is, the entropy is invariant to the order of the events:
$$H\left(p_1, p_2, p_3, \ldots, p_N\right)=H\left(p_1, p_3, p_2, \ldots, p_N\right).$$
• The maximum of the entropy is obtained when the events are equally probable. That is, when nothing is known about the set of events, or about which message has been produced, the assumption of a uniform distribution gives the highest information quantity, corresponding to the highest level of uncertainty:
$$\max H\left(p_1, p_2, \ldots, p_N\right)=H\left(\frac{1}{N}, \frac{1}{N}, \ldots, \frac{1}{N}\right).$$
• Example: Consider two sources that emit four symbols. The first source symbols, shown in Table 3.2, have equal probabilities, and the second source symbols, shown in Table 3.3, are produced with unequal probabilities.
• This property indicates that the first source attains the highest level of uncertainty, regardless of the probability values of the second source, as long as they are unequal.
• Consider that an adequate measure for the average uncertainty, $H\left(p_1, p_2, \ldots, p_N\right)$, associated with a set of events, has been found. Assume that event $\left\{x_N\right\}$ is divided into $M$ disjoint sets, with probabilities $q_k$, such that
$$p_N=\sum_{k=1}^M q_k,$$
and the probabilities associated with the new events can be normalized in such a way that
$$\frac{q_1}{p_N}+\frac{q_2}{p_N}+\cdots+\frac{q_M}{p_N}=1 .$$
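The symmetry and maximum properties can be verified numerically. The sketch below uses an arbitrary three-event distribution and base-2 logarithms (entropy in bits); the probability values are illustrative, not taken from the text.

```python
import math

def entropy(probs):
    """H(p1, ..., pN) in bits; events with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Symmetry: the entropy is invariant to the order of the events.
assert abs(entropy([0.5, 0.3, 0.2]) - entropy([0.3, 0.5, 0.2])) < 1e-12

# Maximum: the uniform distribution attains the highest uncertainty,
# H(1/N, ..., 1/N) = log2(N).
N = 3
uniform = entropy([1 / N] * N)
assert entropy([0.5, 0.3, 0.2]) < uniform
print(round(uniform, 4))  # 1.585, i.e., log2(3)
```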

计算机代写|密码学与网络安全代写cryptography and network security代考|Source Coding

The efficient representation of data produced by a discrete source is called source coding. For a source coder to obtain a good performance, it is necessary to take the symbol statistics into account. If the symbol probabilities are different, it is useful to assign short codewords to probable symbols and long ones to infrequent symbols. This produces a variable length code, such as the Morse code.
Two usual requirements to build an efficient code are:

1. The codewords generated by the coder are binary.
2. The codewords are unequivocally decodable, and the original message sequence can be reconstructed from the binary coded sequence.
Consider Figure 3.2, which shows a memoryless discrete source, whose output $x_k$ is converted by the source coder into a sequence of 0 s and $1 \mathrm{~s}$, denoted $b_k$. Assume that the source alphabet has $K$ different symbol and that the $k$-ary symbol, $x_k$, occurs with probability $p_k, k=0,1, \ldots, K-1$.

Let $l_k$ be the length, measured in bits, of the binary word assigned to symbol $x_k$. The average length of the words produced by the source coder is defined as (Haykin, 1988)
$$\bar{L}=\sum_{k=1}^K p_k l_k .$$
The parameter $\bar{L}$ represents the average number of bits per symbol from those that are used in the source coding process. Let $L_{\min }$ be the smallest possible value of $\bar{L}$. The source coding efficiency is defined as (Haykin, 1988)
$$\eta=\frac{L_{\min }}{\bar{L}} .$$
Because $\bar{L} \geq L_{\min }$, it follows that $\eta \leq 1$. The source coder becomes more efficient as $\eta$ approaches 1.

Shannon’s first theorem, or source coding theorem, provides a means to determine $L_{\min }$ (Haykin, 1988).

Given a memoryless discrete source with entropy $H(X)$, the average length of the codewords is limited by
$$\bar{L} \geq H(X) .$$
Entropy $H(X)$, therefore, represents a fundamental limit for the average number of bits per source symbol $\bar{L}$, that are needed to represent a memoryless discrete source, and this number can be as small as, but never smaller than, the source entropy $H(X)$.
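Putting the definitions together, a sketch that computes $\bar{L}$, $H(X) = L_{\min}$, and the efficiency $\eta$ for an assumed source and code (the probabilities, and the lengths of the prefix code 0, 10, 110, 111, are illustrative, not taken from the text):

```python
import math

# Assumed source statistics and the lengths of the prefix code
# {0, 10, 110, 111} (illustrative values only).
probs = [0.4, 0.3, 0.2, 0.1]
lengths = [1, 2, 3, 3]

L_bar = sum(p * l for p, l in zip(probs, lengths))  # average length
H = -sum(p * math.log2(p) for p in probs)           # entropy = L_min
eta = H / L_bar                                     # coding efficiency

print(round(L_bar, 2), round(H, 3), round(eta, 3))
# 1.9 1.846 0.972: the bound H(X) <= L-bar holds, and eta < 1
assert H <= L_bar < H + 1 and eta <= 1
```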



计算机代写|密码学与网络安全代写cryptography and network security代考|Information Theory

Information theory is a branch of probability theory with applications and connections to many areas, including communication systems, communication theory, physics, language and meaning, cybernetics, psychology, art, and complexity theory (Pierce, 1980). The basis for the theory was established by Harry Theodor Nyquist (1889-1976) (Nyquist, 1924), also known as Harry Nyquist, and Ralph Vinton Lyon Hartley (1888-1970), who invented the Hartley oscillator (Hartley, 1928). They published the first articles on the subject, in which the factors that influence the transmission of information were discussed.

The seminal article by Claude E. Shannon (1916-2001) extended the theory to include new factors, such as the effect of noise in the channel and the savings that could be obtained as a function of the statistical structure of the original message and of the characteristics of the information receiver (Shannon, 1948b). Shannon defined the fundamental problem of communication as the possibility of reproducing, exactly or approximately, at one point a message that has been selected at another point.

The main semantic aspects of the communication, initially established by Charles Sanders Peirce (1839-1914), a philosopher and creator of semiotic theory, are not relevant for the development of the Shannon information theory. What is important is to consider that a particular message is selected from a set of possible messages.

Of course, as mentioned by John Robinson Pierce (1910-2002), quoting the philosopher Alfred Jules Ayer (1910-1989), it is possible to communicate not only information but also knowledge, errors, opinions, ideas, experiences, desires, commands, emotions, and feelings. Heat and movement can be communicated, as well as force, weakness, and disease (Pierce, 1980).

Hartley found several reasons why a natural measure of information should be the logarithm:

• It is a practical metric in engineering, considering that various parameters, such as time and bandwidth, are proportional to the logarithm of the number of possibilities.
• From a mathematical point of view, it is an adequate measure because several limit operations are simply stated in terms of logarithms.
• It has an intuitive appeal, as an adequate metric, because, for instance, two binary symbols have four possibilities of occurrence.

The choice of the logarithm base defines the information unit. If base 2 is used, the unit is the bit, a contraction suggested by John W. Tukey for binary digit, which is also a play on words, since a bit can also mean a small piece of information. The information transmission rate is informally given in bit/s, but a unit has been proposed to pay tribute to the scientist who developed the concept; it is called the shannon, or [Sh] for short. This has a direct correspondence with the unit for frequency, the hertz [Hz], or cycles per second, which was adopted by the International System of Units (SI).

计算机代写|密码学与网络安全代写cryptography and network security代考|Information Measurement

The objective of this section is to establish a measure for the information content of a discrete system, using probability theory. Consider a discrete random experiment, such as the occurrence of a symbol, and its associated sample space $\Omega$, in which $X$ is a real random variable (Reza, 1961).
The random variable $X$ can assume the following values:
$$X=\left\{x_1, x_2, \ldots, x_N\right\}, \quad \text{in which } \bigcup_{k=1}^N x_k=\Omega,$$
with probabilities in the set $P$
$$P=\left\{p_1, p_2, \ldots, p_N\right\}, \quad \text{in which } \sum_{k=1}^N p_k=1 .$$
The information associated with a particular event is given by
$$I\left(x_i\right)=\log \left(\frac{1}{p_i}\right),$$
which is meaningful because the sure event has probability one and zero information, by a property of the logarithm, and the impossible event has zero probability and infinite information.
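As a small numerical sketch (base-2 logarithm, so information is measured in bits), the limiting cases and an intermediate one:

```python
import math

def information(p: float) -> float:
    """Self-information I(x) = log2(1/p) of an event with probability p."""
    return math.log2(1.0 / p)

print(information(1.0))    # 0.0: the sure event carries no information
print(information(0.5))    # 1.0: one bit for a fair binary choice
print(information(1 / 8))  # 3.0: rarer events carry more information
# information(0.0) would diverge: the impossible event has
# infinite information, as noted above.
```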
