## Coding Theory Assignment Help | MTH 4107

statistics-lab™ has your study-abroad career covered. We have built a solid reputation for Coding theory assignment help, guaranteeing reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with Coding theory assignments, and routine Coding theory coursework goes without saying.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Coding Theory Assignment Help | MULTIPLICATIVE INVERSION

Let us now consider the problem of finding the multiplicative inverse of an element in the field of residue classes mod an irreducible binary polynomial $M(x)$ of degree $m$. Given the residue class containing $r(x)$, a polynomial of degree $<m$, we wish to find the polynomial $p(x)$ of degree $<m$ such that the product satisfies
$$r(x) p(x) \equiv 1 \bmod M(x)$$
or equivalently, $r(x) p(x)+M(x) q(x)=1$ for some polynomial $q(x)$. Since $M(x)$ is irreducible, the gcd of $M$ and $r$ is 1. We may therefore apply the continued-fractions version of Euclid's algorithm as described in Sec. 2.1. Starting with $r^{(-2)} \equiv M, r^{(-1)} \equiv r, p^{(-2)} \equiv 0, p^{(-1)} \equiv 1$, $q^{(-2)}=1, q^{(-1)}=0$, we use the division algorithm to find $a^{(k)}$ and $r^{(k)}$ such that
$$r^{(k-2)}=a^{(k)} r^{(k-1)}+r^{(k)} \quad \operatorname{deg} r^{(k)}<\operatorname{deg} r^{(k-1)}$$
We then set
\begin{aligned} &q^{(k)}=a^{(k)} q^{(k-1)}+q^{(k-2)} \\ &p^{(k)}=a^{(k)} p^{(k-1)}+p^{(k-2)} \end{aligned}

The iteration is to be continued until $r^{(n)}=0$. The solution is then given by $q=q^{(n-1)}, p=p^{(n-1)}$ with $\operatorname{deg} q<\operatorname{deg} r$ and $\operatorname{deg} p<\operatorname{deg} M=m$. Since we wish to find only $p$ (and do not particularly care about $q$), we may dispense with the $q$'s entirely.

Before designing the logical circuits, let us work an example. Suppose $r(x)=x^{4}+x+1$ and $M(x)=x^{5}+x^{2}+1$. One method of computing successive $a$'s, $r$'s, and $p$'s follows.
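
The worked table itself is not reproduced here, but the iteration is easy to carry out in software. In the sketch below (an assumed encoding: binary polynomials are stored as Python integers, with bit $i$ holding the coefficient of $x^{i}$), the loop tracks only the $r$'s and $p$'s, discarding the $q$'s as suggested above.

```python
def deg(p):
    # degree of a binary polynomial stored as an int (zero polynomial -> -1)
    return p.bit_length() - 1

def poly_divmod(a, b):
    # division algorithm over GF(2): a = q*b + r with deg r < deg b
    q = 0
    while a and deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def poly_mul(a, b):
    # carry-less (mod-2) polynomial product
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a, b = a << 1, b >> 1
    return prod

def poly_inverse(r, M):
    # continued-fractions Euclid: p^(k) = a^(k) p^(k-1) + p^(k-2)
    r_prev, r_cur = M, r      # r^(-2), r^(-1)
    p_prev, p_cur = 0, 1      # p^(-2), p^(-1)
    while r_cur:
        a, rem = poly_divmod(r_prev, r_cur)
        p_prev, p_cur = p_cur, poly_mul(a, p_cur) ^ p_prev
        r_prev, r_cur = r_cur, rem
    return p_prev             # p^(n-1), reached once r^(n) = 0

# example from the text: r(x) = x^4 + x + 1, M(x) = x^5 + x^2 + 1
p = poly_inverse(0b10011, 0b100101)
assert p == 0b11101                                          # x^4+x^3+x^2+1
assert poly_divmod(poly_mul(0b10011, p), 0b100101)[1] == 1   # r*p = 1 mod M
```

For this example the iteration yields $p(x)=x^{4}+x^{3}+x^{2}+1$, and multiplying back confirms $r(x) p(x) \equiv 1 \bmod M(x)$.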

## Coding Theory Assignment Help | MULTIPLICATION

When considering the multiplication of residue classes mod $M(x)$, where $M(x)$ is an irreducible binary polynomial of degree $m$, it is helpful to introduce the symbol $\alpha$ to denote the residue class containing $x$. Then $\alpha^{2}$ represents the residue class containing $x^{2}$, and, in general, if $r(x)$ is any polynomial, then $r(\alpha)$ represents the residue class containing $r(x)$. Since $M(x) \equiv 0 \bmod M(x)$, we must have $M(\alpha)=0$. The element represented by the symbol $\alpha$ is therefore a root of the polynomial $M(x)$. Hence, we have an obvious isomorphism between the field of the $2^{m}$ residue classes $\bmod M(x)$ and the field obtained by adjoining to the binary field a root $\alpha$ of the irreducible binary polynomial $M(x)$.

Any element $Y$ in this field may be expressed uniquely as a polynomial of degree $<m$ in $\alpha, Y=\sum_{i=0}^{m-1} Y_{i} \alpha^{i}$, where the $Y_{i}$ are binary numbers. The element $Y$ may be conveniently stored in an $m$-bit register, whose components contain the binary numbers $Y_{m-1}, Y_{m-2}, \ldots, Y_{0}$.

## Coding Theory Assignment Help | MULTIPLICATION OF A REGISTER BY A WIRED CONSTANT

Let us first consider the multiplication of the field element in the $Y$ register by a constant field element $A$. We may assume that $A$ is represented by some binary polynomial in $\alpha$. Since $Y=\sum_{i=0}^{m-1} Y_{i} \alpha^{i}$, we have $Y A=\sum_{i=0}^{m-1} Y_{i}\left(A \alpha^{i}\right)$. Expressing $A \alpha^{i}$ as a polynomial of degree $<m$ in $\alpha$ gives $A \alpha^{i}=\sum_{j=0}^{m-1} A_{i, j} \alpha^{j}$, so that
\begin{aligned} Y A &=\sum_{i=0}^{m-1} Y_{i} \sum_{j=0}^{m-1} A_{i, j} \alpha^{j} \\ &=\sum_{j=0}^{m-1}\left(\sum_{i=0}^{m-1} Y_{i} A_{i, j}\right) \alpha^{j} \end{aligned}
Thus, multiplication of the field element $Y$ by the field element $A$ is equivalent to multiplication of the $m$-dimensional binary row vector $\mathbf{Y}=\left[Y_{m-1}, Y_{m-2}, \ldots, Y_{0}\right]$ by the $m \times m$ matrix whose components are $A_{i, j}$. The rows of this matrix represent the products $A \alpha^{m-1}, A \alpha^{m-2}$, $\cdots, A$.

For example, let $M(x)=x^{5}+x^{2}+1$. Suppose we wish to multiply the contents of the $Y$ register by the field element $A=\alpha^{3}+\alpha$. We first compute
\begin{aligned} A \alpha &=\alpha^{4}+\alpha^{2} \\ A \alpha^{2} &=\alpha^{5}+\alpha^{3}=\alpha^{3}+\alpha^{2}+1 \\ A \alpha^{3} &=\alpha^{4}+\alpha^{3}+\alpha \\ A \alpha^{4} &=\alpha^{5}+\alpha^{4}+\alpha^{2}=\alpha^{4}+1 \end{aligned}
The multiplication $Z=Y A$ is equivalent to
$$\left[Z_{4}, Z_{3}, Z_{2}, Z_{1}, Z_{0}\right]=\left[Y_{4}, Y_{3}, Y_{2}, Y_{1}, Y_{0}\right]\left[\begin{array}{ccccc} 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \end{array}\right]$$
This multiplication may readily be accomplished by the circuit of Fig. 2.11.
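
Fig. 2.11 is not reproduced here, but the wired-constant multiplier is easy to check numerically. In the sketch below (field elements again stored as ints, bit $j$ holding the coefficient of $\alpha^{j}$, an assumed encoding), the matrix rows are the products $A\alpha^{4}, \ldots, A$, and XOR-ing the rows selected by the bits of $Y$ agrees with direct multiplication in the field.

```python
M, m = 0b100101, 5        # M(x) = x^5 + x^2 + 1
A = 0b01010               # A = alpha^3 + alpha

def mulmod(a, b):
    # product of two field elements, reduced mod M(x)
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a, b = a << 1, b >> 1
    while prod.bit_length() > m:
        prod ^= M << (prod.bit_length() - m - 1)
    return prod

# rows of the wired matrix: A*alpha^(m-1), ..., A*alpha, A
rows = [mulmod(A, 1 << i) for i in range(m - 1, -1, -1)]
assert rows == [0b10001, 0b11010, 0b01101, 0b10100, 0b01010]  # as in the text

# Z = Y A as a vector-matrix product: XOR the rows where Y_i = 1
for Y in range(2 ** m):
    Z = 0
    for i, row in enumerate(rows):
        if (Y >> (m - 1 - i)) & 1:
            Z ^= row
    assert Z == mulmod(Y, A)
```

The computed rows reproduce the matrix above, and the vector-matrix product matches field multiplication for all $2^{5}$ register contents.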


## Finite Element Method Assignment Help

statistics-lab, as a professional service provider for international students, has for many years offered academic services to students in popular study destinations such as the United States, the United Kingdom, Canada, and Australia, including but not limited to essay writing, assignment writing, dissertation writing, report writing, group-project writing, proposal writing, paper writing, presentation writing, computer assignments, paper editing and polishing, online course completion, and exam taking. Our services cover every stage of overseas study, from high school through undergraduate and graduate level, and span finance, economics, accounting, auditing, management, and 99% of subjects worldwide. The writing team includes both professional native-English writers and graduate students from top overseas universities; every writer has strong language skills, a solid disciplinary background, and academic writing experience. We promise 100% originality, 100% professionalism, 100% punctuality, and 100% satisfaction.

## MATLAB Assignment Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over many years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most MATLAB users, toolboxes let you learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and more.

## Coding Theory Assignment Help | ELEN90030


## Coding Theory Assignment Help | MANIPULATIVE INTRODUCTION TO DOUBLE-ERROR-CORRECTING BCH CODES

We have seen that a linear code is characterized by its parity-check matrix $\mathcal{H}$. We have also seen that the syndrome of the received sequence is the sum of the columns of $\mathcal{H}$ corresponding to the error positions. Hence, a linear code is capable of correcting all single-error patterns iff all columns of $\mathcal{H}$ are different and nonzero. If $\mathcal{H}$ has $m$ rows and can correct single errors, then $n \leq 2^{m}-1$. The Hamming codes achieve this bound.

Each digit of a Hamming code may be labeled by a nonzero binary $m$-tuple, which is equal to the corresponding column of the $\mathcal{H}$ matrix. The $m$ syndrome digits then reveal directly the label of the error (if there is only one) or the binary vector sum of the labels (if there are several).

This labeling idea is so useful that we shall continue to assume that $n=2^{m}-1$ and that the columns of $\mathcal{H}$ have been labeled accordingly. Now suppose that we wish to correct all patterns of two or fewer errors. Obviously we need greater redundancy; that is, $\mathcal{H}$ must have more rows. Proceeding naïvely, we suspect that we may need about twice as many parity checks to correct two errors as we need to correct one, so we shall try to find a parity-check matrix $\mathcal{H}$ with $2^{m}-1$ columns and $2 m$ rows.
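
As a preview of where this is heading (the particular pairing used here anticipates the BCH construction developed later, so treat it as an assumption, not something established yet): if each column label is stacked with its cube, computed in $GF(2^{m})$, the resulting $2m$-row matrix does distinguish all patterns of two or fewer errors. A quick numerical check for $m=4$, with the assumed irreducible polynomial $M(x)=x^{4}+x+1$:

```python
from itertools import combinations

M, m = 0b10011, 4          # assumed: M(x) = x^4 + x + 1, alpha = x a root
n = 2 ** m - 1             # n = 15 columns

def mulmod(a, b):
    # multiplication in GF(2^m), polynomials stored as ints
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a, b = a << 1, b >> 1
    while prod.bit_length() > m:
        prod ^= M << (prod.bit_length() - m - 1)
    return prod

alpha = [1]
for _ in range(n - 1):
    alpha.append(mulmod(alpha[-1], 0b10))   # powers alpha^0 .. alpha^(n-1)

def syndrome(errors):
    # 2m syndrome bits: sum of the error labels, and sum of the cubed labels
    s1 = s3 = 0
    for i in errors:
        s1 ^= alpha[i]
        s3 ^= alpha[(3 * i) % n]
    return (s1, s3)

patterns = [()] + [(i,) for i in range(n)] + list(combinations(range(n), 2))
syndromes = [syndrome(e) for e in patterns]
assert len(set(syndromes)) == len(patterns)   # all <=2-error patterns separable
```

All $1+15+\binom{15}{2}=121$ patterns of weight at most two yield distinct syndromes, so such a matrix can indeed correct two or fewer errors.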

## Coding Theory Assignment Help | A CLOSER LOOK AT EUCLID'S ALGORITHM

In the previous section we indicated that the decoding of binary $\mathrm{BCH}$ codes requires arithmetic operations in the field of binary polynomials mod some irreducible binary polynomial $M(x)$. From both the theoretical and practical standpoints, Euclid’s algorithm plays a key role in this development.

From the theoretical standpoint, Euclid’s algorithm is used to prove that the factorization of polynomials into irreducible polynomials is unique (except for scalar multiples) over any field and that a polynomial of degree $d$ cannot have more than $d$ roots in any field. This fact is needed to prove that the error locator polynomial $\sigma(z)$ cannot have more roots than its degree. If it did, then the entire decoding procedure sketched in Sec. $1.4$ would be invalid, for several different pairs of error locations might conceivably be reciprocal roots of the same quadratic equation.

From the practical standpoint, Euclid’s algorithm is important because one of its modifications, the method of convergents of continued fractions, provides the basis for one of the most efficient methods for implementing division in finite fields. This method, apparently new, will be detailed in this section and the next.

Euclid’s algorithm is based on the observation that any divisor of $R$ and $r$ must also divide their sum and their difference. Furthermore, since any divisor of $r$ also divides any nonzero multiple of $r$, such as $a r$, then any divisor of $R$ and $r$ must also divide $R \pm a r$. Conversely, any divisor of $r$ and $R \pm a r$ must also divide $(R \pm a r) \mp a r=R$. Hence, if we let $(R, r)$ denote the greatest common divisor (hereafter called gcd) of $R$ and $r$, then we have $(R, r)=(r, R \pm a r)$. Consequently, starting from an original pair of elements $R$ and $r$, we can find a new pair of elements which have the same gcd. If the multiplier $a$ is judiciously chosen, the problem of finding the gcd of the new pair of elements will be easier than the original problem.
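
The judicious choice of $a$ is the full quotient, so that $R - ar$ is the remainder and the pair shrinks as fast as possible. A minimal integer sketch:

```python
def gcd(R, r):
    # (R, r) = (r, R - a*r): with a chosen as the quotient, R - a*r is the
    # remainder, so each step strictly shrinks the pair while preserving
    # every common divisor
    while r:
        R, r = r, R % r
    return R

assert gcd(252, 105) == 21
assert gcd(105, 252) == 21   # order of the original pair does not matter
```

The same loop works verbatim for binary polynomials once `%` is replaced by polynomial remainder, which is exactly how the inversion circuit of the previous chapter uses it.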

## Coding Theory Assignment Help | LOGICAL CIRCUITRY

The three basic elements used in logical design are the AND gate, the OR gate, and the inverter, which are represented as shown in Fig. 2.01. The AND and OR gates may have several inputs, each of which carries a binary signal having either the value 0 or the value 1. The output of the AND gate is zero unless all its inputs are ones, in which case the output of the AND gate is also one. The output of the OR gate is one unless all of its inputs are zero, in which case the output of the OR gate is also zero. The inverter, in contrast to the AND and OR gates, has only one input, and its output is the opposite of its input. If its input signal has value 0, the output has value 1; if the input signal has value 1, the output has value 0.
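
The truth-table behavior just described can be modeled directly (a sketch only; real gates are of course built from the components discussed below):

```python
def AND(*inputs):
    # zero unless all inputs are ones
    return int(all(inputs))

def OR(*inputs):
    # one unless all inputs are zero
    return int(any(inputs))

def NOT(a):
    # single input; output is the opposite of the input
    return 1 - a

assert AND(1, 1, 1) == 1 and AND(1, 0, 1) == 0
assert OR(0, 0, 0) == 0 and OR(0, 1, 0) == 1
assert NOT(0) == 1 and NOT(1) == 0

# the mod-2 adder used throughout coding circuits can be composed from them
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

assert [XOR(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]
```

The `XOR` composition illustrates the kind of gate network (two levels of AND/OR plus inverters) that the design constraints below will regulate.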

In practice, circuits having the logical properties of these three elements may be constructed out of transistors, resistors, diodes, vacuum tubes, and/or other components. Depending on the detailed properties of these components, the overall design will be subject to certain restrictions, called design constraints. For example, there will be maximum numbers of inputs to AND and OR gates and a maximum number of elements through which signals can propagate successively without additional amplification. Typically, every inverter is equipped with an amplifier, but AND and OR gates are not. Design constraints then specify how many AND and/or OR gates may be successively encountered between inverters and in what orders. Since the design constraints depend heavily on the properties of the components, we shall not consider design constraints much further here. If some of our circuits do not satisfy particular design constraints, it may be necessary to insert additional amplifiers (or pairs of successive inverters) into the circuits at certain crucial points.

† Starred sections of this book may be skimmed or omitted on first reading.


## Coding Theory Assignment Help | COMP2610


## Coding Theory Assignment Help | REPETITION CODES AND SINGLE-PARITY-CHECK CODES

Suppose that we wish to transmit a sequence of binary digits across a noisy channel. If we send a one, a one will probably be received; if we send a zero, a zero will probably be received. Occasionally, however, the channel noise will cause a transmitted one to be mistakenly interpreted as a zero or a transmitted zero to be mistakenly interpreted as a one. Although we are unable to prevent the channel from causing such errors, we can reduce their undesirable effects with the use of coding. The basic idea is simple. We take a set of $k$ message digits which we wish to transmit, annex to them $r$ check digits, and transmit the entire block of $n=k+r$ channel digits. Assuming that the channel noise changes sufficiently few of these $n$ transmitted channel digits, the $r$ check digits may provide the receiver with sufficient information to enable him to detect and correct the channel errors.

Given any particular sequence of $k$ message digits, the transmitter must have some rule for selecting the $r$ check digits. This is called the encoding problem. Any particular sequence of $n$ digits which the encoder might transmit is called a codeword. Although there are $2^{n}$ different binary sequences of length $n$, only $2^{k}$ of these sequences are codewords, because the $r$ check digits within any codeword are completely determined by the $k$ message digits. The set consisting of these $2^{k}$ codewords of length $n$ is called the code.

No matter which codeword is transmitted, any of the $2^{n}$ possible binary sequences of length $n$ may be received if the channel is sufficiently noisy. Given the $n$ received digits, the decoder must attempt to decide which of the $2^{k}$ possible codewords was transmitted.
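
The two simplest choices of check digits, the ones named in the heading, can be sketched as follows (helper names are my own):

```python
def encode_repetition(bit, n):
    # repetition code: k = 1 message digit, r = n - 1 identical check digits
    return [bit] * n

def decode_repetition(word):
    # majority vote corrects up to (n - 1) // 2 channel errors
    return int(sum(word) > len(word) / 2)

assert decode_repetition([1, 0, 1]) == 1     # one channel error corrected

def encode_parity(msg):
    # single-parity-check code: r = 1 check digit, the mod-2 sum of the message
    return msg + [sum(msg) % 2]

def detects_error(word):
    # any single error makes the overall mod-2 sum odd
    return sum(word) % 2 == 1

cw = encode_parity([1, 0, 1])
assert not detects_error(cw)
cw[2] ^= 1                                   # one channel error
assert detects_error(cw)
```

The repetition code trades rate $1/n$ for correction; the single-parity-check code keeps rate $k/(k+1)$ but can only detect, not correct, a single error.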

## Coding Theory Assignment Help | LINEAR CODES

In a code containing several message digits and several check digits, each check digit must be some function of the message digits. In the simple case of single-parity-check codes, the single parity check was chosen to be the binary sum of all the message digits. If there are several parity checks, it is wise to set each check digit equal to the binary sum of some subset of the message digits. For example, we construct a binary code of block length $n=6$, having $k=3$ message digits and $r=3$ check digits. We shall label the three message digits $C_{1}, C_{2}$, and $C_{3}$ and the three check digits $C_{4}, C_{5}$, and $C_{6}$. We choose these check digits from the message digits according to the following rules:
$C_{4}=C_{1}+C_{2}$
$C_{5}=C_{1}+C_{3}$
$C_{6}=C_{2}+C_{3}$
or, in matrix notation,
$$\left[\begin{array}{l} C_{4} \\ C_{5} \\ C_{6} \end{array}\right]=\left[\begin{array}{lll} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{array}\right]\left[\begin{array}{l} C_{1} \\ C_{2} \\ C_{3} \end{array}\right]$$
The full codeword consists of the digits $C_{1}, C_{2}, C_{3}, C_{4}, C_{5}, C_{6}$. Every codeword must satisfy the parity-check equations or, in matrix notation,
$$\left[\begin{array}{llllll} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right] \mathbf{C}^{t}=\left[\begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right]$$
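
A direct check that the construction is consistent, i.e. that every codeword of this $(6,3)$ code satisfies the parity-check equations:

```python
from itertools import product

H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]

def encode(c1, c2, c3):
    # check digits per the rules: C4 = C1+C2, C5 = C1+C3, C6 = C2+C3 (mod 2)
    return [c1, c2, c3, (c1 + c2) % 2, (c1 + c3) % 2, (c2 + c3) % 2]

def parity_checks(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# all 2^3 = 8 codewords (out of 2^6 possible sequences) satisfy H C^t = 0
codewords = [encode(*bits) for bits in product([0, 1], repeat=3)]
assert len(codewords) == 8
assert all(parity_checks(c) == [0, 0, 0] for c in codewords)
```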

## Coding Theory Assignment Help | HAMMING CODES

At extremely low rates or extremely high rates, it is relatively easy to find good linear codes. In order to interpolate between these two extremes, we might adopt either of two approaches: (1) start with the low-rate codes and gradually increase $k$ by adding more and more codewords, attempting to maintain a large error-correction capability, or (2) start with good high-rate codes and gradually increase the error-correction capability, attempting to add only a few additional parity-check constraints.

Historically, the second approach has proved more successful.
† All of the perfect single-error-correcting binary group codes were first discovered by Hamming. The Hamming code of length 7 was first published as an example in the paper by Shannon (1948). The generalization of this example was mentioned by Golay (1949) prior to the appearance of the paper by Hamming (1950). The Hamming codes had been anticipated by Fisher (1942) in a different context.

This is the approach we shall follow. We begin by constructing certain codes to correct single errors, the Hamming codes.

The syndrome of a linear code is related to the error pattern by the equation $\mathbf{s}^{t}=\mathcal{H} \mathbf{E}^{t}$. In general, the right side of this equation may be written as $E_{1}$ times the first column of the $\mathcal{H}$ matrix, plus $E_{2}$ times the second column, plus $E_{3}$ times the third column, and so on. For example, if
$$\mathbf{s}^{t}=\left[\begin{array}{cccccc} 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]\left[E_{1}, E_{2}, E_{3}, E_{4}, E_{5}, E_{6}\right]^{t}$$
then
$$\left[\begin{array}{l} s_{1} \\ s_{2} \\ s_{3} \end{array}\right]=E_{1}\left[\begin{array}{l} 1 \\ 1 \\ 0 \end{array}\right]+E_{2}\left[\begin{array}{l} 1 \\ 0 \\ 1 \end{array}\right]+E_{3}\left[\begin{array}{l} 0 \\ 1 \\ 1 \end{array}\right]+E_{4}\left[\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right]+E_{5}\left[\begin{array}{l} 0 \\ 1 \\ 0 \end{array}\right]+E_{6}\left[\begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right]$$
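
The reading of the syndrome as a mod-2 sum of columns of the parity-check matrix can be checked directly:

```python
H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]

def syndrome(E):
    # s^t = H E^t over GF(2)
    return [sum(h * e for h, e in zip(row, E)) % 2 for row in H]

def column(j):
    return [row[j] for row in H]

# a single error in position 4 reproduces the 4th column of H ...
assert syndrome([0, 0, 0, 1, 0, 0]) == column(3) == [1, 0, 0]

# ... while two errors give the mod-2 sum of the corresponding columns;
# note that E1 + E5 yields the same syndrome as E4 alone, so for this
# particular code double errors can masquerade as single ones
assert syndrome([1, 0, 0, 0, 1, 0]) == [1, 0, 0]
```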



## Information Theory Assignment Help | ECE4042


## Information Theory Assignment Help | Definition of entropy of a continuous random variable

Up to now we have assumed that a random variable $\xi$, with entropy $H_{\xi}$, can take values from some discrete space consisting of either a finite or a countable number of elements, for instance, messages, symbols, etc. However, continuous variables are also widespread in engineering, i.e. variables (scalar or vector), which can take values from a continuous space $X$, most often from the space of real numbers. Such a random variable $\xi$ is described by the probability density function $p(\xi)$ that assigns the probability
$$\Delta P=\int_{\xi \in \Delta X} p(\xi) d \xi \approx p(A) \Delta V \quad(A \in \Delta X)$$
of $\xi$ appearing in region $\Delta X$ of the specified space $X$ with volume $\Delta V$ (here $d \xi=d V$ is the volume differential).

How can we define entropy $H_{\xi}$ for such a random variable? One of many possible formal ways is the following: In the formula
$$H_{\xi}=-\sum_{\xi} P(\xi) \ln P(\xi)=-\mathbb{E}[\ln P(\xi)]$$
appropriate for a discrete variable we formally replace probabilities $P(\xi)$ in the argument of the logarithm by the probability density and, thereby, consider the expression
$$H_{\xi}=-\mathbb{E}[\ln p(\xi)]=-\int_{X} p(\xi) \ln p(\xi) d \xi .$$
This way of defining entropy is not well justified. It remains unclear how to define entropy in the combined case, when a continuous distribution in a continuous space coexists with concentrations of probability at single points, i.e. the probability density contains delta-shaped singularities. Entropy (1.6.2) also suffers from the drawback that it is not invariant, i.e. it changes under a non-degenerate transformation of variables $\eta=f(\xi)$ in contrast to entropy (1.6.1), which remains invariant under such transformations.
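
The non-invariance is easy to exhibit numerically. Under the change of variable $\eta=2 \xi$, a uniform density on $[0,1]$ becomes a uniform density on $[0,2]$, and entropy (1.6.2) shifts by $\ln 2$. A sketch using midpoint integration:

```python
import math

def diff_entropy(p, a, b, n=200000):
    # midpoint-rule approximation of -integral of p(x) ln p(x) over [a, b]
    h, dx = 0.0, (b - a) / n
    for i in range(n):
        px = p(a + (i + 0.5) * dx)
        if px > 0:
            h -= px * math.log(px) * dx
    return h

h_xi  = diff_entropy(lambda x: 1.0, 0.0, 1.0)   # xi uniform on [0, 1]
h_eta = diff_entropy(lambda x: 0.5, 0.0, 2.0)   # eta = 2*xi, uniform on [0, 2]

assert abs(h_xi) < 1e-9                          # entropy 0
assert abs(h_eta - math.log(2)) < 1e-9           # shifted by ln 2
```

A discrete entropy (1.6.1), by contrast, would be unchanged: relabeling outcomes does not alter their probabilities.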

## Information Theory Assignment Help | Properties of entropy in the generalized version

Entropy (1.6.13), (1.6.16) defined in the previous section possesses a set of properties, which are analogous to the properties of an entropy of a discrete random variable considered earlier. Such an analogy is quite natural if we take into account the interpretation of entropy (1.6.13) (provided in Section 1.6) as an asymptotic case (for large $N$ ) of entropy (1.6.1) of a discrete random variable.

The non-negativity property of entropy, which was discussed in Theorem $1.1$, is not always satisfied for entropy (1.6.13), (1.6.16) but holds true for sufficiently large $N$. The constraint
$$H_{\xi}^{P / Q} \leqslant \ln N$$
results in non-negativity of entropy $H_{\xi}$.
Now we move on to Theorem $1.2$, which considered the maximum value of entropy. In the case of entropy (1.6.13), when comparing different distributions $P$ we need to keep measure $v$ fixed. As it was mentioned, quantity (1.6.17) is non-negative and, thus, (1.6.16) entails the inequality
$$H_{\xi} \leqslant \ln N .$$
At the same time, if we suppose $P=Q$, then, evidently, we will have
$$H_{\xi}=\ln N .$$
This proves the following statement that is an analog of Theorem $1.2$.

## Information Theory Assignment Help | Encoding of discrete information

The definition of the amount of information, given in Chapter 1, is justified when we deal with a transformation of information from one kind into another, i.e. when considering encoding of information. It is essential that the law of conservation of information amount holds under such a transformation. It is very useful to draw an analogy with the law of conservation of energy. The latter is the main argument for introducing the notion of energy. Of course, the law of conservation of information is more complex than the law of conservation of energy in two respects. The law of conservation of energy establishes an exact equality of energies, when one type of energy is transformed into another. However, in transforming information we have a more complex relation, namely ‘not greater’ $(\leqslant)$, i.e. the amount of information cannot increase. The equality sign corresponds to optimal encoding. Thus, when formulating the law of conservation of information, we have to point out that there possibly exists such an encoding, for which the equality of the amounts of information occurs.

The second complication is that the equality is not exact. It is approximate, asymptotic, valid for complex (large) messages and for composite random variables. The larger a system of messages is, the more exact such a relation becomes. The exact equality sign takes place only in the limiting case. In this respect, there is an analogy with the laws of statistical thermodynamics, which are valid for large thermodynamic systems consisting of a large number (of the order of the Avogadro number) of molecules.

When conducting encoding, we assume that a long sequence of messages $\xi_{1}, \xi_{2}, \ldots$ is given together with their probabilities, i.e. a sequence of random variables. Therefore, the amount of information (entropy $H$) corresponding to this sequence can be calculated. This information can be recorded and transmitted by different realizations of the sequence. If $M$ is the number of such realizations, then the law of conservation of information can be expressed by the equality $H=\ln M$, which is complicated by the two above-mentioned factors (i.e., actually $H \leqslant \ln M$).

Two different approaches may be used for solving the encoding problem. One can perform encoding of an infinite sequence of messages, i.e. online (or ‘sliding’) encoding. The inverse procedure, i.e. decoding, will be performed analogously.



## Information Theory Assignment Help | ELEN90030


## Information Theory Assignment Help | Conditional entropy. Hierarchical additivity

Let us generalize formulae (1.2.1), (1.2.3) to the case of conditional probabilities. Let $\xi_{1}, \ldots, \xi_{n}$ be random variables described by the joint distribution $P\left(\xi_{1}, \ldots, \xi_{n}\right)$. The conditional probabilities
$$P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)=\frac{P\left(\xi_{1}, \ldots, \xi_{n}\right)}{P\left(\xi_{1}, \ldots, \xi_{k-1}\right)} \quad(k \leqslant n)$$
are associated with the random conditional entropy
$$H\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)=-\ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)$$
Let us introduce a special notation for the result of averaging (1.3.1) over $\xi_{k}, \ldots, \xi_{n}$ :
\begin{aligned} H_{\xi_{k} \ldots \xi_{n}}\left(\mid \xi_{1}, \ldots, \xi_{k-1}\right)=-\sum_{\xi_{k} \ldots \xi_{n}} & P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right) \times \\ & \times \ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right) \end{aligned}

and also for the result of total averaging:
\begin{aligned} H_{\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}} &=\mathbb{E}\left[H\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)\right] \\ &=-\sum_{\xi_{1} \ldots \xi_{n}} P\left(\xi_{1}, \ldots, \xi_{n}\right) \ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right) \end{aligned}
If, in addition, we vary $k$ and $n$, then we will form a large number of different entropies, conditional and non-conditional, random and non-random. They are related by identities that will be considered below.
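
One of these identities, hierarchical additivity in its simplest form $H_{\xi_{1} \xi_{2}}=H_{\xi_{1}}+H_{\xi_{2} \mid \xi_{1}}$, can be verified numerically on an arbitrary (made-up) joint distribution:

```python
import math

# an arbitrary joint distribution P(xi1, xi2) of two binary variables
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

P1 = {x: P[(x, 0)] + P[(x, 1)] for x in (0, 1)}          # marginal of xi1

# H_{xi2 | xi1} = E[-ln P(xi2 | xi1)], averaged over the joint distribution
H_cond = -sum(p * math.log(p / P1[x]) for (x, y), p in P.items())

# hierarchical additivity: H(xi1, xi2) = H(xi1) + H(xi2 | xi1)
assert abs(entropy(P) - (entropy(P1) + H_cond)) < 1e-12
```

The identity is exact (up to floating-point roundoff) for any joint distribution, not just this example.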

Before we formulate the main hierarchical equality (1.3.4), we show how to introduce a hierarchical set of random variables $\xi_{1}, \ldots, \xi_{n}$, even if there was just one random variable $\xi$ initially.

Let $\xi$ take one of $M$ values with probabilities $P(\xi)$. The choice of one realization will be made in several stages. At the first stage, we indicate which subset (from a full ensemble of non-overlapping subsets $E_{1}, \ldots, E_{m_{1}}$ ) the realization belongs to. Let $\xi_{1}$ be the index of such a subset. At the second stage, each subset is partitioned into smaller subsets $E_{\xi_{1} \xi_{2}}$. The second random variable $\xi_{2}$ points to which smaller subset the realization of the random variable belongs to. In turn, those smaller subsets are further partitioned until we obtain subsets consisting of a single element. Apparently, the number of nontrivial partitioning stages $n$ cannot exceed $M-1$. We can juxtapose a fixed partitioning scheme with a ‘decision tree’ depicted on Figure 1.1. Further considerations will be associated with a particular selected ‘tree’.

## Information Theory Assignment Help | Asymptotic equivalence of non-equiprobable and equiprobable outcomes

The idea that the general case of non-equiprobable outcomes can be asymptotically reduced to the case of equiprobable outcomes is fundamental for information theory in the absence of noise. This idea belongs to Ludwig Boltzmann who derived formula (1.2.3) for entropy. Claude Shannon revived this idea and broadly used it for derivation of new results.

In considering this question here, we shall not try to reach generality, since these results form a particular case of the more general results of Section 1.5. Consider the set of independent realizations $\eta=\left(\xi_{1}, \ldots, \xi_{n}\right)$ of a random variable $\xi=\xi_{j}$, which assumes one of two values, 1 or 0, with probabilities $P[\xi=1]=p<1 / 2$ and $P[\xi=0]=1-p=q$. Evidently, the number of such different combinations (realizations) is equal to $2^{n}$. Let realization $\eta_{n_{1}}$ contain $n_{1}$ ones and $n-n_{1}=n_{0}$ zeros. Then its probability is given by
$$P\left(\eta_{n_{1}}\right)=p^{n_{1}} q^{n-n_{1}}$$
Of course, these probabilities are different for different $n_{1}$. The ratio $P\left(\eta_{0}\right) / P\left(\eta_{n}\right)=(q / p)^{n}$ of the largest probability to the smallest one is large and grows rapidly with $n$. What equiprobability, then, can we talk about? The point is that, by the Law of Large Numbers, the number of ones $n_{1}=\xi_{1}+\cdots+\xi_{n}$ tends to take values close to its mean
$$\mathbb{E}\left[n_{1}\right]=\sum_{j=1}^{n} \mathbb{E}\left[\xi_{j}\right]=n \mathbb{E}\left[\xi_{j}\right]=n p$$
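This concentration can be checked numerically. The sketch below (my own illustration; the values $p=0.3$ and $n=1000$ are arbitrary) computes, exactly from the binomial distribution, the probability that $n_1$ falls near $np$, alongside the astronomically large ratio $(q/p)^n$:

```python
import math

# Exact binomial computation (no simulation) of how n1 concentrates
# around its mean n*p.
p, q, n = 0.3, 0.7, 1000
mean = n * p

def binom_pmf(n, k, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability that n1 falls within 5% of n*p.
delta = 0.05 * n
prob_near_mean = sum(binom_pmf(n, k, p)
                     for k in range(int(mean - delta), int(mean + delta) + 1))

# The extreme ratio P(eta_0)/P(eta_n) = (q/p)^n is huge; compare via
# logarithms to avoid floating-point overflow.
log_ratio = n * math.log(q / p)

# Yet nearly all probability mass sits on realizations with n1 close to n*p.
assert prob_near_mean > 0.99
```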

## 数学代写|信息论代写information theory代考|Asymptotic equiprobability and entropic stability

1. The ideas of the preceding section concerning the asymptotic equivalence of non-equiprobable and equiprobable outcomes can be extended to essentially more general cases of random sequences and processes. It is not necessary for the random variables $\xi_{j}$ forming the sequence $\eta^{n}=\left(\xi_{1}, \ldots, \xi_{n}\right)$ to take only one of two values or to have the same distribution law $P\left(\xi_{j}\right)$. Nor is it necessary for the $\xi_{j}$ to be statistically independent, or even for $\eta^{n}$ to be a sequence $\left(\xi_{1}, \ldots, \xi_{n}\right)$ at all. What, then, is really necessary for asymptotic equivalence?

In order to state the property of asymptotic equivalence of non-equiprobable and equiprobable outcomes in general terms, we should use the notion of entropic stability of a family of random variables.

A family of random variables $\left\{\eta^{n}\right\}$ is entropically stable if the ratio $H\left(\eta^{n}\right) / H_{\eta^{n}}$ converges in probability to one as $n \rightarrow \infty$. This means that for any $\varepsilon>0$, $\eta>0$ there exists $N(\varepsilon, \eta)$ such that the inequality
$$P\left\{\left|H\left(\eta^{n}\right) / H_{\eta^{n}}-1\right| \geqslant \varepsilon\right\}<\eta$$
is satisfied for every $n \geqslant N(\varepsilon, \eta)$.
The above definition implies that $0<H_{\eta^{n}}<\infty$ and that $H_{\eta^{n}}$ does not decrease with $n$. Usually $H_{\eta^{n}} \rightarrow \infty$.
Asymptotic equiprobability can be expressed in terms of entropic stability in the form of the following general theorem.

Theorem 1.9. If a family of random variables $\left\{\eta^{n}\right\}$ is entropically stable, then the set of realizations of each random variable can be partitioned into two subsets $A_{n}$ and $B_{n}$ in such a way that

1. The total probability of realizations from subset $A_{n}$ vanishes:
$$P\left(A_{n}\right) \rightarrow 0 \quad \text { as } \quad n \rightarrow \infty$$
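Theorem-1.9-style behavior can be illustrated for the Bernoulli case of the previous section. The sketch below (my own construction, with arbitrary $p$ and $\varepsilon$) groups realizations by their random entropy and sums the probability of the atypical set $A_n$:

```python
import math

# Sketch for i.i.d. Bernoulli(p) sequences: realizations whose random
# entropy ratio H(eta)/H_eta deviates from 1 by at least eps form A_n.
p, q = 0.3, 0.7
h = -p * math.log(p) - q * math.log(q)   # entropy per symbol; H_eta = n*h

def log_binom_pmf(n, k, p):
    # Work in logs so large n does not overflow.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def prob_atypical(n, eps):
    """Total probability P(A_n) of the eps-atypical realizations."""
    total = 0.0
    for k in range(n + 1):               # k = number of ones
        H_realization = -(k * math.log(p) + (n - k) * math.log(q))
        if abs(H_realization / (n * h) - 1) >= eps:
            total += math.exp(log_binom_pmf(n, k, p))
    return total

# P(A_n) -> 0 as n grows, so the remaining set B_n is nearly equiprobable.
assert prob_atypical(2000, 0.1) < prob_atypical(200, 0.1)
```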

## 数学代写|信息论代写information theory代考|Conditional entropy. Hierarchical additivity

$$P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)=\frac{P\left(\xi_{1}, \ldots, \xi_{n}\right)}{P\left(\xi_{1}, \ldots, \xi_{k-1}\right)} \quad(k \leqslant n)$$

$$H\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)=-\ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)$$

$$H_{\xi_{k}, \ldots, \xi_{n}}\left(\,\cdot \mid \xi_{1}, \ldots, \xi_{k-1}\right)=-\sum_{\xi_{k}, \ldots, \xi_{n}} P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right) \ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)$$

$$H_{\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}}=\mathbb{E}\left[H\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)\right]=-\sum_{\xi_{1} \ldots \xi_{n}} P\left(\xi_{1}, \ldots, \xi_{n}\right) \ln P\left(\xi_{k}, \ldots, \xi_{n} \mid \xi_{1}, \ldots, \xi_{k-1}\right)$$
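For a concrete check of this hierarchical additivity with $n=2$, one can verify $H_{\xi_{1}, \xi_{2}}=H_{\xi_{1}}+H_{\xi_{2} \mid \xi_{1}}$ on an arbitrary joint distribution (a minimal sketch of my own, not from the text):

```python
import math

# Hierarchical additivity for n = 2: the full entropy splits into the
# entropy of xi1 plus the averaged conditional entropy of xi2 given xi1.
P = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.1}   # joint P(xi1, xi2)

# Marginal distribution of xi1.
P1 = {a: sum(p for (x, _), p in P.items() if x == a) for a in (0, 1)}

H_joint = -sum(p * math.log(p) for p in P.values())
H1 = -sum(p * math.log(p) for p in P1.values())
# Averaged conditional entropy: -sum P(xi1, xi2) ln P(xi2 | xi1).
H2_given_1 = -sum(p * math.log(p / P1[a]) for (a, _), p in P.items())

assert abs(H_joint - (H1 + H2_given_1)) < 1e-12
```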

## 数学代写|信息论代写information theory代考|Asymptotic equivalence of non-equiprobable

$P[\xi=1]=p<1 / 2 ; P[\xi=0]=1-p=q$. 显然，这种不同组合（实现）的数量等于 $2^{n}$。设实现 $\eta_{n_{1}}$ 包含 $n_{1}$ 个 1 和 $n-n_{1}=n_{0}$ 个 0，那么它的概率由下式给出
$$P\left(\eta_{n_{1}}\right)=p^{n_{1}} q^{n-n_{1}}$$

$$\mathbb{E}\left[n_{1}\right]=\sum_{j=1}^{n} \mathbb{E}\left[\xi_{j}\right]=n \mathbb{E}\left[\xi_{j}\right]=n p$$

## 数学代写|信息论代写information theory代考|Asymptotic equiprobability and entropic stability

1. 上一节关于非等概率结果与等概率结果渐近等价的思想，可以推广到本质上更一般的随机序列和过程的情形。不要求构成序列 $\eta^{n}=\left(\xi_{1}, \ldots, \xi_{n}\right)$ 的随机变量 $\xi_{j}$ 只取两个值之一并具有相同的分布律 $P\left(\xi_{j}\right)$，也不要求 $\xi_{j}$ 在统计上独立，甚至不要求 $\eta^{n}$ 是序列 $\left(\xi_{1}, \ldots, \xi_{n}\right)$。那么，渐近等价真正需要的是什么？
为了从一般意义上表述非等概率结果与等概率结果的渐近等价性质，我们需要使用随机变量族的熵稳定性的概念。
随机变量族 $\left\{\eta^{n}\right\}$ 称为熵稳定的，如果比率 $H\left(\eta^{n}\right) / H_{\eta^{n}}$ 当 $n \rightarrow \infty$ 时依概率收敛于 1。这意味着对任意 $\varepsilon>0, \eta>0$，存在 $N(\varepsilon, \eta)$，使得不等式
$$P\left\{\left|H\left(\eta^{n}\right) / H_{\eta^{n}}-1\right| \geqslant \varepsilon\right\}<\eta$$
对每一个 $n \geqslant N(\varepsilon, \eta)$ 都成立。
上述定义意味着 $0<H_{\eta^{n}}<\infty$，且 $H_{\eta^{n}}$ 不随 $n$ 减小。通常 $H_{\eta^{n}} \rightarrow \infty$。
渐近等概率性可以用熵稳定性表述为如下的一般定理。
定理 1.9。如果随机变量族 $\left\{\eta^{n}\right\}$ 是熵稳定的，那么每个随机变量的实现集合都可以划分为两个子集 $A_{n}$ 和 $B_{n}$，使得
1. 来自子集 $A_{n}$ 的实现的总概率趋于零：
$$P\left(A_{n}\right) \rightarrow 0 \quad \text { as } \quad n \rightarrow \infty$$

## 有限元方法代写

tatistics-lab作为专业的留学生服务机构，多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务，包括但不限于Essay代写，Assignment代写，Dissertation代写，Report代写，小组作业代写，Proposal代写，Paper代写，Presentation代写，计算机作业代写，论文修改和润色，网课代做，exam代考等等。写作范围涵盖高中，本科，研究生等海外留学全阶段，辐射金融，经济学，会计学，审计学，管理学等全球99%专业科目。写作团队既有专业英语母语作者，也有海外名校硕博留学生，每位写作老师都拥有过硬的语言能力，专业的学科背景和学术写作经验。我们承诺100%原创，100%专业，100%准时，100%满意。

## MATLAB代写

MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中，其中问题和解决方案以熟悉的数学符号表示。典型用途包括：数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发，包括图形用户界面构建MATLAB 是一个交互式系统，其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题，尤其是那些具有矩阵和向量公式的问题，而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问，这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展，得到了许多用户的投入。在大学环境中，它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域，MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要，工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数（M 文件）的综合集合，可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。

## 数学代写|信息论代写information theory代考|COMP2610

statistics-lab™ 为您的留学生涯保驾护航 在代写信息论information theory方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写信息论information theory代写方面经验极为丰富，各种代写信息论information theory相关的作业也就用不着说。

• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础

## 数学代写|信息论代写information theory代考|Definition of information and entropy in the absence of noise

In modern science, engineering and public life, a big role is played by information and operations associated with it: information reception, information transmission, information processing, storing information and so on. The significance of information has seemingly outgrown the significance of the other important factor, which used to play a dominant role in the previous century, namely, energy.

In the future, in view of a complexification of science, engineering, economics and other fields, the significance of correct control in these areas will grow and, therefore, the importance of information will increase as well.

What is information? Is a theory of information possible? Are there any general laws for information independent of its content that can be quite diverse? Answers to these questions are far from obvious. Information appears to be a more difficult concept to formalize than, say, energy, which has a certain, long established place in physics.

There are two sides of information: quantitative and qualitative. Sometimes it is the total amount of information that is important, while other times it is its quality, its specific content. Besides, a transformation of information from one format into another is technically a more difficult problem than, say, transformation of energy from one form into another. All this complicates the development of information theory and its usage. It is quite possible that the general information theory will not bring any benefit to some practical problems, and they have to be tackled by independent engineering methods.

Nevertheless, general information theory exists, and so do standard situations and problems in which the laws of general information theory play the main role. Therefore, information theory is important from a practical standpoint, as well as in fundamental science, philosophy and expanding the horizons of a researcher.

From this introduction one can gauge how difficult it was to discover the laws of information theory. In this regard, the most important milestone was the work of Claude Shannon $[44,45]$ published in 1948-1949 (the respective English originals are $[38,39]$ ). His formulation of the problem and results were both perceived as a surprise. However, on closer investigation one can see that the new theory extends and develops former ideas, specifically, the ideas of statistical thermodynamics due to Boltzmann. The deep mathematical similarities between these two directions are not accidental. It is evidenced in the use of the same formulae (for instance, for entropy of a discrete random variable). Besides that, a logarithmic measure for the amount of information, which is fundamental in Shannon’s theory, was proposed for problems of communication as early as 1928 in the work of R. Hartley [19] (the English original is [18]).

In the present chapter, we introduce the logarithmic measure of the amount of information and state a number of important properties of information, which follow from that measure, such as the additivity property.

The notion of the amount of information is closely related to the notion of entropy, which is a measure of uncertainty. Acquisition of information is accompanied by a decrease in uncertainty, so that the amount of information can be measured by the amount of uncertainty or entropy that has disappeared.

In the case of a discrete message, i.e. a discrete random variable, entropy is defined by the Boltzmann formula
$$H_{\xi}=-\sum_{\xi} P(\xi) \ln P(\xi),$$
where $\xi$ is a random variable, and $P(\xi)$ is its probability distribution.
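A minimal computational sketch of this formula (natural logarithms, i.e. nats; the helper name `entropy` is my own):

```python
import math

# Entropy of a discrete random variable from its probability distribution,
# per the Boltzmann formula H = -sum P ln P (terms with P = 0 contribute 0).
def entropy(dist):
    return -sum(p * math.log(p) for p in dist if p > 0)

# The equiprobable case reduces to the Hartley measure ln M:
M = 6
assert abs(entropy([1 / M] * M) - math.log(M)) < 1e-12
```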

## 数学代写|信息论代写information theory代考|Definition of entropy in the case of equiprobable outcomes

Suppose we have $M$ equiprobable outcomes of an experiment. For example, when we roll a standard die, $M=6$. Of course, we cannot always perform the formalization of conditions so easily and accurately as in the case of a die. We assume though that the formalization has been performed and, indeed, one of $M$ outcomes is realized, and they are equivalent in probabilistic terms. Then there is a priori uncertainty directly connected with $M$ (i.e. the greater the $M$ is, the higher the uncertainty is). The quantity measuring the above uncertainty is called entropy and is denoted by $H:$
$$H=f(M),$$
where $f(\cdot)$ is some increasing non-negative function defined at least for natural numbers.

When rolling a die and observing the outcome number, we obtain information whose amount is denoted by $I$. After that (i.e. a posteriori) there is no uncertainty left: the a posteriori number of outcomes is $M=1$ and we must have $H_{\mathrm{ps}}=f(1)=0$. It is natural to measure the amount of information received by the value of the disappeared uncertainty:
$$I=H_{\mathrm{pr}}-H_{\mathrm{ps}} .$$
Here, the subscript ‘pr’ means ‘a priori’, whereas ‘ps’ means ‘a posteriori’.
We see that the amount of received information $I$ coincides with the initial entropy. In other cases (in particular, for formula (1.2.3) given below) a message having entropy $H$ can also transmit the amount of information $I$ equal to $H$.

In order to determine the form of the function $f(\cdot)$ in (1.1.1) we employ the very natural additivity principle. In the case of a die it reads: the entropy of two throws of a die is twice as large as the entropy of one throw, the entropy of three throws is three times as large as the entropy of one throw, etc. Applying the additivity principle to other cases means that the entropy of several independent systems is equal to the sum of the entropies of the individual systems. However, the number $M$ of outcomes for a complex system is equal to the product of the numbers $m$ of outcomes for each of the 'simple' (relative to the total system) subsystems. For two throws of a die, the number of various pairs $\left(\xi_{1}, \xi_{2}\right)$ (where $\xi_{1}$ and $\xi_{2}$ both take one out of six values) equals $36=6^{2}$. Generally, for $n$ throws the number of equivalent outcomes is $6^{n}$. Applying formula (1.1.1) to this number, we obtain entropy $f\left(6^{n}\right)$. According to the additivity principle, we find that
$$f\left(6^{n}\right)=n f(6)$$
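From this functional equation together with monotonicity, the logarithmic form of $f$ follows by a standard squeezing argument (a sketch; the details are omitted in the text):

```latex
% For integers M, N > 1 and any n, pick k = k(n) with M^k \le N^n < M^{k+1}.
% Monotonicity of f and additivity f(ab) = f(a) + f(b) then give
k\, f(M) \le n\, f(N) < (k+1)\, f(M),
\qquad
k \ln M \le n \ln N < (k+1) \ln M ,
% so both f(N)/f(M) and (\ln N)/(\ln M) lie in [k/n, (k+1)/n], whence
\left| \frac{f(N)}{f(M)} - \frac{\ln N}{\ln M} \right| \le \frac{1}{n}
\;\xrightarrow[n\to\infty]{}\; 0
\quad\Longrightarrow\quad
f(M) = K \ln M \quad (K > 0 \text{ a constant}).
```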

## 数学代写|信息论代写information theory代考|Entropy and its properties in the case of non-equiprobable outcomes

1. Suppose now the probabilities of different outcomes are unequal. If, as earlier, the number of outcomes equals $M$, then we can consider a random variable $\xi$ which takes one of $M$ values. Taking the index of the corresponding outcome as $\xi$, we obtain that those values are nothing else but $1, \ldots, M$. The probabilities $P(\xi)$ of those values are non-negative and satisfy the normalization constraint $\sum_{\xi} P(\xi)=1$.

If we formally apply equality (1.1.8) to this case, then each $\xi$ should have its own entropy
$$H(\xi)=-\ln P(\xi) .$$
Thus, we attribute a certain value of entropy to each realization of the variable $\xi$. Since $\xi$ is a random variable, we can also regard this entropy as a random variable.
As in Section 1.1, the a posteriori entropy, which remains after the realization of $\xi$ becomes known, is equal to zero. That is why the information we obtain once the realization is known is numerically equal to the initial entropy
$$I(\xi)=H(\xi)=-\ln P(\xi)$$
Similar to entropy $H(\xi)$, information $I$ depends on the actual realization (on the value of $\xi$ ), i.e., it is a random variable. One can see from the latter formula that information and entropy are both large when a posteriori probability of the given realization is small and vice versa. This observation is quite consistent with intuitive ideas.

Example 1.1. Suppose we would like to know whether a certain student has passed an exam or not. Let the probabilities of these two events be
$$P(\text { pass })=7 / 8, \quad P(\text { fail })=1 / 8$$
One can see from these probabilities that the student is quite strong. If we were informed that the student had passed the exam, then we could say: 'Your message has not given me a lot of information. I already expected that the student would pass the exam.' According to formula (1.2.2), the information of this message is quantitatively equal to
$$I(\text { pass })=\log_{2}(8 / 7)=0.193 \text { bits. }$$
If we were informed that the student had failed, then we would say 'Really?' and would feel that we have improved our knowledge to a greater extent. The amount of information of such a message is equal to
$$I(\text { fail })=\log_{2} 8=3 \text { bits. }$$
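These two values can be verified directly (a small numerical check; `info_bits` is my own helper name):

```python
import math

# Information in bits is -log2 of the probability of the realized outcome.
def info_bits(p):
    return -math.log2(p)

I_pass = info_bits(7 / 8)   # information of the likely outcome
I_fail = info_bits(1 / 8)   # information of the surprising outcome

assert abs(I_pass - math.log2(8 / 7)) < 1e-12
assert I_fail == 3.0
```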

## 数学代写|信息论代写information theory代考|Definition of information and entropy in the absence of noise

$$H_{\xi}=-\sum_{\xi} P(\xi) \ln P(\xi)$$

## 数学代写|信息论代写information theory代考|Definition of entropy in the case of equiprobable outcomes

$$H=f(M),$$

$$I=H_{\mathrm{pr}}-H_{\mathrm{ps}} .$$

$$f\left(6^{n}\right)=n f(6)$$

## 数学代写|信息论代写information theory代考|Entropy and its properties in the case of non-equiprobable outcomes

1. 假设现在不同结果的概率不相等。如果像前面一样，结果的数量等于 $M$，那么我们可以考虑一个随机变量 $\xi$，它取 $M$ 个值之一。将相应结果的索引记为 $\xi$，则这些值就是 $1, \ldots, M$。这些值的概率 $P(\xi)$ 非负并满足归一化约束：$\sum_{\xi} P(\xi)=1$。
如果我们正式将等式 (1.1.8) 应用于这种情况，那么每个 $\xi$ 应该有自己的熵
$$H(\xi)=-\ln P(\xi) .$$
因此，我们将某个熵值赋予变量 $\xi$ 的每个实现。由于 $\xi$ 是随机变量，我们也可以把这个熵看作一个随机变量。 与第 $1.1$ 节一样，在 $\xi$ 的实现已知之后剩下的后验熵等于零。这就是为什么实现已知后我们获得的信息在数值上等于初始熵
$$I(\xi)=H(\xi)=-\ln P(\xi)$$
类似于熵 $H(\xi)$，信息 $I$ 取决于实际的实现（取决于 $\xi$ 的取值），即它是一个随机变量。从后一个公式可以看出，当给定实现的概率很小时，信息和熵都很大，反之亦然。这一观察与直觉的想法相当一致。
例 1.1。假设我们想知道某个学生是否通过了考试。设这两个事件的概率为
$$P(\text { pass })=7 / 8, \quad P(\text { fail })=1 / 8$$
从这些概率可以看出，这个学生的实力相当强。如果我们被告知学生通过了考试，那么我们可以说：“你的消息没有给我很多信息，我已经预料到学生会通过考试。”根据公式 (1.2.2)，这条消息的信息量等于
$$I(\text { pass })=\log_{2}(8 / 7)=0.193 \text { bits. }$$
如果我们被告知学生没有通过考试，那么我们会说“真的吗？”，并且会觉得我们在更大程度上增进了自己的知识。这样一条消息的信息量等于
$$I(\text { fail })=\log_{2} 8=3 \text { bits. }$$


## 数学代写|编码理论代写Coding theory代考|MTH4107


## 数学代写|编码理论代写Coding theory代考|Constructions of Codes with Prescribed Automorphisms

Huffman and Yorgov (see $[999,1928,1929]$) developed a method for constructing binary self-dual codes via an automorphism of odd prime order. Their method was extended by other authors to automorphisms of odd composite order and to automorphisms of order 2 $[272,281,616]$.

Huffman has also studied the properties of linear codes over $\mathbb{F}_{q}$ having an automorphism of prime order $p$ coprime with $q$ [1000]. Further, he has continued with Hermitian and additive self-dual codes over $\mathbb{F}_{4}$ $[1001,1006]$, and with self-dual codes over rings $[1004,1007]$.
Let $\mathcal{C}$ be a binary self-dual code of length $n$ with an automorphism $\sigma$ of prime order $p \geq 3$ with exactly $c$ independent $p$-cycles and $f=n-c p$ fixed points in its decomposition. We may assume that
$$\sigma=(1,2, \cdots, p)(p+1, p+2, \cdots, 2 p) \cdots((c-1) p+1,(c-1) p+2, \cdots, c p)$$
and say that $\sigma$ is of type $p-(c, f)$. We present the main theorems about the structure of such a code. This structure has been used by many authors in order to construct optimal self-dual codes with different parameters.
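The cycle structure of $\sigma$ can be sketched programmatically (my own illustration with 0-based coordinates and arbitrary small parameters $p=3$, $c=2$, $f=1$; not code from the references):

```python
# Build the permutation sigma of type p-(c, f) from (4.3) as a mapping on
# coordinates {0, ..., n-1} (0-based here) and check that its order is p.
p, c, f = 3, 2, 1                    # two 3-cycles and one fixed point
n = c * p + f

def sigma(i):
    if i >= c * p:                   # the f fixed points
        return i
    block = i // p                   # which p-cycle the coordinate lies in
    return block * p + (i + 1) % p   # cyclic shift inside that p-cycle

def power(k, i):
    """Apply sigma k times to coordinate i."""
    for _ in range(k):
        i = sigma(i)
    return i

assert all(power(p, i) == i for i in range(n))       # sigma^p = identity
assert any(power(1, i) != i for i in range(c * p))   # nontrivial on cycles
assert sigma(c * p) == c * p                         # fixed point stays fixed
```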

Theorem 4.4.20 ([999]) Let $\mathcal{C}$ be a binary $[n, n / 2]$ code with automorphism $\sigma$ from (4.3). Let $\Omega_{1}=\{1,2, \ldots, p\}, \ldots, \Omega_{c}=\{(c-1) p+1,(c-1) p+2, \ldots, c p\}$ denote the cycles of $\sigma$, and let $\Omega_{c+1}=\{c p+1\}, \ldots, \Omega_{c+f}=\{c p+f=n\}$ be the fixed points of $\sigma$. Define
\begin{aligned} F_{\sigma}(\mathcal{C}) &=\{\mathbf{v} \in \mathcal{C} \mid \sigma(\mathbf{v})=\mathbf{v}\}, \\ E_{\sigma}(\mathcal{C}) &=\left\{\mathbf{v} \in \mathcal{C} \mid \operatorname{wt}_{\mathrm{H}}\left(\mathbf{v}_{\mid \Omega_{i}}\right) \equiv 0 \pmod{2},\ i=1,2, \ldots, c+f\right\} \end{aligned}
where $\mathbf{v}_{\mid \Omega_{i}}$ is the restriction of $\mathbf{v}$ to $\Omega_{i}$. Then $\mathcal{C}=F_{\sigma}(\mathcal{C}) \oplus E_{\sigma}(\mathcal{C})$, $\operatorname{dim}\left(F_{\sigma}(\mathcal{C})\right)=\frac{c+f}{2}$, and $\operatorname{dim}\left(E_{\sigma}(\mathcal{C})\right)=\frac{c(p-1)}{2}$.

Theorem 4.4.21 ([1928]) Let $\mathcal{C}$ be a binary $[n, n / 2]$ code with automorphism $\sigma$ from (4.3).
Let $\pi: F_{\sigma}(\mathcal{C}) \rightarrow \mathbb{F}_{2}^{c+f}$ be the projection map where, for $\mathbf{v} \in F_{\sigma}(\mathcal{C})$, $(\pi(\mathbf{v}))_{i}=v_{j}$ for some $j \in \Omega_{i}$, $i=1,2, \ldots, c+f$. Let $\mathcal{E}$ (respectively $\mathcal{P}$) be the set of all even-weight vectors in $\mathbb{F}_{2}^{p}$ (respectively even-weight polynomials in $\mathbb{F}_{2}[x] /\left\langle x^{p}-1\right\rangle$). Define $\varphi^{\prime}: \mathcal{E} \rightarrow \mathcal{P}$ by $\varphi^{\prime}\left(v_{0} v_{1} \cdots v_{p-1}\right)=v_{0}+v_{1} x+\cdots+v_{p-1} x^{p-1}$. Let $E_{\sigma}(\mathcal{C})^{*}$ be $E_{\sigma}(\mathcal{C})$ punctured on all the fixed points of $\sigma$. Define $\varphi: E_{\sigma}(\mathcal{C})^{*} \rightarrow \mathcal{P}^{c}$ by $\varphi(\mathbf{v})=\left(\varphi^{\prime}\left(\mathbf{v}_{\mid \Omega_{1}}\right), \varphi^{\prime}\left(\mathbf{v}_{\mid \Omega_{2}}\right), \ldots, \varphi^{\prime}\left(\mathbf{v}_{\mid \Omega_{c}}\right)\right)$ for $\mathbf{v} \in E_{\sigma}(\mathcal{C})^{*} \subseteq \mathcal{E}^{c}$. Then $\mathcal{C}$ is self-dual if and only if the following two conditions hold:
(a) $\mathcal{C}_{\pi}=\pi\left(F_{\sigma}(\mathcal{C})\right)$ is a binary self-dual code of length $c+f$; and
(b) for every two vectors $\mathbf{u}, \mathbf{v} \in \mathcal{C}_{\varphi}=\varphi\left(E_{\sigma}(\mathcal{C})^{*}\right)$, we have $\sum_{i=1}^{c} u_{i}(x) v_{i}\left(x^{-1}\right)=0$, where $u_{i}(x)=\varphi^{\prime}\left(\mathbf{u}_{\mid \Omega_{i}}\right)$ and $v_{i}(x)=\varphi^{\prime}\left(\mathbf{v}_{\mid \Omega_{i}}\right)$ for $i=1,2, \ldots, c$.

## 数学代写|编码理论代写Coding theory代考|Enumeration and Classification

Remark 4.5.1 The main tool for classifying self-dual codes is the so-called mass formula, which makes it possible to check whether a classification is complete. The number of self-dual binary codes of even length $n$ is $N(n)=\prod_{i=1}^{n / 2-1}\left(2^{i}+1\right)$. If $\mathcal{C}$ has length $n$, then the number of codes equivalent to $\mathcal{C}$ is $n ! /|\operatorname{PAut}(\mathcal{C})|$. To classify binary self-dual codes of length $n$, it is necessary to find inequivalent self-dual codes $\mathcal{C}_{1}, \ldots, \mathcal{C}_{r}$ so that the following mass formula holds:
$$N(n)=\sum_{i=1}^{r} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{i}\right)\right|} .$$
There are such formulas for all families of self-dual and also of self-orthogonal codes. Detailed information is presented in [1008, 1555]. See also Proposition 7.5.1.

Theorem 4.5.2 We have the following mass formulas.
(a) For self-dual binary codes of even length $n$,
$$\sum_{j} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-1}\left(2^{i}+1\right)$$
(b) For doubly-even self-dual binary codes of length $n \equiv 0(\bmod 8)$,
$$\sum_{j} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-2}\left(2^{i}+1\right)$$

(c) For self-dual ternary codes of length $n \equiv 0(\bmod 4)$,
$$\sum_{j} \frac{2^{n} n !}{\left|\operatorname{MAut}\left(\mathcal{C}_{j}\right)\right|}=2 \prod_{i=1}^{n / 2-1}\left(3^{i}+1\right)$$
(d) For Hermitian self-dual codes over $\mathbb{F}_{4}$ of even length $n$,
$$\sum_{j} \frac{2 \cdot 3^{n} n !}{\left|\Gamma \operatorname{Aut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-1}\left(2^{2 i+1}+1\right)$$
In each case, the summation is over all $j$, where $\left\{\mathcal{C}_{j}\right\}$ is a complete set of representatives of inequivalent codes of the given type. The automorphism group $\Gamma \operatorname{Aut}\left(\mathcal{C}_{j}\right)$ is the set of all semi-linear monomial transformations from $\mathbb{F}_{4}^{n}$ to $\mathbb{F}_{4}^{n}$ that fix $\mathcal{C}_{j}$; see [1008, Section 1.7].
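The binary mass formula can be sanity-checked by brute force at the smallest nontrivial length $n=4$, where $N(4)=2^{1}+1=3$ (a verification sketch of my own, not from the references):

```python
import math
from itertools import combinations

# Count all [4, 2] self-dual binary codes by exhaustive search and compare
# with N(4) = prod_{i=1}^{1} (2^i + 1) = 3.
n = 4

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

vectors = [tuple((x >> i) & 1 for i in range(n)) for x in range(2 ** n)]

def span(gens):
    """Linear span over F_2 of the generator list."""
    codewords = {tuple([0] * n)}
    for g in gens:
        codewords |= {tuple((a + b) % 2 for a, b in zip(c, g)) for c in codewords}
    return frozenset(codewords)

self_dual = set()
for g1, g2 in combinations(vectors[1:], 2):      # pairs of nonzero vectors
    code = span([g1, g2])
    if len(code) == 4 and all(dot(u, v) == 0 for u in code for v in code):
        self_dual.add(code)                      # dim 2 and self-orthogonal

N4 = math.prod(2 ** i + 1 for i in range(1, n // 2))
assert len(self_dual) == N4 == 3
```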

## 数学代写|编码理论代写Coding theory代考|Designs Supported by Codes

The support of a nonzero vector $\mathbf{x}=x_{1} \cdots x_{n} \in \mathbb{F}_{q}^{n}$ is the set of indices of its nonzero coordinates: $\operatorname{supp}(\mathbf{x})=\left\{i \mid x_{i} \neq 0\right\}$.

Definition 5.2.1 A design $D$ is supported by a block code $\mathcal{C}$ of length $n$ if the points of $D$ are labeled by the $n$ coordinates of $\mathcal{C}$, and every block of $D$ is the support of some nonzero codeword of $\mathcal{C}$.

Remark 5.2.2 If $\mathcal{C}$ is a linear code over a finite field of order $q>2$, and $\mathbf{c}$ is a codeword of weight $w>0$, all $q-1$ nonzero scalar multiples of $\mathbf{c}$ have the same support. To avoid repeated blocks, we associate only one block with all scalar multiples of $\mathbf{c}$. Suppose that $D$ is a $t$-$(n, w, \lambda)$ design supported by a linear $q$-ary code $\mathcal{C}$. It follows that the number of blocks $b$ of $D$ is smaller than or equal to $A_{w} /(q-1)$, where $A_{w}$ is the number of codewords of weight $w$. If the support of every codeword of weight $w$ is a block of $D$, then we have $b=A_{w} /(q-1)$, and the parameter $\lambda$ can be computed using (5.2) and (5.3):
$$\lambda=\frac{A_{w}}{q-1} \cdot \frac{\binom{w}{t}}{\binom{n}{t}} .$$
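As a worked instance of this formula (my own check, using one standard generator matrix for the binary $[7,4,3]$ Hamming code, an assumed choice): its weight-3 codewords support the 2-(7, 3, 1) Fano plane, so $\lambda=1$.

```python
import math
from itertools import product

# Enumerate the [7,4,3] Hamming code and evaluate lambda for t = 2, w = 3.
G = [
    (1, 0, 0, 0, 0, 1, 1),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1, 1, 1),
]
code = set()
for coeffs in product((0, 1), repeat=4):
    word = tuple(sum(c * g[i] for c, g in zip(coeffs, G)) % 2 for i in range(7))
    code.add(word)

A3 = sum(1 for w in code if sum(w) == 3)           # number of weight-3 words
n, w, t, q = 7, 3, 2, 2
lam = A3 / (q - 1) * math.comb(w, t) / math.comb(n, t)
assert A3 == 7 and lam == 1.0

# Direct design check: every pair of points lies in exactly one block.
blocks = [frozenset(i for i, x in enumerate(c) if x) for c in code if sum(c) == 3]
for pair in map(set, product(range(7), range(7))):
    if len(pair) == 2:
        assert sum(pair <= b for b in blocks) == 1
```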
Theorem 5.2.3 If a code is invariant under a monomial group that acts $t$-transitively or $t$-homogeneously on the set of coordinates, then the supports of the codewords of any nonzero weight form a $t$-design.

Corollary 5.2.4 If $\mathcal{C}$ is a cyclic code of length $n$, the supports of all codewords of any nonzero weight $w$ form a 1-design.

## 数学代写|编码理论代写Coding theory代考|Constructions of Codes with Prescribed Automorphisms

$$\sigma=(1,2, \cdots, p)(p+1, p+2, \cdots, 2 p) \cdots((c-1) p+1,(c-1) p+2, \cdots, c p)$$

\begin{aligned} F_{\sigma}(\mathcal{C}) &=\{\mathbf{v} \in \mathcal{C} \mid \sigma(\mathbf{v})=\mathbf{v}\}, \\ E_{\sigma}(\mathcal{C}) &=\left\{\mathbf{v} \in \mathcal{C} \mid \operatorname{wt}_{\mathrm{H}}\left(\mathbf{v}_{\mid \Omega_{i}}\right) \equiv 0 \pmod{2},\ i=1,2, \ldots, c+f\right\} \end{aligned}

(b) 对于每两个向量 $\mathbf{u}, \mathbf{v} \in \mathcal{C}_{\varphi}=\varphi\left(E_{\sigma}(\mathcal{C})^{*}\right)$，我们有 $\sum_{i=1}^{c} u_{i}(x) v_{i}\left(x^{-1}\right)=0$，其中 $u_{i}(x)=\varphi^{\prime}\left(\mathbf{u}_{\mid \Omega_{i}}\right)$，$v_{i}(x)=\varphi^{\prime}\left(\mathbf{v}_{\mid \Omega_{i}}\right)$，$i=1,2, \ldots, c$。

## 数学代写|编码理论代写Coding theory代考|Enumeration and Classification

为了分类长度为 $n$ 的二元自对偶码，需要找到不等价的自对偶码 $\mathcal{C}_{1}, \ldots, \mathcal{C}_{r}$，使得质量公式
$$N(n)=\sum_{i=1}^{r} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{i}\right)\right|}$$
成立。对于所有自对偶码族以及自正交码族都有这样的公式。详细信息见 [1008, 1555]，另见 Proposition 7.5.1。

定理 4.5.2 我们有以下质量公式。

(a) 对于偶数长度 $n$ 的自对偶二元码，
$$\sum_{j} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-1}\left(2^{i}+1\right)$$

(b) 对于长度 $n \equiv 0 \pmod{8}$ 的双偶自对偶二元码，
$$\sum_{j} \frac{n !}{\left|\operatorname{PAut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-2}\left(2^{i}+1\right)$$

(c) 对于长度 $n \equiv 0 \pmod{4}$ 的自对偶三元码，
$$\sum_{j} \frac{2^{n} n !}{\left|\operatorname{MAut}\left(\mathcal{C}_{j}\right)\right|}=2 \prod_{i=1}^{n / 2-1}\left(3^{i}+1\right)$$

(d) 对于偶数长度 $n$ 的 $\mathbb{F}_{4}$ 上 Hermitian 自对偶码，
$$\sum_{j} \frac{2 \cdot 3^{n} n !}{\left|\Gamma \operatorname{Aut}\left(\mathcal{C}_{j}\right)\right|}=\prod_{i=1}^{n / 2-1}\left(2^{2 i+1}+1\right)$$

## 数学代写|编码理论代写Coding theory代考|Designs Supported by Codes

$$\lambda=\frac{A_{w}}{q-1} \cdot \frac{\binom{w}{t}}{\binom{n}{t}} .$$


## 数学代写|编码理论代写Coding theory代考|ELEC7604


## 数学代写|编码理论代写Coding theory代考|Perfect Codes

Perfect codes were considered in the very first scientific papers in coding theory. We have already seen two types of perfect codes in Sections 1.10 and 1.13. Hamming codes [895] have parameters
$$\left[n=\left(q^{m}-1\right) /(q-1), n-m, 3\right]_{q}$$
and exist for $m \geq 2$ and prime powers $q$. Golay codes [820] have parameters
$$[23,12,7]_{2} \text { and }[11,6,5]_{3} .$$
There are also some families of trivial perfect codes: codes containing one word, codes containing all codewords in the space, and $(n, 2, n)_{2}$ codes for odd $n$. If the order $q$ of the alphabet is a prime power, these are in fact the only sets of parameters for which (linear and unrestricted) perfect codes exist $[1805,1949]$.
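The counting identity behind these parameter lists is equality in the sphere-packing (Hamming) bound, which is easy to verify numerically (a sketch of my own; `is_perfect` checks only the counting identity, not the existence of a code):

```python
import math

# A perfect code attains the sphere-packing bound with equality: balls of
# radius e = (d-1)/2 around the q^k codewords tile the whole space F_q^n.
def ball_size(n, q, e):
    return sum(math.comb(n, i) * (q - 1) ** i for i in range(e + 1))

def is_perfect(n, k, d, q):
    return q ** k * ball_size(n, q, (d - 1) // 2) == q ** n

# Hamming codes, parameters (3.1), for a few (q, m) choices:
for q, m in [(2, 3), (2, 4), (3, 2), (4, 2)]:
    n = (q ** m - 1) // (q - 1)
    assert is_perfect(n, n - m, 3, q)

# The two Golay codes, parameters (3.2):
assert is_perfect(23, 12, 7, 2)
assert is_perfect(11, 6, 5, 3)
```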

Theorem 3.3.1 The nontrivial perfect linear codes over $\mathbb{F}_{q}$, where $q$ is a prime power, are precisely the Hamming codes with parameters (3.1) and the Golay codes with parameters (3.2). A nontrivial perfect unrestricted code (over $\mathbb{F}_{q}$, $q$ a prime power) that is not equivalent to a linear code has the same length, size, and minimum distance as a Hamming code (3.1).
Although the remarkable Theorem 3.3.1 gives us a rather solid understanding of perfect codes, there are still many open problems in this area, including the following (a code with different alphabet sizes for different coordinates is called mixed):

Research Problem 3.3.2 Solve the existence problem for perfect codes when the size of the alphabet is not a prime power.
Research Problem 3.3.3 Solve the existence problem for perfect mixed codes.
Research Problem 3.3.4 Classify perfect codes, especially for the parameters covered by Theorem 3.3.1.

Since Theorem 3.3.1 covers alphabet sizes that are prime powers, that is, exactly the sizes for which finite fields and linear codes exist, Research Problems 3.3.2 to 3.3.4 are essentially about unrestricted codes (although many codes studied for Research Problem 3.3.3 have clear algebraic structures and close connections to linear codes).

## 数学代写|编码理论代写Coding theory代考|MDS Codes

Maximum distance separable (MDS) codes are not only of theoretical interest; important families of codes, such as Reed-Solomon codes (Section 1.14), are of this type. An entire chapter is devoted to MDS codes in the book by MacWilliams and Sloane [1323, Chapter 11].

MDS codes are closely connected to many other structures in combinatorics and geometry. For example, an $[n, k, n-k+1]_{q}$ MDS code with dimension $k \geq 3$ corresponds to an $n$-arc in the projective geometry $\mathrm{PG}(k-1, q)$; see Chapter 14. Finite geometry is indeed a commonly used framework for studying MDS codes. In combinatorics, MDS codes correspond to certain orthogonal arrays.

Definition 3.3.18 An orthogonal array of size $N$, with $m$ constraints, $s$ levels, and strength $t$, denoted $\mathrm{OA}(N, m, s, t)$, is an $m \times N$ matrix with entries from $\mathbb{F}_{s}$, having the property that in every $t \times N$ submatrix, every $t \times 1$ column vector appears $\lambda=N / s^{t}$ (called the index) times.

Theorem 3.3.19 An $n \times q^{k}$ matrix whose columns are the codewords of a linear $[n, k, n-k+1]_{q}$ MDS code or an unrestricted $\left(n, q^{k}, n-k+1\right)_{q}$ MDS code is an $\mathrm{OA}\left(q^{k}, n, q, k\right)$, which has index $\lambda=1$.

Remark 3.3.20 As the codewords of an MDS code with dimension $k$ form an orthogonal array with strength $k$ and index 1 , such codes are systematic and any $k$ coordinates can be used for the message symbols.
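Theorem 3.3.19 can be illustrated with the trivial $[k+1, k, 2]_{q}$ single parity-check MDS code, here for $q=2$ and $k=3$ (my own sketch): its 8 codewords form an $\mathrm{OA}(8,4,2,3)$ of index $\lambda=1$.

```python
from itertools import combinations, product

# Build the binary [4, 3, 2] single parity-check code: append a parity bit
# that makes each codeword have even weight.
k, q = 3, 2
codewords = [msg + (sum(msg) % 2,) for msg in product(range(q), repeat=k)]

# Orthogonal-array property of strength t = k: in every choice of k of the
# n = k+1 columns, each k-tuple over F_2 appears exactly once (index 1).
n = k + 1
for cols in combinations(range(n), k):
    projections = [tuple(c[i] for i in cols) for c in codewords]
    assert sorted(projections) == sorted(product(range(q), repeat=k))
```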

In a paper [319] published by Bush in 1952, the framework of orthogonal arrays is used to construct objects that we now know as Reed-Solomon codes. In that study it is also shown that for linear codes over $\mathbb{F}_{q}$ with $k>q$, $n \leq k+1$ is a necessary condition for an $[n, k, n-k+1]_{q}$ MDS code to exist, and that there are $[k+1, k, 2]_{q}$ MDS codes. Such codes, and generally codes with parameters $[n, 1, n]_{q}$, $[n, n-1,2]_{q}$, and $[n, n, 1]_{q}$, are called trivial MDS codes.

For $k \leq q$, on the other hand, the following MDS Conjecture, related to a question posed by Segre [1638] in 1955, is still open.

Conjecture 3.3.21 (MDS) If $k \leq q$, then a linear $[n, k, n-k+1]_{q}$ MDS code exists exactly when $n \leq q+1$ unless $q=2^{h}$ and $k=3$ or $k=q-1$, in which case it exists exactly when $n \leq q+2$.

Remark 3.3.22 MDS codes are typically discussed in the linear case, but the parameters of the codes in Conjecture 3.3.21 are conjectured to also cover the parameters for which unrestricted MDS codes exist.

## 数学代写|编码理论代写Coding theory代考|Weight Enumerators

The Hamming weight enumerator is defined in Definition 1.15.1 in Chapter 1. Recall that
$$\operatorname{Hwe}(x, y)=\sum_{i=0}^{n} A_{i}(\mathcal{C}) x^{i} y^{n-i}$$
Definition 4.2.1 A linear code $\mathcal{C}$ is called formally self-dual if $\mathcal{C}$ and its dual code $\mathcal{C}^{\perp}$ have the same weight enumerator, $\operatorname{Hwe}_{\mathcal{C}}(x, y)=\operatorname{Hwe}_{\mathcal{C}^{\perp}}(x, y)$. A linear code is isodual if it is equivalent to its dual code.

Remark 4.2.2 Any isodual code is also formally self-dual, but there are formally self-dual codes that are neither isodual nor self-dual. The smallest length for which a formally self-dual code is not isodual is 14, and there are 28 such codes amongst 6 weight enumerators [867]. Any self-dual code is also isodual and formally self-dual.
Example 4.2.3 The $[6,3,3]$ binary code $\mathcal{C}$ with a generator matrix
$$\left[\begin{array}{ll} 100 & 111 \\ 010 & 110 \\ 001 & 101 \end{array}\right]$$
is isodual. Its weight enumerator is $\operatorname{Hwe}_{\mathcal{C}}(x, y)=y^{6}+4 x^{3} y^{3}+3 x^{4} y^{2}$, and its automorphism group has order 24. Obviously, this code is not self-dual as it contains codewords with odd weight.
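The claims in Example 4.2.3 are easy to verify by brute force. The sketch below (helper names are ours) enumerates the code from the generator matrix above, computes the dual code directly, and compares the two weight distributions:

```python
from itertools import product

def span(rows):
    """All F_2-linear combinations of the given generator rows."""
    n = len(rows[0])
    words = set()
    for coeffs in product((0, 1), repeat=len(rows)):
        w = tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                  for i in range(n))
        words.add(w)
    return words

G = [(1, 0, 0, 1, 1, 1), (0, 1, 0, 1, 1, 0), (0, 0, 1, 1, 0, 1)]
C = span(G)

# Brute-force dual: every vector orthogonal to all codewords of C.
dual = {v for v in product((0, 1), repeat=6)
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

def weight_dist(code):
    dist = [0] * 7
    for w in code:
        dist[sum(w)] += 1
    return dist

print(weight_dist(C))     # [1, 0, 0, 4, 3, 0, 0]
print(weight_dist(dual))  # [1, 0, 0, 4, 3, 0, 0]
```

Both distributions read $A_{0}=1$, $A_{3}=4$, $A_{4}=3$, matching $\operatorname{Hwe}_{\mathcal{C}}(x, y)=y^{6}+4 x^{3} y^{3}+3 x^{4} y^{2}$ and confirming that the code is formally self-dual; the codewords of weight 3 show it is not self-dual.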

## 数学代写|编码理论代写Coding theory代考|Perfect Codes

Hamming codes with parameters $[n=(q^{m}-1)/(q-1), n-m, 3]_{q}$ exist for all $m \geq 2$ and prime powers $q$. The Golay codes [820] have parameters $[23,12,7]_{2}$ and $[11,6,5]_{3}$. There are also some families of trivial perfect codes: codes containing a single codeword, codes containing all words of the space, and $(n, 2, n)_{2}$ codes with $n$ odd. If the alphabet size $q$ is a prime power, these are in fact the only parameter sets for which (linear and unrestricted) perfect codes exist [1805, 1949].
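The perfect-code parameter sets above can be checked against the Sphere Packing Bound: for an $[n, k, 2t+1]_{q}$ code, perfection means $q^{k} \cdot V_{q}(n, t) = q^{n}$, where $V_{q}(n, t)$ is the volume of a Hamming ball of radius $t$. A quick numeric check (using only these standard facts):

```python
from math import comb

def ball(n, t, q):
    """Volume of a radius-t Hamming ball in F_q^n."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

# Hamming [7, 4, 3]_2: error-correcting radius t = 1
assert 2 ** 4 * ball(7, 1, 2) == 2 ** 7
# Binary Golay [23, 12, 7]_2: t = 3
assert 2 ** 12 * ball(23, 3, 2) == 2 ** 23
# Ternary Golay [11, 6, 5]_3: t = 2
assert 3 ** 6 * ball(11, 2, 3) == 3 ** 11
print("all three parameter sets meet the Sphere Packing Bound with equality")
```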


## 有限元方法代写

statistics-lab作为专业的留学生服务机构，多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务，包括但不限于Essay代写，Assignment代写，Dissertation代写，Report代写，小组作业代写，Proposal代写，Paper代写，Presentation代写，计算机作业代写，论文修改和润色，网课代做，exam代考等等。写作范围涵盖高中，本科，研究生等海外留学全阶段，辐射金融，经济学，会计学，审计学，管理学等全球99%专业科目。写作团队既有专业英语母语作者，也有海外名校硕博留学生，每位写作老师都拥有过硬的语言能力，专业的学科背景和学术写作经验。我们承诺100%原创，100%专业，100%准时，100%满意。

## MATLAB代写

MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中，其中问题和解决方案以熟悉的数学符号表示。典型用途包括：数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发，包括图形用户界面构建MATLAB 是一个交互式系统，其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题，尤其是那些具有矩阵和向量公式的问题，而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问，这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展，得到了许多用户的投入。在大学环境中，它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域，MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要，工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数（M 文件）的综合集合，可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。

## 数学代写|编码理论代写Coding theory代考|MTH3018


## 数学代写|编码理论代写Coding theory代考|Equivalence and Isomorphism

The concepts of equivalence and isomorphism of codes are briefly discussed in Section 1.8. Generally, the term symmetry covers both of those concepts, especially when considering maps from a code onto itself, that is, automorphisms. Namely, such maps lead to groups under composition, and groups are essentially about symmetries. The group formed by all automorphisms of a code is, whenever the type of automorphisms is understood, simply called the automorphism group of the code. A subgroup of the automorphism group is called a group of automorphisms.

Symmetries play a central role when constructing as well as classifying codes: several types of constructions are essentially about prescribing symmetries, and one core part of classification is about dealing with maps and symmetries.

On a high level of abstraction, the same questions are asked for linear and unrestricted codes and analogous techniques are used. On a detailed level, however, there are significant differences between those two types of codes.

Consider codes of length $n$ over $\mathbb{F}_{q}$. We have seen in Definition 1.8.8 that equivalence of unrestricted codes is about permuting coordinates and the elements of the alphabet, individually within each coordinate. All such maps form a group that is isomorphic to the wreath product $\mathrm{S}_{q} \wr \mathrm{S}_{n}$. For linear codes, on the other hand, the concepts of permutation equivalence, monomial equivalence, and equivalence lead to maps that form groups isomorphic to $\mathrm{S}_{n}$, $\mathbb{F}_{q}^{*} \wr \mathrm{S}_{n}$, and the semidirect product $\left(\mathbb{F}_{q}^{*} \wr \mathrm{S}_{n}\right) \rtimes_{\theta} \operatorname{Aut}\left(\mathbb{F}_{q}\right)$, respectively, where $\mathbb{F}_{q}^{*}$ is the multiplicative group of $\mathbb{F}_{q}$ and $\theta: \operatorname{Aut}\left(\mathbb{F}_{q}\right) \rightarrow \operatorname{Aut}\left(\mathbb{F}_{q}^{*} \wr \mathrm{S}_{n}\right)$ is a group homomorphism.
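A back-of-envelope check of these group sizes, assuming the wreath-product descriptions above: the unrestricted equivalence group $\mathrm{S}_{q} \wr \mathrm{S}_{n}$ has order $(q!)^{n} \cdot n!$, while for linear codes permutation equivalence gives $n!$ maps and monomial equivalence gives $(q-1)^{n} \cdot n!$ maps. The helper names below are ours:

```python
from math import factorial

def wreath_order(q, n):
    """Order of S_q wr S_n: (q!)^n * n!."""
    return factorial(q) ** n * factorial(n)

def monomial_order(q, n):
    """Order of the monomial group F_q^* wr S_n: (q-1)^n * n!."""
    return (q - 1) ** n * factorial(n)

print(wreath_order(3, 4))    # (3!)^4 * 4! = 1296 * 24 = 31104
print(monomial_order(3, 4))  # 2^4 * 24 = 384
```

Already at these tiny parameters the unrestricted group is almost two orders of magnitude larger than the monomial group, which is one reason equivalence computations for unrestricted codes are harder in practice.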

## 数学代写|编码理论代写Coding theory代考|Prescribing Symmetries

A code of size $M$ is a subset of $M$ vectors from the $n$-dimensional vector space over $\mathbb{F}_{q}$ which fulfills some requirements depending on the type of code. The number of ways to choose $M$ arbitrary vectors from such a space is $\binom{q^{n}}{M}$, which becomes astronomically large already for rather small parameters. (This is obviously the total number of $(n, M)_{q}$ codes.) Although no general conclusion regarding the hardness of solving construction and classification problems can be drawn from this number, the number does give a clue that the limit of what is feasible might be reached quite early. Indeed, this is what happens, but perhaps not as early as one would think.

Example 3.2.2 In some special cases (in particular, for perfect codes) quite large unrestricted codes have been classified, such as the $(23,4096,7)_{2}$ code (the binary Golay code is unique [1732]; see also [525]) and the $(15,2048,3)_{2}$ codes (with the parameters of a Hamming code; there are 5983 such codes [1472]).

But what can be done if we go beyond parameters for which the size of an optimal code can be determined and the optimal codes can be classified? Analytical upper bounds and constructive lower bounds on the size of codes can still be used. One way to speed up computer-aided constructive techniques, some of which are discussed in Chapter 23, is to restrict the search by imposing a structure on the codes. This is a double-edged sword: the search space is reduced, but good codes might not have that particular structure. Hence some experience is of great help in tuning the search. A very common approach is that of prescribing symmetries (automorphisms).

Remark 3.2.3 In the discussion of groups in the context of automorphism groups of codes, we are not only interested in the abstract group but in the group and its action. This is implicitly understood in the sequel when talking about one particular group or all groups of certain orders. For example, “prescribing a group” means “prescribing a group and its action” and “considering all groups” means “considering all groups and all possible actions of those groups”.

By prescribing a group $G$, the $n$-dimensional vector space is partitioned into orbits of vectors. The construction problem then becomes a problem of finding a set of those orbits rather than finding a set of individual vectors. It must further be checked that the orbits themselves are feasible; an orbit whose codewords do not fulfill the minimum distance criterion can be discarded immediately.
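The orbit partition is easy to illustrate. The toy sketch below (our own example) prescribes the cyclic shift group on $\mathbb{F}_{2}^{6}$ and counts the resulting orbits; a search over orbits then handles far fewer objects than a search over individual vectors:

```python
from itertools import product

# Prescribe G = cyclic group generated by the coordinate shift
# (0 1 2 3 4 5) acting on F_2^6; partition the space into orbits.
n = 6

def orbit(v):
    """The G-orbit of v as a frozenset of cyclic shifts."""
    return frozenset(tuple(v[(i - s) % n] for i in range(n))
                     for s in range(n))

orbits = {orbit(v) for v in product((0, 1), repeat=n)}
print(len(orbits))  # 14 orbits instead of 2^6 = 64 vectors
```

The 14 orbits are exactly the binary necklaces of length 6; a code with the prescribed symmetry is a union of some of these orbits, so the search space shrinks from subsets of 64 vectors to subsets of 14 orbits.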

Remark 3.2.4 An $[n, k]_{q}$ linear code can be viewed as an unrestricted code which contains the all-zero codeword and has a particular group of automorphisms $G$ of order $q^{k}$, which only permutes elements of the alphabet, individually within each coordinate.

## 数学代写|编码理论代写Coding theory代考|Some Central Classes of Codes

By Definition 1.9.1, the maximum sizes of error-correcting codes with length $n$ and minimum distance $d$ are given by the functions $A_{q}(n, d)$ and $B_{q}(n, d)$ for unrestricted and linear codes, respectively. Most general bounds on these functions, such as those in Section 1.9, are upper bounds and concern the nonexistence of codes. Lower bounds, on the other hand, are typically obtained by constructing explicit codes. Especially for small parameters, many best known codes have been obtained on a case-by-case basis. One possible approach for finding such codes is that of prescribing symmetries, as discussed in Section 3.2.1, and carrying out a computer search; see Chapter 23.

In some rare situations, there exist codes that attain some general upper bounds. For such parameters, the problem of finding the size of an optimal code is then settled. When this occurs and the upper bound is the Sphere Packing Bound, we get perfect codes (Definition 1.9.8), and when the upper bound is the Singleton Bound, we get maximum distance separable (MDS) codes (Definition 1.9.12). In this section we will take a glance at these two types of codes as well as general binary linear and unrestricted codes.


## 数学代写|编码理论代写Coding theory代考|ELEC5507


## 数学代写|编码理论代写Coding theory代考|Punctured Generalized Reed-Muller Codes

Binary Reed-Muller codes were introduced in Section 1.11. It is known that these codes are equivalent to the extended codes of certain cyclic codes. In other words, after puncturing a binary Reed-Muller code at a proper coordinate, the resulting code is permutation equivalent to a cyclic code. The purpose of this section is to introduce a family of cyclic codes of length $n=q^{m}-1$ over $\mathbb{F}_{q}$ whose extended codes are the generalized Reed-Muller codes over $\mathbb{F}_{q}$.

Let $q$ be a prime power as before. For any integer $j=\sum_{i=0}^{m-1} j_{i} q^{i}$, where $0 \leq j_{i} \leq q-1$ for all $0 \leq i \leq m-1$ and $m$ is a positive integer, we define
$$\omega_{q}(j)=\sum_{i=0}^{m-1} j_{i}$$
where the sum is taken over the ring of integers; $\omega_{q}(j)$ is called the $q$-weight of $j$.
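The $q$-weight is simply the digit sum of $j$ in base $q$. A minimal sketch (the function name is ours):

```python
def q_weight(j, q):
    """q-weight of j: the sum of the base-q digits of j (over the integers)."""
    s = 0
    while j:
        s += j % q
        j //= q
    return s

print(q_weight(5, 2))   # 5 = 101_2, so the 2-weight is 2
print(q_weight(26, 3))  # 26 = 222_3, so the 3-weight is 6
```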
Let $\ell$ be a positive integer with $1 \leq \ell<(q-1) m$. The $\ell^{\text {th }}$ order punctured generalized Reed-Muller code $\mathcal{R} \mathcal{M}_{q}(\ell, m)^{*}$ over $\mathbb{F}_{q}$ is the cyclic code of length $n=q^{m}-1$ with generator polynomial
$$g(x)=\prod_{\substack{1 \leq j \leq n-1 \\ \omega_{q}(j)<(q-1) m-\ell}}\left(x-\alpha^{j}\right),$$
where $\alpha$ is a generator of $\mathbb{F}_{q^{m}}^{*}$. Since $\omega_{q}(j)$ is a constant function on each $q$-cyclotomic coset modulo $n=q^{m}-1$, $g(x)$ is a polynomial over $\mathbb{F}_{q}$.

The parameters of the punctured generalized Reed-Muller code $\mathcal{R} \mathcal{M}_{q}(\ell, m)^{*}$ are known and summarized in the next theorem [71, Section 5.5].

Theorem 2.8.1 For any $\ell$ with $0 \leq \ell<(q-1) m$, $\mathcal{R} \mathcal{M}_{q}(\ell, m)^{*}$ is a cyclic code over $\mathbb{F}_{q}$ with length $n=q^{m}-1$, dimension
$$\kappa=\sum_{i=0}^{\ell} \sum_{j=0}^{m}(-1)^{j}\binom{m}{j}\binom{i-j q+m-1}{i-j q}$$
and minimum weight $d=\left(q-\ell_{0}\right) q^{m-\ell_{1}-1}-1$, where $\ell=\ell_{1}(q-1)+\ell_{0}$ and $0 \leq \ell_{0}<q-1$.
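The dimension formula of Theorem 2.8.1 can be sanity-checked numerically. For $q=2$ the dimension of the punctured Reed-Muller code $\mathcal{R}\mathcal{M}(\ell, m)^{*}$ is the classical $\sum_{i=0}^{\ell}\binom{m}{i}$, and the alternating double sum should reduce to it. A sketch (the function name is ours; binomials with negative lower index are taken to vanish):

```python
from math import comb

def grm_dim(q, m, ell):
    """Dimension formula of Theorem 2.8.1 for RM_q(ell, m)*."""
    total = 0
    for i in range(ell + 1):
        for j in range(m + 1):
            if i - j * q < 0:
                continue  # C(i-jq+m-1, i-jq) = 0 when i-jq < 0
            total += (-1) ** j * comb(m, j) * comb(i - j * q + m - 1, i - j * q)
    return total

# For q = 2 the formula must match the classical Reed-Muller dimension.
for m in range(2, 7):
    for ell in range(1, m):
        assert grm_dim(2, m, ell) == sum(comb(m, i) for i in range(ell + 1))
print("q = 2 dimensions match the classical Reed-Muller formula")
```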

## 数学代写|编码理论代写Coding theory代考|Another Generalization of the Punctured Binary Reed-Muller Codes

The punctured generalized Reed-Muller codes are a generalization of the classical punctured binary Reed-Muller codes, and were introduced in the previous section. A new generalization of the classical punctured binary Reed-Muller codes was given recently in [561]. The task of this section is to introduce the newly generalized cyclic codes.

Let $n=q^{m}-1$. For any integer $a$ with $0 \leq a \leq n-1$, we have the following $q$-adic expansion
$$a=\sum_{j=0}^{m-1} a_{j} q^{j}$$
where $0 \leq a_{j} \leq q-1$. The Hamming weight of $a$, denoted by $\mathrm{wt}_{\mathrm{H}}(a)$, is the number of nonzero coordinates in the vector $\left(a_{0}, a_{1}, \ldots, a_{m-1}\right)$.
Let $\alpha$ be a generator of $\mathbb{F}_{q^{m}}^{*}$. For any $1 \leq h \leq m$, we define a polynomial
$$g_{(q, m, h)}(x)=\prod_{\substack{1 \leq a \leq n-1 \\ 1 \leq \mathrm{wt}_{\mathrm{H}}(a) \leq h}}\left(x-\alpha^{a}\right).$$
Since $\mathrm{wt}_{\mathrm{H}}(a)$ is a constant function on each $q$-cyclotomic coset modulo $n$, $g_{(q, m, h)}(x)$ is a polynomial over $\mathbb{F}_{q}$. By definition, $g_{(q, m, h)}(x)$ is a divisor of $x^{n}-1$.

Let $\mathcal{S}(q, m, h)$ denote the cyclic code over $\mathbb{F}_{q}$ with length $n$ and generator polynomial $g_{(q, m, h)}(x)$. By definition, $g_{(q, m, m)}(x)=\left(x^{n}-1\right) /(x-1)$. Therefore, the code $\mathcal{S}(q, m, m)$ is trivial, as it has parameters $[n, 1, n]$ and is spanned by the all-1 vector. Below we consider the code $\mathcal{S}(q, m, h)$ for $1 \leq h \leq m-1$ only.

Theorem 2.9.1 Let $m \geq 2$ and $1 \leq h \leq m-1$. Then $\mathcal{S}(q, m, h)$ has parameters $\left[q^{m}-1, \kappa, d\right]$, where
$$\kappa=q^{m}-\sum_{i=0}^{h}\binom{m}{i}(q-1)^{i}$$
and
$$\frac{q^{h+1}-1}{q-1} \leq d \leq 2 q^{h}-1$$
When $q=2$, the code $\mathcal{S}(q, m, h)$ clearly becomes the classical punctured binary Reed-Muller code $\mathcal{R} \mathcal{M}(m-1-h, m)^{*}$. Hence, $\mathcal{S}(q, m, h)$ is indeed a generalization of the original punctured binary Reed-Muller codes. In addition, when $q=2$, the lower bound and the upper bound in (2.3) become identical. It is conjectured that the lower bound on $d$ is the actual minimum distance.
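The dimension statement of Theorem 2.9.1 follows from counting roots of the generator polynomial, and this is easy to verify numerically: $\deg g_{(q, m, h)}$ equals the number of $a \in \{1, \ldots, n-1\}$ with $1 \leq \mathrm{wt}_{\mathrm{H}}(a) \leq h$. A sketch (helper name is ours):

```python
from math import comb

def wt_H(a, q, m):
    """Hamming weight of the q-adic digit vector (a_0, ..., a_{m-1}) of a."""
    w = 0
    for _ in range(m):
        w += (a % q) != 0
        a //= q
    return w

q, m, h = 3, 4, 2
n = q ** m - 1
# Degree of the generator polynomial = number of roots alpha^a it collects.
deg_g = sum(1 for a in range(1, n) if 1 <= wt_H(a, q, m) <= h)
kappa = n - deg_g
formula = q ** m - sum(comb(m, i) * (q - 1) ** i for i in range(h + 1))
print(kappa, formula)  # 48 48
```

The two values agree because $\sum_{i=0}^{h}\binom{m}{i}(q-1)^{i}$ counts all $a$ with $\mathrm{wt}_{\mathrm{H}}(a) \leq h$, including $a=0$, which exactly compensates for the difference between $n=q^{m}-1$ and $q^{m}$.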

## 数学代写|编码理论代写Coding theory代考|Reversible Cyclic Codes

Definition 2.10.1 A linear code $\mathcal{C}$ is reversible ${ }^{1}$ if $\left(c_{0}, c_{1}, \ldots, c_{n-1}\right) \in \mathcal{C}$ implies that $\left(c_{n-1}, c_{n-2}, \ldots, c_{0}\right) \in \mathcal{C}$.

Reversible cyclic codes were considered in $[1346,1347]$. A cryptographic application of reversible cyclic codes was proposed in [353]. A well rounded treatment of reversible cyclic codes was given in [1236]. The objective of this section is to deliver a basic introduction to reversible cyclic codes.

Definition 2.10.2 A polynomial $f(x)$ over $\mathbb{F}_{q}$ is called self-reciprocal if it equals its reciprocal polynomial $f^{*}(x)$.

The conclusions of the following theorem are known in the literature [1323, page 206] and are easy to prove.

Theorem 2.10.3 Let $\mathcal{C}$ be a cyclic code of length $n$ over $\mathbb{F}_{q}$ with generator polynomial $g(x)$. Then the following statements are equivalent.
(a) $\mathcal{C}$ is reversible.
(b) $g(x)$ is self-reciprocal.
(c) $\beta^{-1}$ is a root of $g(x)$ for every root $\beta$ of $g(x)$ over the splitting field of $g(x)$.
Furthermore, if $-1$ is a power of $q$ modulo $n$, then every cyclic code of length $n$ over $\mathbb{F}_{q}$ is reversible.

Now we give an exact count of reversible cyclic codes of length $n=q^{m}-1$ for odd primes $m$. Recall the $q$-cyclotomic cosets $C_{a}$ modulo $n$ given in Definition 1.12.7. It is straightforward that $-a=n-a \in C_{a}$ if and only if $a\left(1+q^{j}\right) \equiv 0(\bmod n)$ for some integer $j$. The following two lemmas are straightforward and hold whenever $\operatorname{gcd}(n, q)=1$.

Lemma 2.10.4 The irreducible polynomial $M_{\alpha^{a}}(x)$ is self-reciprocal if and only if $n-a \in C_{a}$.

Lemma 2.10.5 The least common multiple $\operatorname{lcm}\left(M_{\alpha^{a}}(x), M_{\alpha^{n-a}}(x)\right)$ is self-reciprocal for every $a \in \mathbb{Z}_{n}$.
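The condition $n-a \in C_{a}$ of Lemma 2.10.4 is straightforward to test once the $q$-cyclotomic cosets are computed. A sketch for $q=2$, $n=15$ (our own worked example):

```python
# Compute the 2-cyclotomic cosets modulo 15 and pick out those with
# n - a in C_a, i.e., those whose minimal polynomial is self-reciprocal.
q, n = 2, 15
seen, cosets = set(), []
for a in range(n):
    if a in seen:
        continue
    coset, x = set(), a
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    seen |= coset
    cosets.append(sorted(coset))

self_recip = [C for C in cosets if all((n - a) % n in C for a in C)]
print(cosets)      # [[0], [1, 2, 4, 8], [3, 6, 9, 12], [5, 10], [7, 11, 13, 14]]
print(self_recip)  # [[0], [3, 6, 9, 12], [5, 10]]
```

So for $n=15$, exactly the cosets $C_{0}$, $C_{3}$, and $C_{5}$ give self-reciprocal irreducible factors of $x^{15}-1$, while $C_{1}$ and $C_{7}$ pair up with each other as in Lemma 2.10.5.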

