
## Code Parameters

Channel codes are characterised by so-called code parameters. The most important code parameters of a general $(n, k)$ block code that are introduced in the following are the code rate and the minimum Hamming distance (Bossert, 1999; Lin and Costello, 2004; Ling and Xing, 2004). With the help of these code parameters, the efficiency of the encoding process and the error detection and error correction capabilities can be evaluated for a given $(n, k)$ block code.

Code Rate
Under the assumption that each information symbol $u_{i}$ of the $(n, k)$ block code can assume $q$ values, the number of possible information words and code words is given by
$$M=q^{k} .$$
Since the code word length $n$ is larger than the information word length $k$, the rate at which information is transmitted across the channel is reduced by the so-called code rate
$$R=\frac{\log_{q}(M)}{n}=\frac{k}{n}.$$
For the simple binary triple repetition code with $k=1$ and $n=3$, the code rate is $R=\frac{k}{n}=\frac{1}{3} \approx 0.3333$.

Weight and Hamming Distance
Each code word $\mathbf{b}=\left(b_{0}, b_{1}, \ldots, b_{n-1}\right)$ can be assigned the weight $\operatorname{wt}(\mathbf{b})$, which is defined as the number of non-zero components $b_{i} \neq 0$ (Bossert, 1999), i.e.
$$\operatorname{wt}(\mathbf{b})=\left|\left\{i: b_{i} \neq 0,\ 0 \leq i<n\right\}\right|.$$
Accordingly, the distance between two code words $\mathbf{b}=\left(b_{0}, b_{1}, \ldots, b_{n-1}\right)$ and $\mathbf{b}^{\prime}=\left(b_{0}^{\prime}, b_{1}^{\prime}, \ldots, b_{n-1}^{\prime}\right)$ is given by the so-called Hamming distance (Bossert, 1999)
$$\operatorname{dist}\left(\mathbf{b}, \mathbf{b}^{\prime}\right)=\left|\left\{i: b_{i} \neq b_{i}^{\prime},\ 0 \leq i<n\right\}\right|.$$
The Hamming distance $\operatorname{dist}\left(\mathbf{b}, \mathbf{b}^{\prime}\right)$ provides the number of different components of $\mathbf{b}$ and $\mathbf{b}^{\prime}$ and thus measures how close the code words $\mathbf{b}$ and $\mathbf{b}^{\prime}$ are to each other. For a code $\mathbb{B}$ consisting of $M$ code words $\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{M}$, the minimum Hamming distance is given by
$$d=\min_{\mathbf{b} \neq \mathbf{b}^{\prime}} \operatorname{dist}\left(\mathbf{b}, \mathbf{b}^{\prime}\right).$$
We will denote the $(n, k)$ block code $\mathbb{B}=\left\{\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{M}\right\}$ with $M=q^{k}$ $q$-nary code words of length $n$ and minimum Hamming distance $d$ by $\mathbb{B}(n, k, d)$. The minimum weight of the block code $\mathbb{B}$ is defined as $\min_{\mathbf{b} \neq \mathbf{0}} \operatorname{wt}(\mathbf{b})$. The code parameters of $\mathbb{B}(n, k, d)$ are summarised in Figure 2.4.
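As a quick numerical check of these definitions, the following Python sketch computes the parameters of the binary triple repetition code $\mathbb{B}(3, 1, 3)$. The helper names `wt` and `dist` simply mirror the text's notation; they are our own, not from any library.

```python
from itertools import combinations

def wt(b):
    """Hamming weight: number of non-zero components of b."""
    return sum(1 for x in b if x != 0)

def dist(b, b_prime):
    """Hamming distance: number of components in which b and b' differ."""
    return sum(1 for x, y in zip(b, b_prime) if x != y)

# Binary triple repetition code B(3, 1, 3): q = 2, k = 1, n = 3.
q, k, n = 2, 1, 3
code = [(0, 0, 0), (1, 1, 1)]

M = q ** k                                              # number of code words
R = k / n                                               # code rate
d = min(dist(b, bp) for b, bp in combinations(code, 2)) # minimum Hamming distance
w_min = min(wt(b) for b in code if wt(b) > 0)           # minimum weight

print(M, R, d, w_min)   # → 2 0.3333333333333333 3 3
```

For this code the minimum distance equals the minimum weight, which is no accident: the code is linear, a property discussed later in the text.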

## Maximum Likelihood Decoding

Channel codes are used in order to decrease the probability of incorrectly received code words or symbols. In this section we will derive a widely used decoding strategy. To this end, we will consider a decoding strategy to be optimal if the corresponding word error probability
$$p_{\mathrm{err}}=\operatorname{Pr}\{\hat{\mathbf{u}} \neq \mathbf{u}\}=\operatorname{Pr}\{\hat{\mathbf{b}} \neq \mathbf{b}\}$$
is minimal (Bossert, 1999). The word error probability has to be distinguished from the symbol error probability
$$p_{\mathrm{sym}}=\frac{1}{k} \sum_{i=0}^{k-1} \operatorname{Pr}\left\{\hat{u}_{i} \neq u_{i}\right\}$$

which denotes the probability of an incorrectly decoded information symbol $u_{i}$. In general, the symbol error probability is harder to derive analytically than the word error probability. However, it can be bounded by the following inequality (Bossert, 1999)
$$\frac{1}{k} p_{\mathrm{err}} \leq p_{\mathrm{sym}} \leq p_{\mathrm{err}} .$$
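This bound can be checked on a small hand-constructed example (the construction is ours, not from the text): take $k=2$ information bits, protect each with the binary triple repetition code, and decode each bit by majority vote over a binary symmetric channel with an assumed bit error probability $\varepsilon = 0.1$. Both error probabilities are then available in closed form:

```python
eps = 0.1        # assumed channel bit error probability
k = 2            # two independently protected information bits

# Majority decoding of one triple-repetition bit fails iff the channel
# flips 2 or 3 of the 3 transmitted copies.
p_bit = 3 * eps**2 * (1 - eps) + eps**3

p_sym = p_bit                    # symbol error probability per information bit
p_err = 1 - (1 - p_bit)**k       # word error: at least one decoded bit wrong

# The bound p_err / k <= p_sym <= p_err from the text holds.
assert p_err / k <= p_sym <= p_err
print(round(p_sym, 6), round(p_err, 6))   # → 0.028 0.055216
```

For $k=1$ the two probabilities coincide and both inequalities become equalities, which is why a multi-symbol word is needed to see the bound do any work.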
In the following, a $q$-nary channel code $\mathbb{B} \subseteq \mathbb{F}_{q}^{n}$ with $M$ code words $\mathbf{b}_{1}, \mathbf{b}_{2}, \ldots, \mathbf{b}_{M}$ in the code space $\mathbb{F}_{q}^{n}$ is considered. Let $\mathbf{b}_{j}$ be the transmitted code word. Owing to the noisy channel, the received word $\mathbf{r}$ may differ from the transmitted code word $\mathbf{b}_{j}$. The task of the decoder in Figure 2.6 is to decode the transmitted code word based on the sole knowledge of $\mathbf{r}$ with minimal word error probability $p_{\mathrm{err}}$.

This decoding step can be written according to the decoding rule $\mathbf{r} \mapsto \hat{\mathbf{b}}=\hat{\mathbf{b}}(\mathbf{r})$. For hard-decision decoding the received word $\mathbf{r}$ is an element of the discrete code space $\mathbb{F}_{q}^{n}$. To each code word $\mathbf{b}_{j}$ we assign a corresponding subset $\mathbb{D}_{j}$ of the code space $\mathbb{F}_{q}^{n}$, the so-called decision region. These non-overlapping decision regions cover the whole code space $\mathbb{F}_{q}^{n}$, i.e. $\bigcup_{j=1}^{M} \mathbb{D}_{j}=\mathbb{F}_{q}^{n}$ and $\mathbb{D}_{i} \cap \mathbb{D}_{j}=\emptyset$ for $i \neq j$, as illustrated in Figure 2.7. If the received word $\mathbf{r}$ lies within the decision region $\mathbb{D}_{i}$, the decoder decides in favour of the code word $\mathbf{b}_{i}$. That is, decoding the code word $\mathbf{b}_{i}$ according to the decision rule $\hat{\mathbf{b}}(\mathbf{r})=\mathbf{b}_{i}$ is equivalent to the event $\mathbf{r} \in \mathbb{D}_{i}$. The decoder is designed by choosing the decision regions $\mathbb{D}_{i}$; for an optimal decoder they are chosen such that the word error probability $p_{\mathrm{err}}$ is minimal.

The probability of the event that the code word $\mathbf{b}=\mathbf{b}_{j}$ is transmitted and the code word $\hat{\mathbf{b}}(\mathbf{r})=\mathbf{b}_{i}$ is decoded is given by
$$\operatorname{Pr}\left\{\left(\hat{\mathbf{b}}(\mathbf{r})=\mathbf{b}_{i}\right) \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\}=\operatorname{Pr}\left\{\left(\mathbf{r} \in \mathbb{D}_{i}\right) \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\}.$$
We obtain the word error probability $p_{\mathrm{err}}$ by averaging over all possible events for which the transmitted code word $\mathbf{b}=\mathbf{b}_{j}$ is decoded into a different code word $\hat{\mathbf{b}}(\mathbf{r})=\mathbf{b}_{i}$ with $i \neq j$. This leads to (Neubauer, 2006b)
$$\begin{aligned} p_{\mathrm{err}} &=\operatorname{Pr}\{\hat{\mathbf{b}}(\mathbf{r}) \neq \mathbf{b}\} \\ &=\sum_{i=1}^{M} \sum_{j \neq i} \operatorname{Pr}\left\{\left(\hat{\mathbf{b}}(\mathbf{r})=\mathbf{b}_{i}\right) \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\} \\ &=\sum_{i=1}^{M} \sum_{j \neq i} \operatorname{Pr}\left\{\left(\mathbf{r} \in \mathbb{D}_{i}\right) \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\} \\ &=\sum_{i=1}^{M} \sum_{j \neq i} \sum_{\mathbf{r} \in \mathbb{D}_{i}} \operatorname{Pr}\left\{\mathbf{r} \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\} \end{aligned}$$
With the help of Bayes’ rule $\operatorname{Pr}\left\{\mathbf{r} \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\}=\operatorname{Pr}\left\{\mathbf{b}=\mathbf{b}_{j} \mid \mathbf{r}\right\} \operatorname{Pr}\{\mathbf{r}\}$ and by changing the order of summation, we obtain
$$\begin{aligned} p_{\mathrm{err}} &=\sum_{i=1}^{M} \sum_{\mathbf{r} \in \mathbb{D}_{i}} \sum_{j \neq i} \operatorname{Pr}\left\{\mathbf{r} \wedge\left(\mathbf{b}=\mathbf{b}_{j}\right)\right\} \\ &=\sum_{i=1}^{M} \sum_{\mathbf{r} \in \mathbb{D}_{i}} \sum_{j \neq i} \operatorname{Pr}\left\{\mathbf{b}=\mathbf{b}_{j} \mid \mathbf{r}\right\} \operatorname{Pr}\{\mathbf{r}\} \end{aligned}$$
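This derivation implies that $p_{\mathrm{err}}$ is minimised by assigning each received word $\mathbf{r}$ to the decision region of the code word with the largest posterior probability $\operatorname{Pr}\left\{\mathbf{b}=\mathbf{b}_{j} \mid \mathbf{r}\right\}$. A minimal brute-force sketch, under the assumptions of equally likely code words and a binary symmetric channel with $\varepsilon = 0.1$ (both assumptions ours, for illustration), constructs these decision regions for the triple repetition code:

```python
from itertools import product

eps = 0.1                           # assumed BSC bit error probability
code = [(0, 0, 0), (1, 1, 1)]       # binary triple repetition code

def likelihood(r, b):
    """Pr{r | b} for a memoryless binary symmetric channel."""
    d = sum(1 for x, y in zip(r, b) if x != y)
    return (1 - eps) ** (len(r) - d) * eps ** d

# With equally likely code words, maximising the posterior Pr{b | r}
# is equivalent to maximising the likelihood Pr{r | b}.
regions = {b: [] for b in code}
for r in product((0, 1), repeat=3):
    best = max(code, key=lambda b: likelihood(r, b))
    regions[best].append(r)

# The decision regions partition the code space F_2^3 (8 words in total).
assert sum(len(rs) for rs in regions.values()) == 8
print(sorted(regions[(0, 0, 0)]))
# → [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

Each decision region turns out to be a Hamming sphere of radius 1 around its code word, which anticipates the minimum distance decoding rule of the next section.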

## Binary Symmetric Channel

In Section 1.2.3 we defined the binary symmetric channel as a memoryless channel with the conditional probabilities
$$\operatorname{Pr}\left\{r_{i} \mid b_{i}\right\}= \begin{cases} 1-\varepsilon, & r_{i}=b_{i} \\ \varepsilon, & r_{i} \neq b_{i} \end{cases}$$
with channel bit error probability $\varepsilon$. Since the binary symmetric channel is assumed to be memoryless, the conditional probability $\operatorname{Pr}\{\mathbf{r} \mid \mathbf{b}\}$ can be calculated for code word $\mathbf{b}=\left(b_{0}, b_{1}, \ldots, b_{n-1}\right)$ and received word $\mathbf{r}=\left(r_{0}, r_{1}, \ldots, r_{n-1}\right)$ according to
$$\operatorname{Pr}\{\mathbf{r} \mid \mathbf{b}\}=\prod_{i=0}^{n-1} \operatorname{Pr}\left\{r_{i} \mid b_{i}\right\}.$$
If the words $\mathbf{r}$ and $\mathbf{b}$ differ in $\operatorname{dist}(\mathbf{r}, \mathbf{b})$ symbols, this yields
$$\operatorname{Pr}\{\mathbf{r} \mid \mathbf{b}\}=(1-\varepsilon)^{n-\operatorname{dist}(\mathbf{r}, \mathbf{b})} \varepsilon^{\operatorname{dist}(\mathbf{r}, \mathbf{b})}=(1-\varepsilon)^{n}\left(\frac{\varepsilon}{1-\varepsilon}\right)^{\operatorname{dist}(\mathbf{r}, \mathbf{b})}.$$
Taking into account $0 \leq \varepsilon<\frac{1}{2}$ and therefore $\frac{\varepsilon}{1-\varepsilon}<1$, the maximum likelihood decoding (MLD) rule is given by
$$\hat{\mathbf{b}}(\mathbf{r})=\underset{\mathbf{b} \in \mathbb{B}}{\operatorname{argmax}} \operatorname{Pr}\{\mathbf{r} \mid \mathbf{b}\}=\underset{\mathbf{b} \in \mathbb{B}}{\operatorname{argmax}}(1-\varepsilon)^{n}\left(\frac{\varepsilon}{1-\varepsilon}\right)^{\operatorname{dist}(\mathbf{r}, \mathbf{b})}=\underset{\mathbf{b} \in \mathbb{B}}{\operatorname{argmin}} \operatorname{dist}(\mathbf{r}, \mathbf{b}),$$
i.e. for the binary symmetric channel the optimal maximum likelihood decoder (Bossert, 1999)
$$\hat{\mathbf{b}}(\mathbf{r})=\underset{\mathbf{b} \in \mathbb{B}}{\operatorname{argmin}} \operatorname{dist}(\mathbf{r}, \mathbf{b})$$
emits that particular code word which differs in the smallest number of components from the received word $\mathbf{r}$, i.e. which has the smallest Hamming distance to the received word $\mathbf{r}$ (see Figure 2.9). This decoding rule is called minimum distance decoding; it is also optimal for a $q$-nary symmetric channel (Neubauer, 2006b).

We now turn to the error probabilities for the binary symmetric channel during transmission, before decoding. The probability of $w$ errors at $w$ given positions within the $n$-dimensional binary received word $\mathbf{r}$ is given by $\varepsilon^{w}(1-\varepsilon)^{n-w}$. Since there are $\binom{n}{w}$ different possibilities

of choosing $w$ out of $n$ positions, the probability of $w$ errors at arbitrary positions within an $n$-dimensional binary received word follows the binomial distribution
$$\operatorname{Pr}\{w \text{ errors}\}=\binom{n}{w} \varepsilon^{w}(1-\varepsilon)^{n-w}$$
with mean $n \varepsilon$. Because of the condition $\varepsilon<\frac{1}{2}$, a given error pattern becomes less likely the more errors it contains; for sufficiently small $\varepsilon$ the probability $\operatorname{Pr}\{w \text{ errors}\}$ itself decreases with increasing number of errors $w$, i.e. few errors are more likely than many errors.

The probability of error-free transmission is $\operatorname{Pr}\{0 \text{ errors}\}=(1-\varepsilon)^{n}$, whereas the probability of a disturbed transmission with $\mathbf{r} \neq \mathbf{b}$ is given by
$$\operatorname{Pr}\{\mathbf{r} \neq \mathbf{b}\}=\sum_{w=1}^{n}\binom{n}{w} \varepsilon^{w}(1-\varepsilon)^{n-w}=1-(1-\varepsilon)^{n}.$$
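The minimum distance decoder and the transmission probabilities above can be sketched together for the triple repetition code; $\varepsilon = 0.1$ is an assumed value and the helper names are our own:

```python
from math import comb

eps = 0.1                            # assumed channel bit error probability
n = 3
code = [(0, 0, 0), (1, 1, 1)]        # binary triple repetition code

def dist(r, b):
    """Hamming distance between the received word r and code word b."""
    return sum(1 for x, y in zip(r, b) if x != y)

def decode(r):
    """Minimum distance decoding: the closest code word wins."""
    return min(code, key=lambda b: dist(r, b))

assert decode((0, 1, 0)) == (0, 0, 0)   # a single error is corrected
assert decode((1, 0, 1)) == (1, 1, 1)

# Pr{w errors} follows the binomial distribution with mean n * eps.
p = [comb(n, w) * eps**w * (1 - eps)**(n - w) for w in range(n + 1)]
assert abs(sum(p) - 1) < 1e-12                       # probabilities sum to 1
assert p[0] > p[1] > p[2] > p[3]                     # few errors more likely
assert abs(sum(p[1:]) - (1 - (1 - eps)**n)) < 1e-12  # Pr{r != b}
print([round(x, 4) for x in p])      # → [0.729, 0.243, 0.027, 0.001]
```

Note that the decoder only fails when two or three of the three bits are flipped, an event of probability $p_2 + p_3 = 0.028$ here, well below the raw channel error probability $\varepsilon = 0.1$.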

