### Information Theory: Run Length Encoding



## Run Length Encoding

Run-length encoding (RLE) is a technique used to reduce the size of a repeating string of characters. This repeating string is called a run. Typically, RLE encodes a run of symbols into two bytes: a count and a symbol. RLE can compress any type of data regardless of its information content, but the content of the data to be compressed affects the compression ratio. RLE cannot achieve the high compression ratios of other compression methods, but it is easy to implement and quick to execute. Run-length encoding is supported by most bitmap file formats, such as TIFF, JPG, BMP and PCX, and is also used by fax machines.
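As a minimal sketch of the count-plus-symbol idea (function names here are illustrative, not part of any particular file format), a run can be stored as a `(count, symbol)` pair:

```python
def rle_encode(data: bytes) -> list:
    """Collapse each run of identical bytes into a (count, symbol) pair."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        runs.append((j - i, data[i]))   # (run length, byte value)
        i = j
    return runs


def rle_decode(runs) -> bytes:
    """Expand (count, symbol) pairs back into the original byte string."""
    return b"".join(bytes([sym]) * count for count, sym in runs)
```

For example, `rle_encode(b"AAAABBBCCD")` yields `[(4, 65), (3, 66), (2, 67), (1, 68)]`, and decoding recovers the original string. Note that this naive pair encoding doubles the size of data with no runs, which motivates the flag-byte compromise discussed below.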

We will restrict ourselves to the portion of the PCX data stream that actually contains the coded image, and not the parts that store the color palette and image information such as the number of lines, pixels per line, and the coding method.

The basic scheme is as follows. If a string of pixels is identical in color value, encode it as a special flag byte containing the count, followed by a byte with the value of the repeated pixel. If a pixel is not repeated, simply encode it as the byte itself. Such simple schemes often become more complicated in practice. Consider that, in the above scheme, if all 256 colors in a palette are used in an image, then we need all 256 values of a byte to represent those colors. Hence, if we are going to use just bytes as our basic code unit, we have no unused byte values left to serve as a flag/count byte. On the other hand, if we use two bytes for every coded pixel to leave room for the flag/count combinations, we might double the size of pathological images instead of compressing them.
The compromise in the PCX format is based on the belief of its designers that many user-created drawings (which were the primary intended output of their software) would not use all 256 colors. So, they optimized their compression scheme for the case of up to 192 colors only. Images with more colors will probably still compress well under this scheme, just not quite as well.
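A sketch of this PCX-style compromise (a simplified single-plane version; the real format also handles scan-line padding and multiple planes): the 64 byte values with both top bits set, 0xC0 through 0xFF, act as flag/count bytes carrying a 6-bit run count, which leaves the remaining 192 values free to stand alone as literal pixels. Pixel values of 0xC0 or above must always be written as a run, even a run of one.

```python
def pcx_rle_encode(data: bytes) -> bytes:
    """PCX-style RLE: top-two-bits-set bytes (0xC0-0xFF) are flag/count
    bytes, so only the 192 values 0x00-0xBF may appear as literals."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 63:
            run += 1                 # 6-bit count: runs are capped at 63
        if run > 1 or data[i] >= 0xC0:
            out.append(0xC0 | run)   # flag bits + run count
            out.append(data[i])      # the repeated pixel value
        else:
            out.append(data[i])      # literal pixel, stored as itself
        i += run
    return bytes(out)


def pcx_rle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] >= 0xC0:          # flag/count byte
            out.extend([data[i + 1]] * (data[i] & 0x3F))
            i += 2
        else:                        # literal pixel
            out.append(data[i])
            i += 1
    return bytes(out)
```

For instance, five pixels of value 10 followed by one pixel of value 200 and one of value 20 encode to the bytes `C5 0A C1 C8 14`: the lone value 200 costs two bytes because it collides with the flag range, exactly the pathological case the 192-color optimization accepts.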

## Rate Distortion Function

Although we live in an analog world, most communication takes place in digital form. Since most natural sources (e.g., speech, video) are analog, they are first sampled, quantized and then processed. However, the representation of an arbitrary real number requires an infinite number of bits. Thus, a finite representation of a continuous random variable can never be perfect. Consider an analog message waveform $x(t)$ which is a sample waveform of a stochastic process $X(t)$. Assuming $X(t)$ is a bandlimited, stationary process, it can be represented by a sequence of uniform samples taken at the Nyquist rate. These samples are quantized in amplitude and encoded as a sequence of binary digits. A simple encoding strategy is to define $L$ levels and encode every sample using
$$\begin{aligned} &R=\log_{2} L \text { bits, if } L \text { is a power of } 2 \text {, or } \\ &R=\left\lfloor\log_{2} L\right\rfloor+1 \text { bits, if } L \text { is not a power of } 2 \end{aligned}$$
If all levels are not equally probable, we may use entropy coding for a more efficient representation. In order to represent the analog waveform more accurately, we need more levels, which implies more bits per sample. Theoretically, we need infinitely many bits per sample to perfectly represent an analog source. Quantization of amplitude results in data compression at the cost of signal distortion: it is a form of lossy data compression. Distortion implies some measure of the difference between the actual source samples $\left\{x_{k}\right\}$ and the corresponding quantized values $\left\{\tilde{x}_{k}\right\}$.
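The two cases of the rate formula above can be checked with a few lines of Python:

```python
import math


def bits_per_sample(L: int) -> int:
    """R = log2(L) if L is a power of 2, else floor(log2(L)) + 1."""
    if L & (L - 1) == 0:                 # power of two: log2(L) is an integer
        return int(math.log2(L))
    return math.floor(math.log2(L)) + 1
```

For example, $L=8$ levels need 3 bits per sample, while $L=10$ levels need 4, since 3 bits can only index 8 distinct values.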

## Optimum Quantizer Design

In this section, we look at optimum quantizer design. Consider a continuous-amplitude signal whose amplitude is not uniformly distributed but varies according to a probability density function $p(x)$. We wish to design the optimum scalar quantizer that minimizes some function of the quantization error $q=\tilde{x}-x$, where $\tilde{x}$ is the quantized value of $x$. The distortion resulting from the quantization can be expressed as
$$D=\int_{-\infty}^{\infty} f(\tilde{x}-x) p(x) d x$$
where $f(\tilde{x}-x)$ is the desired function of the error. An optimum quantizer is one that minimizes $D$ by optimally selecting the output levels and the corresponding input range of each output level. The resulting optimum quantizer is called the Lloyd-Max quantizer. For an $L$-level quantizer the distortion is given by
$$D=\sum_{k=1}^{L} \int_{x_{k-1}}^{x_{k}} f\left(\tilde{x}_{k}-x\right) p(x) d x$$
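To make this sum concrete, each integral can be evaluated numerically. The sketch below specializes to the mean-square error $f(e)=e^2$ and assumes a standard Gaussian density and illustrative cell boundaries; it approximates each integral by a midpoint Riemann sum over a truncated range:

```python
import math


def mse_distortion(bounds, levels, p, lo=-10.0, hi=10.0, n=100_000):
    """Approximate D = sum_k integral_{x_{k-1}}^{x_k} (xt_k - x)^2 p(x) dx
    by a midpoint Riemann sum over [lo, hi]."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        k = sum(x > b for b in bounds)        # index of the cell containing x
        total += (levels[k] - x) ** 2 * p(x) * dx
    return total


# Example: 1-bit (L = 2) quantizer of a standard Gaussian source,
# boundary at 0 and output levels at +/- sqrt(2/pi) (the known optimum).
gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
a = math.sqrt(2 / math.pi)
D = mse_distortion([0.0], [-a, a], gauss)
```

This reproduces the classical result $D = 1 - 2/\pi \approx 0.363$ for the optimal one-bit quantizer of a unit-variance Gaussian source.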

The necessary conditions for minimum distortion are obtained by differentiating $D$ with respect to $\left\{x_{k}\right\}$ and $\left\{\tilde{x}_{k}\right\}$. As a result of the differentiation process we end up with the following system of equations
$$\begin{array}{ll} f\left(\tilde{x}_{k}-x_{k}\right)=f\left(\tilde{x}_{k+1}-x_{k}\right), & k=1,2, \ldots, L-1 \\ \int_{x_{k-1}}^{x_{k}} f^{\prime}\left(\tilde{x}_{k}-x\right) p(x)\, d x=0, & k=1,2, \ldots, L \end{array}$$
For $f(x)=x^{2}$, i.e., the mean square value of the distortion, the above equations simplify to
$$\begin{array}{ll} x_{k}=\frac{1}{2}\left(\tilde{x}_{k}+\tilde{x}_{k+1}\right), & k=1,2, \ldots, L-1 \\ \int_{x_{k-1}}^{x_{k}}\left(\tilde{x}_{k}-x\right) p(x)\, d x=0, & k=1,2, \ldots, L \end{array}$$
Nonuniform quantizers are optimized with respect to the distortion. However, each quantized sample is represented by an equal number of bits (say, $R$ bits/sample). A more efficient variable-length coding is possible. The discrete source outputs that result from quantization can be characterized by a set of probabilities $\left\{p_{k}\right\}$. These probabilities can then be used to design efficient variable-length codes (source coding). In order to compare the performance of different nonuniform quantizers, we first fix the distortion, $D$, and then compare the average number of bits required per sample.
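The two mean-square conditions suggest the familiar iterative design: alternately place each boundary midway between adjacent output levels, then move each output level to the centroid (conditional mean) of its cell. A minimal sketch, approximating $p(x)$ by a list of empirical samples (the function name and initialization are illustrative):

```python
def lloyd_max(samples, L, iters=50):
    """Iterative Lloyd-Max design of an L-level MSE-optimal scalar
    quantizer, with p(x) approximated by empirical samples."""
    samples = sorted(samples)
    # initial output levels: evenly spaced sample quantiles
    levels = [samples[(2 * k + 1) * len(samples) // (2 * L)] for k in range(L)]
    bounds = []
    for _ in range(iters):
        # condition 1: each boundary is the midpoint of adjacent levels
        bounds = [(levels[k] + levels[k + 1]) / 2 for k in range(L - 1)]
        # condition 2: each output level is the centroid of its cell
        cells = [[] for _ in range(L)]
        for x in samples:
            cells[sum(x > b for b in bounds)].append(x)
        levels = [sum(c) / len(c) if c else levels[k]
                  for k, c in enumerate(cells)]
    return levels, bounds
```

As a sanity check, for samples drawn uniformly from $[0,1)$ with $L=2$ the iteration converges to output levels near $0.25$ and $0.75$ with the single boundary near $0.5$, as expected by symmetry.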

