## Statistical Inference | MAST90100

statistics-lab™ supports your study-abroad journey. We have built a solid reputation for Statistical inference coursework, guaranteeing reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with Statistical inference assignments of every kind.

• Statistical Inference
• Statistical Computing
• Advanced Probability Theory
• Advanced Mathematical Statistics
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## Statistical Inference | Experimental vs. Observational Data

In most sciences, such as physics, chemistry, geology, and biology, the observed data are often generated by the modelers themselves in well-designed experiments. In econometrics the modeler is often faced with observational as opposed to experimental data. This has two important implications for empirical modeling. First, the modeler needs to develop better skills in validating the model assumptions, because random (IID) sample realizations are rare with observational data. Second, the separation of the data collector and the data analyst requires the modeler to examine thoroughly the nature and structure of the data in question.

In economics, along with the constant accumulation of observational data grew the demand to analyze these data series with a view to better understanding economic phenomena such as inflation, unemployment, exchange rate fluctuations, and the business cycle, as well as improving our ability to forecast economic activity. A first step toward attaining these objectives is to study the available data by being able to answer questions such as:

(i) How were the data collected and compiled?
(ii) What is the subject of measurement and what do the numbers measure?
(iii) What are the measurement units and scale?
(iv) What is the measurement period?
(v) What is the link between the data and any corresponding theoretical concepts?

## Statistical Inference | Observed Data and the Nature of a Statistical Model

A data set comprising $n$ observations will be denoted by $\mathbf{x}_{0}:=\left(x_{1}, x_{2}, \ldots, x_{n}\right)$.
REMARK: It is crucial to emphasize the value of mathematical symbolism when one is discussing probability theory. The clarity and concision this symbolism introduces to the discussion is indispensable.
It is common to classify economic data according to the observation units:
(i) Cross-section $\left\{x_{k}, k=1,2, \ldots, n\right\}$, where $k$ denotes individuals (firms, states, etc.);
(ii) Time series $\left\{x_{t}, t=1,2, \ldots, T\right\}$, where $t$ denotes time (weeks, months, years, etc.).
For example, observed data on consumption might refer to consumption of different households at the same point in time or aggregate consumption (consumers’ expenditure) over time. The first will constitute cross-section, the second time-series data. By combining these two (e.g. observing the consumption of the same households over time), we can define a third category:
(iii) Panel (longitudinal) $\left\{x_{\mathbf{k}}, \mathbf{k}:=(k, t), k=1,2, \ldots, n, t=1,2, \ldots, T\right\}$, where $k$ and $t$ denote the indices for individuals and time, respectively.
NOTE: In this category the index $\mathbf{k}$ is two-dimensional, but $x_{\mathbf{k}}$ is one-dimensional.

## Finite Element Method Assignment Help

statistics-lab, as a professional service provider for international students, has for many years offered academic services to students in popular study destinations such as the United States, the United Kingdom, Canada, and Australia, including but not limited to essays, assignments, dissertations, reports, group projects, proposals, papers, presentations, computer science assignments, proofreading and editing, online course assistance, and exam support. Our services cover every stage of overseas study, from high school through undergraduate and graduate levels, and span 99% of subjects worldwide, including finance, economics, accounting, auditing, and management. Our writing team includes native English speakers as well as graduate students from leading universities abroad, each with strong language skills, a solid disciplinary background, and academic writing experience. We promise 100% originality, 100% professionalism, 100% punctuality, and 100% satisfaction.

## MATLAB Assignment Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include mathematical and computational algorithm development; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including building graphical user interfaces. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran. The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over the years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users, toolboxes let you learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, and simulation, among others.

## Statistical Inference | MAST20005

## Statistical Inference | Chance Regularity Patterns and Real-World Phenomena

In the case of the experiment of casting two dice, the chance mechanism is explicit and most people will be willing to accept on faith that if this experiment is actually performed properly, then the chance regularity patterns of IID will be present. The question that naturally arises is whether data generated by real-world stochastic phenomena also exhibit such patterns. It is argued that the overwhelming majority of observable phenomena in many disciplines can be viewed as stochastic, and thus amenable to statistical modeling.

Example 1.4 Consider an example from economics where the t-plot of $X=\Delta \ln (E R)$, i.e. log-changes of the Canadian/US dollar exchange rate (ER), for the period 1973-1991 (weekly observations) is shown in Figure 1.6.

What is interesting about the data in Figure 1.6 is the fact that they exhibit a number of chance regularity patterns very similar to those exhibited by the dice observations in Figure 1.1, but some additional patterns are also discernible. The regularity patterns exhibited by both sets of data are:
(a) the arithmetic average over the ordering (time) appears to be constant;
(b) the band of variation around this average appears to be relatively constant.
In contrast to the data in Figure 1.2, the distributional pattern exhibited by the data in Figure 1.5 is not triangular. Instead:
(c) the graph of the relative frequencies (histogram) in Figure 1.7 exhibits a certain bell-shaped symmetry. The Normal density is inserted in order to show that it does not fit well at the tails, in the mid-section, or at the top, which is much higher than the Normal curve. As argued in Chapter 5, Student's $t$ provides a more appropriate distribution for these data; see Figures 3.23 and 3.24. In addition, the data in Figure 1.6 exhibit another regularity pattern:
(d) there is a sequence of clusters of small and big changes in succession.
At this stage the reader might not be convinced that the features noted above are easily discernible from t-plots. An important dimension of modeling in this book is learning how to read the systematic information in data plots, a discussion that begins in Chapter 5.
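Pattern (a) can be checked numerically on a simulated series. The data below are generated as a random walk in logs with made-up volatility (an assumption for illustration, not the actual exchange-rate data of Figure 1.6); the sketch shows how one might compute the log-changes and compare sample means across subsamples.

```python
import random

random.seed(0)

# Simulated log exchange-rate series: a random walk in logs with made-up
# volatility (an assumption; NOT the actual Canadian/US dollar data).
log_er = [0.0]
for _ in range(500):
    log_er.append(log_er[-1] + random.gauss(0.0, 0.01))

# Log-changes X_t = delta ln(ER_t); under this assumed DGM they are IID Normal.
x = [b - a for a, b in zip(log_er, log_er[1:])]

# Pattern (a): the arithmetic average appears constant over the ordering.
# Compare the sample mean over the first and second halves of the series.
half = len(x) // 2
mean_1 = sum(x[:half]) / half
mean_2 = sum(x[half:]) / (len(x) - half)
print(abs(mean_1 - mean_2))  # small relative to the band of variation
```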

## Statistical Inference | Chance Regularities and Statistical Models

Motivated by the desire to account for (model) these chance regularities, we look to probability theory to find ways to formalize them in terms of probabilistic concepts. In particular, the stable relative frequencies regularity pattern (Tables 1.3-1.5) will be formalized using the concept of a probability distribution (see Chapter 5). The unpredictability pattern will be related to the concept of Independence ([2]), and the approximate "sameness" pattern to the Homogeneity (ID) concept ([3]). To render statistical model specification easier, the probabilistic concepts aiming to "model" the chance regularities can be viewed as belonging to three broad categories:

These broad categories can be seen as defining the basic components of a statistical model in the sense that every statistical model is a blend of components from all three categories. The first recommendation to keep in mind in empirical modeling is:

1. A statistical model is simply a set of (internally) consistent probabilistic assumptions from the three broad categories (D), (M), and (H), defining a stochastic generating mechanism that could have given rise to the particular data.

The statistical model is chosen to represent a description of a chance mechanism that accounts for the systematic information (the chance regularities) in the data. The distinguishing feature of a statistical model is that it specifies a situation, a mechanism, or a process in terms of a certain probabilistic structure. The main objective of Chapters 2-8 is to introduce numerous probabilistic concepts and ideas that render the choice of an appropriate statistical model an educated guess and not a hit-or-miss selection.

## Statistical Inference | STAT3923

## Statistical Inference | Chance Regularity Patterns

Chance regularities are patterns that are usually revealed using a variety of graphical techniques and careful preliminary data analysis. The essence of chance regularity, as the term itself suggests, comes in the form of two entwined features:
chance: an inherent uncertainty relating to the occurrence of particular outcomes;
regularity: discernible regularities associated with an aggregate of many outcomes.
TERMINOLOGY: The term "chance regularity" is used in order to avoid possible confusion with the more commonly used term "randomness."

At first sight these two attributes might appear to be contradictory, since “chance” is often understood as the absence of order and “regularity” denotes the presence of order. However, there is no contradiction because the “disorder” exists at the level of individual outcomes and the order at the aggregate level. The two attributes should be viewed as inseparable for the notion of chance regularity to make sense.

A glance at Table 1.1 suggests that the observed data constitute integers between 2 and 12, but no real patterns are apparent, at least at first sight. To bring out any chance regularity patterns we use a graph as shown in Figure 1.1, the t-plot: $\left\{\left(t, x_{t}\right), t=1,2, \ldots, n\right\}$.

The first distinction to be drawn is that between chance regularity patterns and deterministic regularities, which are easy to detect.

## Statistical Inference | From Chance Regularities to Probabilities

The question that naturally arises is whether the available substantive information pertaining to the mechanism that gave rise to the data in Figure 1.1 would affect the choice of a statistical model. Common sense suggests that it should, but it is not clear what its role should be. Let us discuss that issue in more detail.

The actual data-generating mechanism (DGM). It turns out that the data in Table 1.1 were generated by a sequence of $n=100$ trials of casting two dice and adding the dots of the two sides facing up. This game of chance was very popular in medieval times and a favorite pastime of soldiers waiting for weeks on end outside the walls of European cities they had under siege, looking for the right opportunity to assail them. After thousands of trials these illiterate soldiers learned empirically (folk knowledge) that the number 7 occurs more often than any other number, and that 6 occurs less often than 7 but more often than 5; 2 and 12 occur the least often. One can argue that these soldiers had an instinctive understanding of the empirical relative frequencies summarized by the histogram in Figure 1.3.

In this subsection we will attempt to reconstruct how this intuition was developed into something more systematic using mathematization tools that eventually led to probability theory. Historically, the initial step from the observed regularities to their probabilistic formalization was very slow in the making, taking centuries to materialize; see Chapter 2.
The first crucial feature of the generating mechanism is its stochastic nature: at each trial (the casting of two dice), the outcome (the sum of the dots of the sides facing up) cannot be predicted with any certainty. The only thing one can say with certainty is that the result of each trial will be one of the numbers $\{2,3,4,5,6,7,8,9,10,11,12\}$. It is also known that these numbers do not occur equally often in this game of chance.
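The soldiers' folk knowledge can be reproduced by enumerating the 36 equally likely elementary outcomes; the short Python sketch below computes the exact relative frequencies of the sums.

```python
from collections import Counter
from itertools import product

# The chance mechanism behind Table 1.1: cast two fair dice and sum the dots.
# Enumerating the 36 equally likely outcomes gives the exact probabilities
# that the soldiers discovered empirically.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {total: c / 36 for total, c in sorted(counts.items())}

# 7 is the most likely sum (6/36); 2 and 12 are the least likely (1/36 each).
for total, p in probs.items():
    print(total, round(p, 4))
```

The triangular shape of these probabilities is exactly the distributional pattern the histogram in Figure 1.3 approximates.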

## Statistical Inference | STAT3923

## Statistical Inference | Penalized ℓ1 Recovery

Penalized $\ell_{1}$ recovery of signal $x$ from its observation (1.1) is
$$\widehat{x}_{\text{pen}}(y) \in \operatorname*{Argmin}_{u}\left\{\|u\|_{1}+\lambda\left\|H^{T}(A u-y)\right\|\right\},$$
where $H \in \mathbf{R}^{m \times N}$, a norm $\|\cdot\|$ on $\mathbf{R}^{N}$, and a positive real $\lambda$ are parameters of the construction.

Theorem 1.5. Given $A$, a positive integer $s$, and $q \in[1, \infty]$, assume that $(H,\|\cdot\|)$ satisfies the conditions $\mathbf{Q}_{q}(s, \kappa)$ and $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ and $\kappa \geq \varkappa$. Then:
(i) Let $\lambda \geq 2 s$. Then for all $x \in \mathbf{R}^{n}, y \in \mathbf{R}^{m}$ it holds that
$$\left\|\widehat{x}_{\text{pen}}(y)-x\right\|_{p} \leq \frac{4 \lambda^{\frac{1}{p}}}{1-2 \varkappa}\left[1+\frac{\kappa \lambda}{2 s}-\varkappa\right]^{\frac{q(p-1)}{p(q-1)}}\left[\left\|H^{T}(A x-y)\right\|+\frac{\left\|x-x^{s}\right\|_{1}}{2 s}\right], \quad 1 \leq p \leq q .$$
In particular, with $\lambda=2 s$ we have
$$\left\|\widehat{x}_{\text{pen}}(y)-x\right\|_{p} \leq \frac{4(2 s)^{\frac{1}{p}}}{1-2 \varkappa}[1+\kappa-\varkappa]^{\frac{q(p-1)}{p(q-1)}}\left[\left\|H^{T}(A x-y)\right\|+\frac{\left\|x-x^{s}\right\|_{1}}{2 s}\right], \quad 1 \leq p \leq q .$$
(ii) Let $\rho \geq 0$, and let $\Xi_{\rho}$ be given by (1.14). Then for all $x \in \mathbf{R}^{n}$ and all $\eta \in \Xi_{\rho}$ one has:
$$\lambda \geq 2 s \;\Rightarrow\; \left\|\widehat{x}_{\text{pen}}(A x+\eta)-x\right\|_{p} \leq \frac{4 \lambda^{\frac{1}{p}}}{1-2 \varkappa}\left[1+\frac{\kappa \lambda}{2 s}-\varkappa\right]^{\frac{q(p-1)}{p(q-1)}}\left[\rho+\frac{\left\|x-x^{s}\right\|_{1}}{2 s}\right], \quad 1 \leq p \leq q ;$$
$$\lambda=2 s \;\Rightarrow\; \left\|\widehat{x}_{\text{pen}}(A x+\eta)-x\right\|_{p} \leq \frac{4(2 s)^{\frac{1}{p}}}{1-2 \varkappa}[1+\kappa-\varkappa]^{\frac{q(p-1)}{p(q-1)}}\left[\rho+\frac{\left\|x-x^{s}\right\|_{1}}{2 s}\right], \quad 1 \leq p \leq q .$$
For the proof, see Section 1.5.2.

## Statistical Inference | VERIFIABILITY AND TRACTABILITY ISSUES

The good news about $\ell_{1}$ recovery stated in Theorems 1.3, 1.4, and 1.5 is "conditional": we assume that we are smart enough to point out a pair $(H,\|\cdot\|)$ satisfying condition $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ (and condition $\mathbf{Q}_{q}(s, \kappa)$ with a "moderate" $\kappa$). ${ }^{8}$ The related issues are twofold:

1. First, we do not know in which range of $s, m$, and $n$ these conditions (or even the nullspace property, which is weaker than $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$) can be satisfied; and without the nullspace property, $\ell_{1}$ minimization becomes useless, at least when we want to guarantee its validity whatever the $s$-sparse signal we want to recover;
2. Second, it is unclear how to verify whether a given sensing matrix $A$ satisfies the nullspace property for a given $s$, or whether a given pair $(H,\|\cdot\|)$ satisfies the condition $\mathbf{Q}_{q}(s, \kappa)$ with given parameters.
What is known about these crucial issues can be outlined as follows.
3. It is known that for given $m, n$ with $m \ll n$ (say, $m / n \leq 1 / 2$), there exist $m \times n$ sensing matrices which are $s$-good for values of $s$ "nearly as large as $m$," specifically, for $s \leq O(1) \frac{m}{\ln (n / m)}$. ${ }^{9}$ Moreover, there are natural families of matrices where this level of goodness "is the rule." E.g., when drawing an $m \times n$ matrix at random from Gaussian or Rademacher distributions (i.e., when filling the matrix with independent realizations of a random variable which is either a standard (zero mean, unit variance) Gaussian one, or takes values $\pm 1$ with probabilities $0.5$), the result will be $s$-good, for the outlined value of $s$, with probability approaching 1 as $m$ and $n$ grow. All this remains true when, instead of speaking about matrices $A$ satisfying "plain" nullspace properties, we speak about matrices $A$ for which it is easy to point out a pair $(H,\|\cdot\|)$ satisfying the condition $\mathbf{Q}_{2}(s, \varkappa)$ with, say, $\varkappa=1 / 4$.

The above results can be considered good news. The bad news is that we do not know how to check efficiently, given an $s$ and a sensing matrix $A$, that the matrix is $s$-good, just as we do not know how to check that $A$ admits good (i.e., satisfying $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$) pairs $(H,\|\cdot\|)$. Even worse: we do not know an efficient recipe allowing us to build, given $m$, an $m \times 2 m$ matrix $A^{m}$ which is provably $s$-good for $s$ larger than $O(1) \sqrt{m}$, which is a much smaller "level of goodness" than the one promised by theory for randomly generated matrices. ${ }^{10}$ The "common life" analogy of this situation would be as follows: you know that $90 \%$ of the bricks in your wall are made of gold, and at the same time, you do not know how to tell a golden brick from a usual one.

## Statistical Inference | STAT3013

## Statistical Inference | Compressed Sensing via ℓ1 Minimization: Motivation

In principle there is nothing surprising in the fact that under reasonable assumptions on the $m \times n$ sensing matrix $A$ we may hope to recover from noisy observations of $A x$ an $s$-sparse signal $x$, with $s \ll m$. Indeed, assume for the sake of simplicity that there are no observation errors, and let $\operatorname{Col}_{j}[A]$ be the $j$-th column in $A$. If we knew the locations $j_{1}<j_{2}<\ldots<j_{s}$ of the nonzero entries in $x$, identifying $x$ could be reduced to solving the system of linear equations $\sum_{\ell=1}^{s} x_{j_{\ell}} \operatorname{Col}_{j_{\ell}}[A]=y$ with $m$ equations and $s \ll m$ unknowns; assuming every $s$ columns in $A$ to be linearly independent (a quite unrestrictive assumption on a matrix with $m \geq s$ rows), the solution to the above system is unique, and is exactly the signal we are looking for. Of course, the assumption that we know the locations of the nonzeros in $x$ makes the recovery problem completely trivial. However, it suggests the following course of action: given the noiseless observation $y=A x$ of an $s$-sparse signal $x$, let us solve the combinatorial optimization problem
$$\min _{z}\left\{\|z\|_{0}: A z=y\right\}, \qquad (1.2)$$
where $\|z\|_{0}$ is the number of nonzero entries in $z$. Clearly, the problem has a solution with the value of the objective at most $s$. Moreover, it is immediately seen that if every $2 s$ columns in $A$ are linearly independent (which again is a very unrestrictive assumption on the matrix $A$, provided that $m \geq 2 s$), then the true signal $x$ is the unique optimal solution to (1.2).
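For a toy instance, problem (1.2) can be attacked by brute force: search over supports of increasing size and accept the first one that reproduces $y$ exactly. The sketch below uses an arbitrary Gaussian sensing matrix and a made-up 2-sparse signal; it is illustrative only, since the search is exponential in $n$, which is exactly why the $\ell_{1}$ relaxation is needed.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Tiny made-up instance: recover a 2-sparse x from noiseless y = A x by
# brute-force search over supports of growing size (problem (1.2)).
m, n = 4, 8
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[1, 5]] = [1.5, -2.0]
y = A @ x_true

def l0_recover(A, y, tol=1e-8):
    """Smallest-support z with A z = y; exponential in n, illustration only."""
    n = A.shape[1]
    if np.linalg.norm(y) < tol:
        return np.zeros(n)
    for k in range(1, n + 1):                    # smallest supports first
        for support in combinations(range(n), k):
            cols = A[:, list(support)]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ coef - y) < tol:
                z = np.zeros(n)
                z[list(support)] = coef
                return z
    return None

z = l0_recover(A, y)
print(np.allclose(z, x_true))
```

Since every $2s=4$ columns of a generic Gaussian $A$ are linearly independent here, the true signal is the unique minimizer, and the search recovers it exactly.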

## Statistical Inference | Validity of ℓ1 Minimization in the Noiseless Case

The minimal requirement on the sensing matrix $A$ which makes $\ell_{1}$ minimization valid is that it guarantee the correct recovery of exactly $s$-sparse signals in the noiseless case, and we start by investigating this property.
1.2.1.1 Notational convention
From now on, for a vector $x \in \mathbf{R}^{n}$

• $I_{x}=\left\{j: x_{j} \neq 0\right\}$ stands for the support of $x$; we also set
$$I_{x}^{+}=\left\{j: x_{j}>0\right\}, \quad I_{x}^{-}=\left\{j: x_{j}<0\right\} \quad\left[\Rightarrow I_{x}=I_{x}^{+} \cup I_{x}^{-}\right];$$
• for a subset $I$ of the index set $\{1, \ldots, n\}$, $x_{I}$ stands for the vector obtained from $x$ by zeroing out the entries with indices not in $I$, and $I^{o}$ for the complement of $I$:
$$I^{o}=\{i \in\{1, \ldots, n\}: i \notin I\};$$
• for $s \leq n$, $x^{s}$ stands for the vector obtained from $x$ by zeroing out all but the $s$ entries largest in magnitude. ${ }^{5}$ Note that $x^{s}$ is the best $s$-sparse approximation of $x$ in all $\ell_{p}$ norms, $1 \leq p \leq \infty$;
• for $s \leq n$ and $p \in[1, \infty]$, we set
$$\|x\|_{s, p}=\left\|x^{s}\right\|_{p};$$
note that $\|\cdot\|_{s, p}$ is a norm.
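A minimal Python sketch of this notation (the function names are ours, not the book's): `sparse_approx` computes $x^{s}$ and `norm_sp` computes $\|x\|_{s, p}$.

```python
# Hypothetical helpers mirroring the notation above:
# sparse_approx(x, s) returns x^s, the best s-sparse approximation of x;
# norm_sp(x, s, p) returns ||x||_{s,p} = ||x^s||_p.
def sparse_approx(x, s):
    order = sorted(range(len(x)), key=lambda j: abs(x[j]), reverse=True)
    keep = set(order[:s])  # indices of the s entries largest in magnitude
    return [xj if j in keep else 0.0 for j, xj in enumerate(x)]

def norm_sp(x, s, p):
    xs = sparse_approx(x, s)
    if p == float("inf"):
        return max(abs(v) for v in xs)
    return sum(abs(v) ** p for v in xs) ** (1.0 / p)

x = [0.5, -3.0, 0.0, 2.0, -0.25]
print(sparse_approx(x, 2))   # keeps -3.0 and 2.0, zeroes the rest
print(norm_sp(x, 2, 1))      # |-3.0| + |2.0| = 5.0
```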

## Statistical Inference | STAT 3023

## Statistical Inference | Signal Recovery Problem

One of the basic problems in Signal Processing is the problem of recovering a signal $x \in \mathbf{R}^{n}$ from noisy observations
$$y=A x+\eta \qquad (1.1)$$
of a linear image of the signal under a given sensing mapping $x \mapsto A x: \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$; in (1.1), $\eta$ is the observation error. Matrix $A$ in (1.1) is called the sensing matrix.
Recovery problems of the outlined types arise in many applications, including, but by far not reducing to,

• communications, where $x$ is the signal sent by the transmitter, $y$ is the signal recorded by the receiver, and $A$ represents the communication channel (reflecting, e.g., the dependence of the decay in the signals' amplitude on the transmitter-receiver distance); here $\eta$ typically is modeled as the standard (zero mean, unit covariance matrix) $m$-dimensional Gaussian noise; ${ }^{1}$
• image reconstruction, where the signal $x$ is an image (a 2D array in usual photography, or a 3D array in tomography) and $y$ is the data acquired by the imaging device. Here $\eta$ in many cases (although not always) can again be modeled as the standard Gaussian noise;
• linear regression, arising in a wide range of applications. In linear regression, one is given $m$ pairs of an input $a^{i} \in \mathbf{R}^{n}$ to a "black box" and its output $y_{i} \in \mathbf{R}$. Sometimes we have reason to believe that the output is a corrupted-by-noise version of the "existing in nature," but unobservable, "ideal output" $y_{i}^{*}=x^{T} a^{i}$, which is just a linear function of the input (this is called the "linear regression model," with the inputs $a^{i}$ called "regressors"). Our goal is to convert the actual observations $\left(a^{i}, y_{i}\right), 1 \leq i \leq m$, into estimates of the unknown "true" vector of parameters $x$. Denoting by $A$ the matrix with rows $\left[a^{i}\right]^{T}$ and assembling the individual observations $y_{i}$ into a single observation $y=\left[y_{1}, \ldots, y_{m}\right] \in \mathbf{R}^{m}$, we arrive at the problem of recovering the vector $x$ from noisy observations of $A x$. Here again the most popular model for $\eta$ is the standard Gaussian noise.
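The regression example can be sketched in a few lines of NumPy with synthetic data and made-up dimensions: with $m \gg n$ and standard Gaussian noise, ordinary least squares recovers the parameter vector up to an error that shrinks like $1 / \sqrt{m}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression cast as signal recovery (made-up dimensions):
# rows of A are the regressors a^i, and y = A x + eta with Gaussian noise.
m, n = 200, 3
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
eta = rng.standard_normal(m)          # standard Gaussian observation noise
y = A @ x_true + eta

# With m >> n, ordinary least squares recovers x up to O(1/sqrt(m)) error.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(x_hat - x_true))
```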

## 统计代写|统计推断代写Statistical inference代考|Signal Recovery: Parametric and nonparametric cases

Recovering signal $x$ from observation $y$ would be easy if there were no observation noise $(\eta=0)$ and the rank of matrix $A$ were equal to the dimension $n$ of the signals. In this case, which arises only when $m \geq n$ (“more observations than unknown parameters”), and is typical in this range of $m$ and $n$, the desired $x$ would be the unique solution to the system of linear equations, and to find $x$ would be a simple problem of Linear Algebra. Aside from this trivial “enough observations, no noise” case, people over the years have looked at the following two versions of the recovery problem:

Parametric case: $m \gg n, \eta$ is nontrivial noise with zero mean, say, standard Gaussian. This is the classical statistical setup with the emphasis on how to use numerous available observations in order to suppress in the recovery, to the extent possible, the influence of observation noise.

Nonparametric case: $m \ll n .^{2}$ If addressed literally, this case seems to be senseless: when the number of observations is less than the number of unknown parameters, even in the noiseless case we arrive at the necessity to solve an underdetermined (fewer equations than unknowns) system of linear equations. Linear Algebra says that if solvable, the system has infinitely many solutions. Moreover, the solution set (an affine subspace of positive dimension) is unbounded, meaning that the solutions are in no sense close to each other. A typical way to make the case of $m \ll n$ meaningful is to add to the observations (1.1) some a priori information about the signal. In traditional Nonparametric Statistics, this additional information is summarized in a bounded convex set $X \subset \mathbf{R}^{n}$, given to us in advance, known to contain the true signal $x$. This set usually is such that every signal $x \in X$ can be approximated by a linear combination of $s=1,2, \ldots, n$ vectors from a properly selected basis known to us in advance (“dictionary” in the slang of signal processing) within accuracy $\delta(s)$, where $\delta(s)$ is a function, known in advance, approaching 0 as $s \rightarrow \infty$. In this situation, with appropriate $A$ (e.g., just the unit matrix, as in the denoising problem), we can select some $s \ll m$ and try to recover $x$ as if it were a vector from the linear span $E_{s}$ of the first $s$ vectors of the outlined basis $[54,86,124,112,208]$.
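The “recover $x$ as if it lay in the span $E_s$” recipe is easiest to illustrate on the denoising problem ($A=I$): project the noisy observation onto the first $s$ vectors of an orthonormal basis. The basis (an orthonormal DCT-II), the smooth test signal, and all sizes below are illustrative assumptions.

```python
import numpy as np

# Denoising sketch: y = x + eta; recover x by projecting y onto the span E_s
# of the first s vectors of an orthonormal "dictionary".  Basis, signal, and
# sizes are illustrative assumptions.
rng = np.random.default_rng(0)
n, s, sigma = 256, 8, 0.5
t = np.arange(n)

# Orthonormal DCT-II basis: column k is a cosine of frequency index k.
B = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n) * np.sqrt(2.0 / n)
B[:, 0] = np.sqrt(1.0 / n)

x_clean = np.sin(2 * np.pi * t / n) + 0.5 * np.cos(6 * np.pi * t / n)  # smooth signal
y = x_clean + sigma * rng.standard_normal(n)                           # noisy observation

Bs = B[:, :s]                       # first s basis vectors span E_s
x_hat = Bs @ (Bs.T @ y)             # orthogonal projection of y onto E_s

err_raw = float(np.linalg.norm(y - x_clean))       # error of doing nothing
err_proj = float(np.linalg.norm(x_hat - x_clean))  # bias + noise in s coordinates
```

The projection keeps only $s$ of the $n$ noise coordinates, so for a smooth signal (rapidly decaying coefficients, i.e., small bias) `err_proj` is far below `err_raw`.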


## 有限元方法代写

tatistics-lab作为专业的留学生服务机构，多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务，包括但不限于Essay代写，Assignment代写，Dissertation代写，Report代写，小组作业代写，Proposal代写，Paper代写，Presentation代写，计算机作业代写，论文修改和润色，网课代做，exam代考等等。写作范围涵盖高中，本科，研究生等海外留学全阶段，辐射金融，经济学，会计学，审计学，管理学等全球99%专业科目。写作团队既有专业英语母语作者，也有海外名校硕博留学生，每位写作老师都拥有过硬的语言能力，专业的学科背景和学术写作经验。我们承诺100%原创，100%专业，100%准时，100%满意。

## MATLAB代写

MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中，其中问题和解决方案以熟悉的数学符号表示。典型用途包括：数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发，包括图形用户界面构建MATLAB 是一个交互式系统，其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题，尤其是那些具有矩阵和向量公式的问题，而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问，这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展，得到了许多用户的投入。在大学环境中，它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域，MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要，工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数（M 文件）的综合集合，可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。

## 统计代写|统计推断代写Statistical inference代考|STATS 2107


## 统计代写|统计推断代写Statistical inference代考|VERIFIABILITY AND TRACTABILITY ISSUES

The good news about $\ell_{1}$ recovery stated in Theorems $1.3$, $1.4$, and $1.5$ is “conditional”: we assume that we are smart enough to point out a pair $(H,|\cdot|)$ satisfying condition $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ (and condition $\mathbf{Q}_{q}(s, \kappa)$ with a “moderate” $\kappa$ ${ }^{8}$). The related issues are twofold:

1. First, we do not know in which range of $s$, $m$, and $n$ these conditions, or even the weaker nullspace property (weaker than $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$), can be satisfied; and without the nullspace property, $\ell_{1}$ minimization becomes useless, at least when we want to guarantee its validity whatever be the $s$-sparse signal we want to recover;
2. Second, it is unclear how to verify whether a given sensing matrix $A$ satisfies the nullspace property for a given $s$, or a given pair $(H,|\cdot|)$ satisfies the condition $\mathbf{Q}_{q}(s, \kappa)$ with given parameters.
What is known about these crucial issues can be outlined as follows.
1. It is known that for given $m, n$ with $m \ll n$ (say, $m / n \leq 1 / 2$), there exist $m \times n$ sensing matrices which are $s$-good for values of $s$ “nearly as large as $m$,” specifically, for $s \leq O(1) \frac{m}{\ln (n / m)}$.${ }^{9}$ Moreover, there are natural families of matrices where this level of goodness “is a rule.” E.g., when drawing an $m \times n$ matrix at random from Gaussian or Rademacher distributions (i.e., when filling the matrix with independent realizations of a random variable which is either a standard (zero mean, unit variance) Gaussian one, or takes values $\pm 1$ with probabilities $0.5$), the result will be $s$-good, for the outlined value of $s$, with probability approaching 1 as $m$ and $n$ grow. All this remains true when instead of speaking about matrices $A$ satisfying “plain” nullspace properties, we are speaking about matrices $A$ for which it is easy to point out a pair $(H,|\cdot|)$ satisfying the condition $\mathbf{Q}_{2}(s, \varkappa)$ with, say, $\varkappa=1 / 4$.

The above results can be considered good news. The bad news is that we do not know how to check efficiently, given an $s$ and a sensing matrix $A$, that the matrix is $s$-good, just as we do not know how to check that $A$ admits good (i.e., satisfying $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$) pairs $(H,|\cdot|)$. Even worse: we do not know an efficient recipe allowing us to build, given $m$, an $m \times 2 m$ matrix $A^{m}$ which is provably $s$-good for $s$ larger than $O(1) \sqrt{m}$, which is a much smaller “level of goodness” than the one promised by theory for randomly generated matrices.${ }^{10}$ The “common life” analogy of this situation would be as follows: you know that $90 \%$ of the bricks in your wall are made of gold, and at the same time, you do not know how to tell a golden brick from a usual one.

2. There exist verifiable sufficient conditions for $s$-goodness of a sensing matrix, similarly to verifiable sufficient conditions for a pair $(H,|\cdot|)$ to satisfy condition $\mathbf{Q}_{q}(s, \kappa)$. The bad news is that when $m \ll n$, these verifiable sufficient conditions can be satisfied only when $s \leq O(1) \sqrt{m}$ – once again, a much narrower range of values of $s$ than the one in which typical randomly selected sensing matrices are $s$-good. In fact, $s=O(\sqrt{m})$ is so far the best known sparsity level for which we know individual $s$-good $m \times n$ sensing matrices with $m \leq n / 2$.
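The simplest example of such a verifiable condition (a classical one, used here purely for illustration; it is not the condition of Proposition 1.9) is mutual coherence: for $A$ with $\ell_2$-normalized columns $a_j$, the quantity $\mu(A)=\max_{i\neq j}|a_i^Ta_j|$ certifies $s$-goodness whenever $s<\frac{1}{2}(1+1/\mu(A))$, and for random matrices this certificate stops at the $O(\sqrt{m})$ level mentioned above. A NumPy sketch with illustrative sizes:

```python
import numpy as np

# Mutual coherence mu(A) = max_{i != j} |<a_i, a_j>| for column-normalized A.
# Classical fact (illustration only): A is s-good whenever s < (1 + 1/mu)/2.
# Sizes and seed are illustrative assumptions.
rng = np.random.default_rng(1)
m, n = 256, 512
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # ell_2-normalize the columns

G = np.abs(A.T @ A)                       # |Gram matrix| of the columns
np.fill_diagonal(G, 0.0)                  # ignore the trivial diagonal
mu = float(G.max())                       # mutual coherence
s_certified = int((1.0 + 1.0 / mu) / 2.0) # sparsity level certified by mu
```

For these $m, n$ the certified level is tiny compared to the $O(m/\ln(n/m))$ level at which random matrices actually are $s$-good, which illustrates the gap discussed in the text.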

## 统计代写|统计推断代写Statistical inference代考|Restricted Isometry Property and s-goodness of random matrices

There are several sufficient conditions for $s$-goodness, equally difficult to verify, but provably satisfied for typical random sensing matrices. The best known of them is the Restricted Isometry Property (RIP) defined as follows:

Definition 1.6. Let $k$ be an integer and $\delta \in(0,1)$. We say that an $m \times n$ sensing matrix $A$ possesses the Restricted Isometry Property with parameters $\delta$ and $k$, $\operatorname{RIP}(\delta, k)$, if for every $k$-sparse $x \in \mathbf{R}^{n}$ one has
$$(1-\delta)|x|_{2}^{2} \leq|A x|_{2}^{2} \leq(1+\delta)|x|_{2}^{2} .$$
It turns out that for natural ensembles of random $m \times n$ matrices, a typical matrix from the ensemble satisfies $\operatorname{RIP}(\delta, k)$ with small $\delta$ and $k$ “nearly as large as $m$,” and that $\operatorname{RIP}\left(\frac{1}{6}, 2 s\right)$ implies the nullspace condition, and more. The simplest versions of the corresponding results are as follows.

Proposition 1.7. Given $\delta \in\left(0, \frac{1}{5}\right]$, with properly selected positive $c=c(\delta), d=$ $d(\delta), f=f(\delta)$ for all $m \leq n$ and all positive integers $k$ such that
$$k \leq \frac{m}{c \ln (n / m)+d}$$
the probability for a random $m \times n$ matrix $A$ with independent $\mathcal{N}\left(0, \frac{1}{m}\right)$ entries to satisfy $\operatorname{RIP}(\delta, k)$ is at least $1-\exp \{-f m\}$.
For proof, see Section 1.5.3.
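RIP cannot be certified efficiently, but a Monte Carlo search over random $k$-sparse directions gives a cheap lower bound on the best $\delta$ for which $\operatorname{RIP}(\delta,k)$ could hold. The ensemble below matches Proposition 1.7 ($\mathcal{N}(0,\frac{1}{m})$ entries); the sizes, seed, and trial count are illustrative assumptions.

```python
import numpy as np

# Monte Carlo lower bound on the RIP constant: for any unit-norm k-sparse x,
# delta must be at least | |A x|_2^2 - 1 |.  Random sparse directions give a
# lower bound only (a certificate would need all C(n, k) supports).
rng = np.random.default_rng(2)
m, n, k = 128, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # N(0, 1/m) entries (Prop. 1.7)

delta_lb = 0.0
for _ in range(1000):
    support = rng.choice(n, size=k, replace=False)
    x = np.zeros(n)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)                       # unit-norm, k-sparse
    delta_lb = max(delta_lb, abs(float(np.linalg.norm(A @ x)) ** 2 - 1.0))
```

For this well-conditioned ensemble the bound stays well below 1, consistent with Proposition 1.7; a badly scaled $A$ would be flagged immediately.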
Proposition 1.8. Let $A \in \mathbf{R}^{m \times n}$ satisfy $\operatorname{RIP}(\delta, 2 s)$ for some $\delta<1 / 3$ and positive integer s. Then
(i) The pair $\left(H=\frac{s^{-1 / 2}}{\sqrt{1-\delta}} I_{m},|\cdot|_{2}\right)$ satisfies the condition $\mathbf{Q}_{2}\left(s, \frac{\delta}{1-\delta}\right)$ associated with $A$;
(ii) The pair $\left(H=\frac{1}{1-\delta} A,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{2}\left(s, \frac{\delta}{1-\delta}\right)$ associated with $A$.
For proof, see Section 1.5.4.

## 统计代写|统计推断代写Statistical inference代考|Verifiable sufficient conditions for Qq

When speaking about verifiable sufficient conditions for a pair $(H,|\cdot|)$ to satisfy $\mathbf{Q}_{q}(s, \kappa)$, it is convenient to restrict ourselves to the case where $H$, like $A$, is an $m \times n$ matrix, and $|\cdot|=|\cdot|_{\infty}$.

Proposition 1.9. Let $A$ be an $m \times n$ sensing matrix, and $s \leq n$ be a sparsity level.

Given an $m \times n$ matrix $H$ and $q \in[1, \infty]$, let us set
$$\nu_{s, q}[H]=\max_{j \leq n}\left|\operatorname{Col}_{j}\left[I-H^{T} A\right]\right|_{s, q},$$
where $\operatorname{Col}_{j}[C]$ is the $j$-th column of matrix $C$. Then
$$|w|_{s, q} \leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\nu_{s, q}[H]|w|_{1} \quad \forall w \in \mathbf{R}^{n},$$
implying that the pair $\left(H,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{q}\left(s, s^{1-\frac{1}{q}} \nu_{s, q}[H]\right)$.
Proof is immediate. Setting $V=I-H^{T} A$, we have
\begin{aligned} |w|_{s, q}&=\left|\left[H^{T} A+V\right] w\right|_{s, q} \leq\left|H^{T} A w\right|_{s, q}+|V w|_{s, q} \\ &\leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\sum_{j}\left|w_{j}\right|\left|\operatorname{Col}_{j}[V]\right|_{s, q} \leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\nu_{s, q}[H]|w|_{1}. \end{aligned}
Observe that the function $\nu_{s, q}[H]$ is an efficiently computable convex function of $H$, so that the set
$$\mathcal{H}_{s, q}^{\kappa}=\left\{H \in \mathbf{R}^{m \times n}: \nu_{s, q}[H] \leq s^{\frac{1}{q}-1} \kappa\right\}$$
is a computationally tractable convex set. When this set is nonempty for some $\kappa<1 / 2$, every point $H$ in this set is a contrast matrix such that $\left(H,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{q}(s, \kappa)$; that is, we can find contrast matrices making $\ell_{1}$ minimization valid. Moreover, we can design the contrast matrix, e.g., by minimizing over $\mathcal{H}_{s, q}^{\kappa}$ the function $|H|_{1,2}$, thus optimizing the sensitivity of the corresponding $\ell_{1}$ recoveries to Gaussian observation noise; see items $\mathbf{C}, \mathbf{D}$ in Section 1.2.5.

Explanation. The sufficient condition for s-goodness of $A$ stated in Proposition $1.9$ looks as if coming out of thin air; in fact it is a particular case of a simple and general construction as follows. Let $f(x)$ be a real-valued convex function on $\mathbf{R}^{n}$, and $X \subset \mathbf{R}^{n}$ be a nonempty bounded polytope represented as
$$X=\left\{x \in \operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}: A x=0\right\},$$
where $\operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}=\left\{\sum_{i} \lambda_{i} g_{i}: \lambda \geq 0, \sum_{i} \lambda_{i}=1\right\}$ is the convex hull of the vectors $g_{1}, \ldots, g_{N}$. Our goal is to upper-bound the maximum $\mathrm{Opt}=\max_{x \in X} f(x)$; this is a meaningful problem, since maximizing a convex function over a polyhedron exactly is typically a computationally intractable task. Let us act as follows: clearly, for any matrix $H$ of the same size as $A$ we have $\max_{x \in X} f(x)=\max_{x \in X} f\left(\left[I-H^{T} A\right] x\right)$, since on $X$ we have $\left[I-H^{T} A\right] x=x$. As a result,
\begin{aligned} \mathrm{Opt}:=\max_{x \in X} f(x)&=\max_{x \in X} f\left(\left[I-H^{T} A\right] x\right) \\ &\leq \max_{x \in \operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}} f\left(\left[I-H^{T} A\right] x\right)=\max_{j \leq N} f\left(\left[I-H^{T} A\right] g_{j}\right), \end{aligned}
the concluding equality holding true since a convex function attains its maximum over a convex hull at one of the generating points $g_{j}$.
We get a parametric – the parameter being $H$ – upper bound on Opt, namely, the bound $\max_{j \leq N} f\left(\left[I-H^{T} A\right] g_{j}\right)$. This parametric bound is convex in $H$, and thus is well suited for minimization over this parameter.



## 统计代写|统计推断代写Statistical inference代考|MAST30020


## 统计代写|统计推断代写Statistical inference代考|Validity of ℓ1 minimization in the noiseless case

The minimal requirement on sensing matrix $A$ which makes $\ell_{1}$ minimization valid is to guarantee the correct recovery of exactly s-sparse signals in the noiseless case, and we start with investigating this property.
1.2.1.1 Notational convention
From now on, for a vector $x \in \mathbf{R}^{n}$

• $I_{x}=\left\{j: x_{j} \neq 0\right\}$ stands for the support of $x$; we also set
$$I_{x}^{+}=\left\{j: x_{j}>0\right\}, \quad I_{x}^{-}=\left\{j: x_{j}<0\right\} \quad\left[\Rightarrow I_{x}=I_{x}^{+} \cup I_{x}^{-}\right]$$
• for a subset $I$ of the index set $\{1, \ldots, n\}$, $x_{I}$ stands for the vector obtained from $x$ by zeroing out the entries with indices not in $I$, and $I^{o}$ for the complement of $I$:
$$I^{o}=\{i \in\{1, \ldots, n\}: i \notin I\}$$
• for $s \leq n$, $x^{s}$ stands for the vector obtained from $x$ by zeroing out all but the $s$ entries largest in magnitude.${ }^{5}$ Note that $x^{s}$ is the best $s$-sparse approximation of $x$ in all $\ell_{p}$ norms, $1 \leq p \leq \infty$;
• for $s \leq n$ and $p \in[1, \infty]$, we set
$$|x|_{s, p}=\left|x^{s}\right|_{p}$$
note that $|\cdot|_{s, p}$ is a norm.
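The conventions above translate directly into NumPy; the example vector below is made up for illustration.

```python
import numpy as np

# x^s keeps the s largest-in-magnitude entries of x and zeroes the rest;
# |x|_{s,p} is then simply |x^s|_p.  The example vector is illustrative.
def best_s_sparse(x, s):
    xs = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]      # indices of the s largest magnitudes
    xs[keep] = x[keep]
    return xs

def norm_s_p(x, s, p):
    return float(np.linalg.norm(best_s_sparse(x, s), ord=p))

x = np.array([0.5, -3.0, 1.0, 0.0, 2.0])
x2 = best_s_sparse(x, 2)                   # keeps -3.0 and 2.0
```

Here `x2` is `[0, -3, 0, 0, 2]`, so $|x|_{2,1}=5$ and $|x|_{2,\infty}=3$; one can check that no other 2-sparse vector is closer to `x` in any $\ell_p$ norm.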
1.2.1.2 $s$-Goodness
Definition of $s$-goodness. Let us say that an $m \times n$ sensing matrix $A$ is $s$-good if, whenever the true signal $x$ underlying noiseless observations is $s$-sparse, this signal will be recovered exactly by $\ell_{1}$ minimization. In other words, $A$ is $s$-good if, whenever $y$ in (1.4) is of the form $y=A x$ with $s$-sparse $x$, $x$ is the unique optimal solution to (1.4).
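Noiseless $\ell_1$ minimization, $\min_u\{|u|_1 : Au=y\}$, is a linear program, so $s$-goodness can be probed experimentally. A sketch with `scipy.optimize.linprog`; the sizes and seed are illustrative assumptions, and with these parameters a Gaussian $A$ is $s$-good with overwhelming probability.

```python
import numpy as np
from scipy.optimize import linprog

# Noiseless ell_1 recovery: min |u|_1 s.t. A u = y, written as an LP in the
# variables (u, t): minimize sum(t) subject to -t <= u <= t and A u = y.
# Sizes and seed are illustrative assumptions.
rng = np.random.default_rng(4)
m, n, s = 30, 60, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                        # noiseless observations

I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])         # objective: sum of t
A_ub = np.block([[I, -I], [-I, -I]])                  # u - t <= 0, -u - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])               # A u = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
recovery_error = float(np.linalg.norm(x_hat - x_true))
```

For larger $s$ (past the phase transition for these $m, n$) the same LP starts returning minimizers different from `x_true`, which is exactly failure of $s$-goodness.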

Nullspace property. There is a simple-looking necessary and sufficient condition for a sensing matrix $A$ to be $s$-good: the nullspace property originating from $[70]$. After this property is guessed, it is easy to see that it indeed is necessary and sufficient for $s$-goodness; we, however, prefer to derive this condition from “first principles,” which can easily be done via Convex Optimization. Thus, in the case in question, as in many other cases, there is no necessity to be smart to arrive at the truth via a “lucky guess”; it suffices to be knowledgeable and use the standard tools.

## 统计代写|统计推断代写Statistical inference代考|Imperfect ℓ1 minimization

We have found a necessary and sufficient condition for $\ell_{1}$ minimization to recover exactly $s$-sparse signals in the noiseless case. More often than not, both of these assumptions are violated: instead of $s$-sparse signals, we should speak about “nearly $s$-sparse” ones, quantifying the deviation from sparsity by the distance from the signal $x$ underlying the observations to its best $s$-sparse approximation $x^{s}$. Similarly, we should allow for nonzero observation noise. With noisy observations and/or imperfect sparsity, we cannot hope to recover the signal exactly. All we may hope for is to recover it with some error depending on the level of observation noise and the “deviation from $s$-sparsity,” and tending to zero as the level and deviation tend to 0. We are about to quantify the nullspace property to allow for an instructive “error analysis.”

By itself, the nullspace property says something about the signals from the kernel of the sensing matrix. We can reformulate it equivalently to say something important about all signals. Namely, observe that given sparsity $s$ and $\kappa \in(0,1 / 2)$, the nullspace property
$$|w|_{s, 1} \leq \kappa|w|_{1} \forall w \in \operatorname{Ker} A$$
is satisfied if and only if for a properly selected constant $C$ one has ${ }^{6}$
$$|w|_{s, 1} \leq C|A w|_{2}+\kappa|w|_{1} \forall w .$$
Indeed, (1.10) clearly implies (1.9); to get the inverse implication, note that for every $h$ orthogonal to Ker $A$ it holds
$$|A h|_{2} \geq \sigma|h|_{2},$$
where $\sigma>0$ is the minimal positive singular value of $A$. Now, given $w \in \mathbf{R}^{n}$, we can decompose $w$ into the sum of $\tilde{w} \in \operatorname{Ker} A$ and $h \in(\operatorname{Ker} A)^{\perp}$, so that
\begin{aligned} |w|_{s, 1} &\leq|\tilde{w}|_{s, 1}+|h|_{s, 1} \leq \kappa|\tilde{w}|_{1}+\sqrt{s}|h|_{s, 2} \leq \kappa\left[|w|_{1}+|h|_{1}\right]+\sqrt{s}|h|_{2} \\ &\leq \kappa|w|_{1}+[\kappa \sqrt{n}+\sqrt{s}]|h|_{2} \leq \underbrace{\sigma^{-1}[\kappa \sqrt{n}+\sqrt{s}]}_{C} \underbrace{|A h|_{2}}_{=|A w|_{2}}+\kappa|w|_{1}, \end{aligned}
as required in (1.10).

## 统计代写|统计推断代写Statistical inference代考|Regular ℓ1 recovery

Given the observation scheme (1.1) with an $m \times n$ sensing matrix $A$, we define the regular $\ell_{1}$ recovery of $x$ via observation $y$ as
$$\widehat{x}_{\text {reg }}(y) \in \underset{u}{\operatorname{Argmin}}\left\{|u|_{1}:\left|H^{T}(A u-y)\right| \leq \rho\right\},$$
where the contrast matrix $H \in \mathbf{R}^{m \times N}$, the norm $|\cdot|$ on $\mathbf{R}^{N}$ and $\rho>0$ are parameters of the construction.
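With $|\cdot|=|\cdot|_\infty$ the constraint $|H^T(Au-y)|\le\rho$ is a pair of linear inequalities, so the regular recovery is again an LP. Below is a sketch with the contrast $H=A$ (in the flavor of Proposition 1.8(ii), dropping the $\frac{1}{1-\delta}$ factor for simplicity); the sizes, noise level, seed, and $\rho$ are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Regular ell_1 recovery: min |u|_1 s.t. |H^T (A u - y)|_inf <= rho, as an
# LP in (u, t).  H = A, rho, sizes, and seed are illustrative assumptions.
rng = np.random.default_rng(5)
m, n, s, sigma = 80, 160, 4, 0.01
A = rng.standard_normal((m, n)) / np.sqrt(m)          # near-unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = np.array([1.0, -1.0, 2.0, 1.5])
y = A @ x_true + sigma * rng.standard_normal(m)

H = A                                                 # contrast matrix choice
rho = 5 * sigma                                       # should dominate |H^T eta|_inf
HA = H.T @ A
I = np.eye(n)
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.vstack([np.hstack([I, -I]),                 # u - t <= 0
                  np.hstack([-I, -I]),                # -u - t <= 0
                  np.hstack([HA, np.zeros((n, n))]),  # H^T A u <= H^T y + rho
                  np.hstack([-HA, np.zeros((n, n))])])  # -H^T A u <= rho - H^T y
b_ub = np.concatenate([np.zeros(2 * n), H.T @ y + rho, rho - H.T @ y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
```

Whether this particular $(H,|\cdot|_\infty)$ satisfies a $\mathbf{Q}$-condition is exactly what the theorems below are about; the sketch only shows that the estimator itself is cheap to compute.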
The role of $\mathbf{Q}$-conditions we have introduced is clear from the following
Theorem 1.3. Let $s$ be a positive integer, $q \in[1, \infty]$, and $\kappa \in(0,1 / 2)$. Assume that a pair $(H,|\cdot|)$ satisfies the condition $\mathbf{Q}_{q}(s, \kappa)$ associated with $A$, and let
$$\Xi_{\rho}=\left\{\eta:\left|H^{T} \eta\right| \leq \rho\right\}.$$
Then for all $x \in \mathbf{R}^{n}$ and $\eta \in \Xi_{\rho}$ one has
$$\left|\widehat{x}_{\text {reg }}(A x+\eta)-x\right|_{p} \leq \frac{4(2 s)^{\frac{1}{p}}}{1-2 \kappa}\left[\rho+\frac{\left|x-x^{s}\right|_{1}}{2 s}\right], \quad 1 \leq p \leq q.$$
The above result can be slightly strengthened by replacing the assumption that $(H,|\cdot|)$ satisfies $\mathbf{Q}_{q}(s, \kappa)$ with some $\kappa<1 / 2$ with a weaker one (by observation $\mathbf{A}$ from Section 1.2.2.1): that $(H,|\cdot|)$ satisfies $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ and satisfies $\mathbf{Q}_{q}(s, \kappa)$ with some (perhaps large) $\kappa$:

Theorem 1.4. Given $A$, integer $s>0$, and $q \in[1, \infty]$, assume that $(H,|\cdot|)$ satisfies the condition $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ and the condition $\mathbf{Q}_{q}(s, \kappa)$ with some $\kappa \geq \varkappa$, and let $\Xi_{\rho}$ be given by (1.14). Then for all $x \in \mathbf{R}^{n}$ and $\eta \in \Xi_{\rho}$ it holds:
$$\left|\widehat{x}_{\text {reg }}(A x+\eta)-x\right|_{p} \leq \frac{4(2 s)^{\frac{1}{p}}[1+\kappa-\varkappa]^{\frac{q(p-1)}{p(q-1)}}}{1-2 \varkappa}\left[\rho+\frac{\left|x-x^{s}\right|_{1}}{2 s}\right], \quad 1 \leq p \leq q.$$
For proofs of Theorems $1.3$ and 1.4, see Section 1.5.1.
Before commenting on the above results, let us present their alternative versions.



## 统计代写|统计推断代写Statistical inference代考|MAST90100


## 统计代写|统计推断代写Statistical inference代考|Signal Recovery Problem

One of the basic problems in Signal Processing is the problem of recovering a signal $x \in \mathbf{R}^{n}$ from noisy observations
$$y=A x+\eta$$
of a linear image of the signal under a given sensing mapping $x \mapsto A x: \mathbf{R}^{n} \rightarrow \mathbf{R}^{m}$; in (1.1), $\eta$ is the observation error. Matrix $A$ in (1.1) is called the sensing matrix.
Recovery problems of the outlined types arise in many applications, including, but by far not reducing to,

• communications, where $x$ is the signal sent by the transmitter, $y$ is the signal recorded by the receiver, and $A$ represents the communication channel (reflecting, e.g., dependencies of decays in the signals’ amplitude on the transmitter-receiver distances); $\eta$ here typically is modeled as the standard (zero mean, unit covariance matrix) $m$-dimensional Gaussian noise; ${ }^{1}$
• image reconstruction, where the signal $x$ is an image – a $2 \mathrm{D}$ array in the usual photography, or a 3D array in tomography-and $y$ is data acquired by the imaging device. Here $\eta$ in many cases (although not always) can again be modeled as the standard Gaussian noise;
• linear regression, arising in a wide range of applications. In linear regression, one is given $m$ pairs “input $a^{i} \in \mathbf{R}^{n}$” to a “black box,” with output $y_{i} \in \mathbf{R}$. Sometimes we have reason to believe that the output is a corrupted by noise version of the “existing in nature,” but unobservable, “ideal output” $y_{i}^{*}=x^{T} a^{i}$ which is just a linear function of the input (this is called the “linear regression model,” with inputs $a^{i}$ called “regressors”). Our goal is to convert actual observations $\left(a^{i}, y_{i}\right), 1 \leq i \leq m$, into estimates of the unknown “true” vector of parameters $x$. Denoting by $A$ the matrix with the rows $\left[a^{i}\right]^{T}$ and assembling individual observations $y_{i}$ into a single observation $y=\left[y_{1} ; \ldots ; y_{m}\right] \in \mathbf{R}^{m}$, we arrive at the problem of recovering vector $x$ from noisy observations of $A x$. Here again the most popular model for $\eta$ is the standard Gaussian noise.

## 统计代写|统计推断代写Statistical inference代考|Parametric and nonparametric cases

Recovering signal $x$ from observation $y$ would be easy if there were no observation noise $(\eta=0)$ and the rank of matrix $A$ were equal to the dimension $n$ of the signals. In this case, which arises only when $m \geq n$ (“more observations than unknown parameters”), and is typical in this range of $m$ and $n$, the desired $x$ would be the unique solution to the system of linear equations, and to find $x$ would be a simple problem of Linear Algebra. Aside from this trivial “enough observations, no noise” case, people over the years have looked at the following two versions of the recovery problem:

Parametric case: $m \gg n, \eta$ is nontrivial noise with zero mean, say, standard Gaussian. This is the classical statistical setup with the emphasis on how to use numerous available observations in order to suppress in the recovery, to the extent possible, the influence of observation noise.

Nonparametric case: $m \ll n .^{2}$ If addressed literally, this case seems to be senseless: when the number of observations is less than the number of unknown parameters, even in the noiseless case we arrive at the necessity to solve an underdetermined (fewer equations than unknowns) system of linear equations. Linear Algebra says that if solvable, the system has infinitely many solutions. Moreover, the solution set (an affine subspace of positive dimension) is unbounded, meaning that the solutions are in no sense close to each other. A typical way to make the case of $m \ll n$ meaningful is to add to the observations (1.1) some a priori information about the signal. In traditional Nonparametric Statistics, this additional information is summarized in a bounded convex set $X \subset \mathbf{R}^{n}$, given to us in advance, known to contain the true signal $x$. This set usually is such that every signal $x \in X$ can be approximated by a linear combination of $s=1,2, \ldots, n$ vectors from a properly selected basis known to us in advance (“dictionary” in the slang of signal processing) within accuracy $\delta(s)$, where $\delta(s)$ is a function, known in advance, approaching 0 as $s \rightarrow \infty$. In this situation, with appropriate $A$ (e.g., just the unit matrix, as in the denoising problem), we can select some $s \ll m$ and try to recover $x$ as if it were a vector from the linear span $E_{s}$ of the first $s$ vectors of the outlined basis $[54,86,124,112,208]$. In the “ideal case,” $x \in E_{s}$, recovering $x$ in fact reduces to the case where the dimension of the signal is $s \ll m$ rather than $n \gg m$, and we arrive at the well-studied situation of recovering a signal of low (compared to the number of observations) dimension.
In the “realistic case” of $x$ being $\delta(s)$-close to $E_{s}$, the deviation of $x$ from $E_{s}$ results in an additional component in the recovery error (“bias”); a typical result of traditional Nonparametric Statistics quantifies the resulting error and minimizes it in $s$ $[86,124,178,222,223,230,239]$. Of course, this outline of the traditional approach to “nonparametric” (with $n \gg m$) recovery problems is extremely sketchy, but it captures the most important fact in our context: with the traditional approach to nonparametric signal recovery, one assumes that after representing the signals by vectors of their coefficients in a properly selected basis, the $n$-dimensional signal to be recovered can be well approximated by an $s$-sparse (at most $s$ nonzero entries) signal, with $s \ll n$, and this sparse approximation can be obtained by zeroing out all but the first $s$ entries in the signal vector. The assumption just formulated indeed holds for signals obtained by discretization of smooth uni- and multivariate functions, and this class of signals for several decades was the main, if not the only, focus of Nonparametric Statistics.
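The reduction described above, recovering a signal that lives in a low-dimensional span $E_{s}$ from $m \ll n$ observations, can be sketched numerically. The cosine basis, the dimensions, and the noiseless setting below are illustrative assumptions, not choices taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 100, 20, 5          # ambient dimension, observations, subspace dimension

# Dictionary: the first s vectors of a cosine basis span E_s (an illustrative choice).
t = np.arange(n)
B = np.stack([np.cos(np.pi * k * (t + 0.5) / n) for k in range(s)], axis=1)  # n x s

c_true = rng.standard_normal(s)
x = B @ c_true                # "ideal case": the true signal lies exactly in E_s

A = rng.standard_normal((m, n))   # sensing matrix with m << n
y = A @ x                         # noiseless observations

# Recover x as if it were a vector from E_s: an m x s least-squares problem.
c_hat, *_ = np.linalg.lstsq(A @ B, y, rcond=None)
x_hat = B @ c_hat
print(np.linalg.norm(x - x_hat))   # essentially zero
```

Although the full system $Az = y$ is underdetermined ($m = 20 < n = 100$), restricting the search to $E_{s}$ turns it into an overdetermined $20 \times 5$ system with a unique solution.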

## 统计代写|统计推断代写Statistical inference代考|Compressed Sensing via ℓ1 minimization: Motivation

In principle there is nothing surprising in the fact that under reasonable assumptions on the $m \times n$ sensing matrix $A$ we may hope to recover from noisy observations of $A x$ an $s$-sparse signal $x$, with $s \ll m$. Indeed, assume for the sake of simplicity that there are no observation errors, and let $\operatorname{Col}_{j}[A]$ be the $j$-th column of $A$. If we knew the locations $j_{1}<j_{2}<\ldots<j_{s}$ of the nonzero entries in $x$, identifying $x$ could be reduced to solving the system of linear equations $\sum_{\ell=1}^{s} x_{j_{\ell}} \operatorname{Col}_{j_{\ell}}[A]=y$ with $m$ equations and $s \ll m$ unknowns; assuming every $s$ columns in $A$ to be linearly independent (a quite unrestrictive assumption on a matrix with $m \geq s$ rows), the solution to the above system is unique, and is exactly the signal we are looking for. Of course, the assumption that we know the locations of the nonzeros in $x$ makes the recovery problem completely trivial. However, it suggests the following course of action: given a noiseless observation $y=A x$ of an $s$-sparse signal $x$, let us solve the combinatorial optimization problem
$$\min_{z}\left\{\|z\|_{0}: A z=y\right\}, \tag{1.2}$$
where $\|z\|_{0}$ is the number of nonzero entries in $z$. Clearly, the problem has a solution with the value of the objective at most $s$. Moreover, it is immediately seen that if every $2 s$ columns in $A$ are linearly independent (which again is a very unrestrictive assumption on the matrix $A$ provided that $m \geq 2 s$), then the true signal $x$ is the unique optimal solution to (1.2).
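For tiny dimensions, problem (1.2) can actually be solved by brute force over candidate supports; the sketch below does exactly that (the specific sizes, seed, and support are illustrative assumptions). Supports are tried in order of increasing size, so the first exact fit is a sparsest solution.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 4, 8, 2
A = rng.standard_normal((m, n))   # generic A: every 2s columns linearly independent
x = np.zeros(n)
x[[2, 5]] = [1.5, -0.7]           # the unknown s-sparse signal
y = A @ x                         # noiseless observations

def l0_recover(A, y, tol=1e-9):
    """Solve min ||z||_0 s.t. Az = y by brute force over supports (tiny n only)."""
    n = A.shape[1]
    for k in range(n + 1):                      # supports of increasing size
        for S in map(list, itertools.combinations(range(n), k)):
            z = np.zeros(n)
            if k:
                sol, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
                z[S] = sol
            if np.linalg.norm(A @ z - y) <= tol:   # exact fit => sparsest solution
                return z
    return None

z_hat = l0_recover(A, y)
print(np.allclose(z_hat, x))   # True: the sparsest solution is the true signal
```

The exponential number of supports is precisely why (1.2) is impractical at realistic sizes, which motivates the ℓ1 relaxation of the section title.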
What was said so far can be extended to the case of noisy observations and “nearly $s$-sparse” signals $x$. For example, assuming that the observation error is “uncertain-but-bounded,” specifically, that some known norm $\|\cdot\|$ of this error does not exceed a given $\epsilon>0$, and that the true signal is $s$-sparse, we could solve the combinatorial optimization problem
$$\min_{z}\left\{\|z\|_{0}:\|A z-y\| \leq \epsilon\right\}. \tag{1.3}$$
Assuming that every $m \times 2s$ submatrix $\bar{A}$ of $A$ not only has linearly independent columns (i.e., a trivial kernel), but is reasonably well conditioned,
$$\|\bar{A} w\| \geq C^{-1}\|w\|_{2}$$
for all $2s$-dimensional vectors $w$, with some constant $C$, it is immediately seen that the true signal $x$ underlying the observation and the optimal solution $\widehat{x}$ of (1.3) are close to each other within accuracy of order $\epsilon$: $\|x-\widehat{x}\|_{2} \leq 2 C \epsilon$. It is easily seen that the resulting error bound is basically as good as it could be.
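The tractable surrogate that gives this section its title replaces $\|z\|_{0}$ in (1.2) by $\|z\|_{1}$, which turns the problem into a linear program (basis pursuit). A minimal sketch, assuming a generic Gaussian sensing matrix and the standard split $z = u - v$ with $u, v \geq 0$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, s = 20, 40, 3
A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)  # s-sparse signal
y = A @ x                                                         # noiseless observations

# Basis pursuit: min ||z||_1 s.t. Az = y. With z = u - v, u, v >= 0, this is the
# LP: min sum(u) + sum(v) s.t. [A, -A][u; v] = y.
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
z_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(z_hat - x)))   # typically ~0: l1 minimization recovers x
```

The sizes here are chosen so that exact recovery is expected for a random Gaussian $A$; the guarantees under which this is provable are the subject of the compressed sensing chapters.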

## 统计代写|统计推断代写Statistical inference代考|Signal Recovery Problem

• Communication, where $x$ is the signal sent by the transmitter, $y$ is the signal recorded by the receiver, and $A$ represents the communication channel (reflecting, e.g., the dependence of signal attenuation on the transmitter-receiver distance); here $\eta$ is usually modeled as standard (zero mean, unit covariance matrix) $m$-dimensional Gaussian noise;$^{1}$
• Image reconstruction, where the signal $x$ is an image, a 2D array in usual photography or a 3D array in tomography, and $y$ is the data acquired by the imaging device. Here $\eta$ in many cases (though not always) can again be modeled as standard Gaussian noise;
• Linear regression, which arises in a wide variety of applications. In linear regression, one is given $m$ pairs “input $a_{i} \in \mathbf{R}^{n}$” to a “black box” and the corresponding outputs $y_{i} \in \mathbf{R}$. Sometimes we have reason to believe that the observed output is a noisy version of an “ideal output” $y_{i}^{*}=x^{T} a_{i}$, one which “exists in nature” but is unobservable, and which is just a linear function of the input (this is called a “linear regression model,” with the inputs $a_{i}$ called “regressors”). Our goal is to convert the actual observations $\left(a_{i}, y_{i}\right), 1 \leq i \leq m$, into estimates of the unknown “true” parameter vector $x$. Denoting by $A$ the matrix with rows $\left[a_{i}\right]^{T}$ and assembling the individual observations $y_{i}$ into a single observation $y=\left[y_{1} ; \ldots ; y_{m}\right] \in \mathbf{R}^{m}$, we arrive at the problem of recovering the vector $x$ from the noisy observation of $A x$. Here again the most popular model for $\eta$ is standard Gaussian noise.
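The linear regression recovery described in the last bullet is, in the simplest case, ordinary least squares. A minimal sketch with standard Gaussian noise; the dimensions and the true parameter vector are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 3
x_true = np.array([1.0, -2.0, 0.5])       # unknown "true" parameter vector
A = rng.standard_normal((m, n))           # rows of A are the regressors a_i^T
eta = rng.standard_normal(m)              # standard Gaussian noise
y = A @ x_true + eta                      # observations y = Ax + eta

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares estimate
print(x_hat)   # close to x_true
```

With $m \gg n$ the noise averages out: the estimation error shrinks roughly like $1/\sqrt{m}$ per coordinate, which is the parametric regime of the opening discussion.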


## 统计代写|统计推断代写Statistical inference代考|STAT 7604


## 统计代写|统计推断代写Statistical inference代考|Independent random variables

The term independent and identically distributed (IID) is one that is used with great frequency in statistics. One of the key assumptions that is often made in inference is that we have a random sample. Assuming a sample is random is equivalent to stating that a reasonable model for the process that generates the data is a sequence of independent and identically distributed random variables. We start by defining what it means for a pair of random variables to be independent.

Definition 4.4.1 (Independent random variables)
The random variables $X$ and $Y$ are independent if and only if the events $\{X \leq x\}$ and $\{Y \leq y\}$ are independent for all $x$ and $y$.

One immediate consequence of this definition is that, for independent random variables, it is possible to generate the joint distribution from the marginal distributions.
Claim 4.4.2 (Joint distribution of independent random variables)
Random variables $X$ and $Y$ are independent if and only if the joint cumulative distribution function of $X$ and $Y$ is the product of the marginal cumulative distribution functions, that is, if and only if
$$F_{X, Y}(x, y)=F_{X}(x) F_{Y}(y) \text { for all } x, y \in \mathbb{R}$$
The claim holds since, by Definition 4.4.1, the events $\{X \leq x\}$ and $\{Y \leq y\}$ are independent if and only if the probability of their intersection is the product of the individual probabilities. Claim 4.4.2 states that, for independent random variables, knowledge of the margins is equivalent to knowledge of the joint distribution; this is an attractive property. The claim can be restated in terms of mass or density.
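A quick numerical sanity check of Claim 4.4.2 for a pair of discrete variables: build the joint pmf as the outer product of the marginals (that is what independence means at the mass level) and verify that the joint CDF factors at every grid point. The two marginal pmfs are illustrative.

```python
import numpy as np

# Two independent discrete variables: joint pmf is the outer product of marginals.
px = np.array([0.2, 0.5, 0.3])    # pmf of X on {0, 1, 2}
py = np.array([0.6, 0.4])         # pmf of Y on {0, 1}
p_joint = np.outer(px, py)        # f_{X,Y}(x, y) = f_X(x) f_Y(y)

# The joint CDF at (x, y) sums the "lower-left" block of the joint pmf;
# check that it equals the product of the marginal CDFs everywhere.
for i in range(len(px)):
    for j in range(len(py)):
        F_joint = p_joint[: i + 1, : j + 1].sum()
        F_prod = px[: i + 1].sum() * py[: j + 1].sum()
        assert abs(F_joint - F_prod) < 1e-12
print("F_{X,Y}(x,y) = F_X(x) F_Y(y) holds at every grid point")
```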
Proposition 4.4.3 (Mass/density of independent random variables)
The random variables $X$ and $Y$ are independent if and only if their joint mass/density is the product of the marginal mass/density functions, that is, if and only if
$$f_{X, Y}(x, y)=f_{X}(x) f_{Y}(y) \quad \text { for all } x, y \in \mathbb{R}$$
Proof.

## 统计代写|统计推断代写Statistical inference代考|Mutual independence

We can readily extend the ideas of this section to a sequence of $n$ random variables. When considering many random variables, the terms pairwise independent and mutually independent are sometimes used. Pairwise independent, as the name suggests, means that every pair is independent in the sense of Definition 4.4.1.

Definition 4.4.7 (Mutually independent random variables)
The random variables $X_{1}, X_{2}, \ldots, X_{n}$ are mutually independent if and only if the events $\left\{X_{1} \leq x_{1}\right\},\left\{X_{2} \leq x_{2}\right\}, \ldots,\left\{X_{n} \leq x_{n}\right\}$ are mutually independent for all choices of $x_{1}, x_{2}, \ldots, x_{n}$.

When $X_{1}, X_{2}, \ldots, X_{n}$ are mutually independent the term “mutually” is often dropped and we just say $X_{1}, X_{2}, \ldots, X_{n}$ are independent or $\left\{X_{i}\right\}$ is a sequence of independent random variables. Note that this is a stronger property than pairwise independence; mutually independent implies pairwise independent but the reverse implication does not hold.

Any one of the equivalent statements summarised in the following claim could be taken to be a definition of independence.
Claim 4.4.8 (Equivalent statements of mutual independence) If $X_{1}, \ldots, X_{n}$ are random variables, the following statements are equivalent:
i. The events $\left\{X_{1} \leq x_{1}\right\},\left\{X_{2} \leq x_{2}\right\}, \ldots,\left\{X_{n} \leq x_{n}\right\}$ are independent for all $x_{1}, \ldots, x_{n}$.
ii. $F_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=F_{X_{1}}\left(x_{1}\right) F_{X_{2}}\left(x_{2}\right) \ldots F_{X_{n}}\left(x_{n}\right)$ for all $x_{1}, \ldots, x_{n}$.
iii. $f_{X_{1}, \ldots, X_{n}}\left(x_{1}, \ldots, x_{n}\right)=f_{X_{1}}\left(x_{1}\right) f_{X_{2}}\left(x_{2}\right) \ldots f_{X_{n}}\left(x_{n}\right)$ for all $x_{1}, \ldots, x_{n}$.
The implications of mutual independence may be summarised as follows.
Claim 4.4.9 (Implications of mutual independence)
If $X_{1}, \ldots, X_{n}$ are mutually independent random variables, then
i. $\mathrm{E}\left(X_{1} X_{2} \ldots X_{n}\right)=\mathrm{E}\left(X_{1}\right) \mathrm{E}\left(X_{2}\right) \ldots \mathrm{E}\left(X_{n}\right)$,
ii. if, in addition, $g_{1}, \ldots, g_{n}$ are well-behaved, real-valued functions, then the random variables $g_{1}\left(X_{1}\right), \ldots, g_{n}\left(X_{n}\right)$ are also mutually independent.
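Part i of Claim 4.4.9 can be checked exactly for discrete variables: compute $\mathrm{E}(X_1 X_2 X_3)$ from the product-form joint pmf and compare it with the product of the individual expectations. The three (values, pmf) pairs below are illustrative.

```python
import numpy as np
from itertools import product

# Three independent discrete variables given as (values, pmf) pairs (illustrative).
vars_ = [
    (np.array([0.0, 1.0]),      np.array([0.3, 0.7])),
    (np.array([-1.0, 2.0]),     np.array([0.5, 0.5])),
    (np.array([1.0, 3.0, 5.0]), np.array([0.2, 0.3, 0.5])),
]

# E(X1 X2 X3) computed directly from the (product-form) joint pmf ...
lhs = sum(
    v1 * v2 * v3 * p1 * p2 * p3
    for (v1, p1), (v2, p2), (v3, p3) in product(
        *[zip(vals, pmf) for vals, pmf in vars_]
    )
)
# ... equals the product of the individual expectations E(X1) E(X2) E(X3).
rhs = np.prod([vals @ pmf for vals, pmf in vars_])
print(lhs, rhs)   # both 1.26
```

Note that the converse fails: $\mathrm{E}(XY) = \mathrm{E}(X)\mathrm{E}(Y)$ (uncorrelatedness) does not by itself imply independence.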

## 统计代写|统计推断代写Statistical inference代考|Identical distributions

Another useful simplifying assumption is that of identical distributions.
Definition 4.4.10 (Identically distributed random variables)
The random variables $X_{1}, X_{2}, \ldots, X_{n}$ are identically distributed if and only if their cumulative distribution functions are identical, that is
$$F_{X_{1}}(x)=F_{X_{2}}(x)=\ldots=F_{X_{n}}(x) \text { for all } x \in \mathbb{R}$$
If $X_{1}, X_{2}, \ldots, X_{n}$ are identically distributed we will often just use the letter $X$ to denote a random variable that has the distribution common to all of them. So the cumulative distribution function of $X$ is $\mathrm{P}(X \leq x)=F_{X}(x)=F_{X_{1}}(x)=\ldots=F_{X_{n}}(x)$. If $X_{1}, X_{2}, \ldots, X_{n}$ are independent and identically distributed, we may sometimes denote this as $\left\{X_{i}\right\} \sim$ IID.

1. Suppose $X_{1}, \ldots, X_{n}$ is a sequence of $n$ independent and identically distributed standard normal random variables. Find an expression for the joint density of $X_{1}, \ldots, X_{n}$. [We denote this by $\left\{X_{i}\right\} \sim \operatorname{NID}(0,1)$, where NID stands for “normal and independently distributed”.]
2. Let $X_{1}, \ldots, X_{n}$ be a sequence of $n$ independent random variables with cumulant-generating functions $K_{X_{1}}, \ldots, K_{X_{n}}$. Find an expression for the joint cumulant-generating function $K_{X_{1}, \ldots, X_{n}}$ in terms of the individual cumulant-generating functions.
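A sketch of both exercises, using only the factorization results of this section (these are standard manipulations, not the book's own worked solutions):

```latex
% 1. Joint density of X_1,...,X_n ~ NID(0,1): by Claim 4.4.8(iii),
f_{X_1,\ldots,X_n}(x_1,\ldots,x_n)
  = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} e^{-x_i^2/2}
  = (2\pi)^{-n/2} \exp\Bigl(-\tfrac{1}{2}\textstyle\sum_{i=1}^{n} x_i^2\Bigr).

% 2. Joint CGF of independent X_1,...,X_n: the joint MGF factorizes
%    (Claim 4.4.9 applied to the functions g_i(X_i) = e^{t_i X_i}), so
K_{X_1,\ldots,X_n}(t_1,\ldots,t_n)
  = \log \mathrm{E}\, e^{\sum_i t_i X_i}
  = \log \prod_{i=1}^{n} \mathrm{E}\, e^{t_i X_i}
  = \sum_{i=1}^{n} K_{X_i}(t_i).
```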

