### Statistics Assignment Help | Statistical Inference | STATS 2107

statistics-lab™ supports your studies abroad and has built a solid reputation for reliable, high-quality, and original Statistical inference assignment help. Our experts have extensive experience with Statistical inference, and related assignments of every kind pose no difficulty.

• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science

## VERIFIABILITY AND TRACTABILITY ISSUES

The good news about $\ell_{1}$ recovery stated in Theorems 1.3, 1.4, and 1.5 is “conditional”: we assume that we are smart enough to point out a pair $(H,|\cdot|)$ satisfying condition $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$ (and condition $\mathbf{Q}_{q}(s, \varkappa)$ with a “moderate” $\varkappa$). The related issues are twofold:

1. First, we do not know in which range of $s$, $m$, and $n$ these conditions, or even the nullspace property (which is weaker than $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$), can be satisfied; and without the nullspace property, $\ell_{1}$ minimization becomes useless, at least when we want to guarantee its validity whatever the $s$-sparse signal we want to recover;
2. Second, it is unclear how to verify whether a given sensing matrix $A$ satisfies the nullspace property for a given $s$, or whether a given pair $(H,|\cdot|)$ satisfies the condition $\mathbf{Q}_{q}(s, \varkappa)$ with given parameters.
What is known about these crucial issues can be outlined as follows.
1. It is known that for given $m, n$ with $m \ll n$ (say, $m / n \leq 1 / 2$), there exist $m \times n$ sensing matrices which are $s$-good for values of $s$ “nearly as large as $m$,” specifically, for $s \leq O(1) \frac{m}{\ln (n / m)}$. Moreover, there are natural families of matrices where this level of goodness “is a rule.” E.g., when drawing an $m \times n$ matrix at random from Gaussian or Rademacher distributions (i.e., when filling the matrix with independent realizations of a random variable which is either a standard (zero mean, unit variance) Gaussian one, or takes values $\pm 1$ with probabilities $0.5$), the result will be $s$-good, for the outlined value of $s$, with probability approaching 1 as $m$ and $n$ grow. All this remains true when, instead of speaking about matrices $A$ satisfying “plain” nullspace properties, we speak about matrices $A$ for which it is easy to point out a pair $(H,|\cdot|)$ satisfying the condition $\mathbf{Q}_{2}(s, \varkappa)$ with, say, $\varkappa=1 / 4$.

The above results can be considered good news. The bad news is that we do not know how to check efficiently, given an $s$ and a sensing matrix $A$, that the matrix is $s$-good, just as we do not know how to check that $A$ admits good (i.e., satisfying $\mathbf{Q}_{1}(s, \varkappa)$ with $\varkappa<1 / 2$) pairs $(H,|\cdot|)$. Even worse: we do not know an efficient recipe allowing us to build, given $m$, an $m \times 2 m$ matrix $A^{m}$ which is provably $s$-good for $s$ larger than $O(1) \sqrt{m}$, which is a much smaller “level of goodness” than the one promised by theory for randomly generated matrices. The “common life” analogy of this situation would be as follows: you know that $90 \%$ of the bricks in your wall are made of gold, and at the same time, you do not know how to tell a golden brick from a usual one.

2. There exist verifiable sufficient conditions for $s$-goodness of a sensing matrix, similarly to verifiable sufficient conditions for a pair $(H,|\cdot|)$ to satisfy condition $\mathbf{Q}_{q}(s, \varkappa)$. The bad news is that when $m \ll n$, these verifiable sufficient conditions can be satisfied only when $s \leq O(1) \sqrt{m}$, once again a much narrower range of values of $s$ than the one in which typical randomly selected sensing matrices are $s$-good. In fact, $s=O(\sqrt{m})$ is so far the best known sparsity level for which we know individual $s$-good $m \times n$ sensing matrices with $m \leq n / 2$.
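While certifying $s$-goodness is hard, running the recovery itself is easy: $\ell_{1}$ minimization under the constraint $Ax=y$ is a linear program. A minimal sketch (the sizes and the Gaussian sensing matrix below are our own illustrative choices, not data from the text) using `scipy.optimize.linprog`:

```python
# Illustration (sizes and matrix are our own choices): l1 recovery
#   min |x|_1  s.t.  A x = y
# cast as the LP  min sum(t)  s.t.  A x = y, -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)  # an s-sparse signal
x_true[support] = rng.standard_normal(s)
y = A @ x_true                                  # noiseless observations

c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
A_eq = np.hstack([A, np.zeros((m, n))])         # equality constraint A x = y
A_ub = np.block([[ np.eye(n), -np.eye(n)],      #  x - t <= 0
                 [-np.eye(n), -np.eye(n)]])     # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print(np.max(np.abs(x_hat - x_true)))           # small for a typical draw
```

At these dimensions a random Gaussian matrix is $s$-good with overwhelming probability, which is exactly the “good news” above; what remains intractable is certifying this property for the particular matrix drawn.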

## Restricted Isometry Property and s-goodness of random matrices

There are several sufficient conditions for $s$-goodness, equally difficult to verify, but provably satisfied for typical random sensing matrices. The best known of them is the Restricted Isometry Property (RIP) defined as follows:

Definition 1.6. Let $k$ be an integer and $\delta \in(0,1)$. We say that an $m \times n$ sensing matrix A possesses the Restricted Isometry Property with parameters $\delta$ and $k$, $\operatorname{RIP}(\delta, k)$, if for every $k$-sparse $x \in \mathbf{R}^{n}$ one has
$$(1-\delta)|x|_{2}^{2} \leq|A x|_{2}^{2} \leq(1+\delta)|x|_{2}^{2} .$$
It turns out that for natural ensembles of random $m \times n$ matrices, a typical matrix from the ensemble satisfies $\operatorname{RIP}(\delta, k)$ with small $\delta$ and $k$ “nearly as large as $m$,” and that $\operatorname{RIP}\left(\frac{1}{6}, 2 s\right)$ implies the nullspace condition, and more. The simplest versions of the corresponding results are as follows.

Proposition 1.7. Given $\delta \in\left(0, \frac{1}{5}\right]$, with properly selected positive $c=c(\delta)$, $d=d(\delta)$, $f=f(\delta)$, for all $m \leq n$ and all positive integers $k$ such that
$$k \leq \frac{m}{c \ln (n / m)+d},$$
the probability for a random $m \times n$ matrix $A$ with independent $\mathcal{N}\left(0, \frac{1}{m}\right)$ entries to satisfy $\operatorname{RIP}(\delta, k)$ is at least $1-\exp \{-f m\}$.
For proof, see Section 1.5.3.
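Definition 1.6 can be checked by brute force at toy sizes: $\operatorname{RIP}(\delta, k)$ holds iff, for every support $S$ of cardinality $k$, all eigenvalues of $A_S^{T} A_S$ lie in $[1-\delta, 1+\delta]$. The sketch below (the helper name `rip_constant` and the sizes are ours) computes the smallest admissible $\delta$ for a matrix with i.i.d. $\mathcal{N}\left(0, \frac{1}{m}\right)$ entries as in Proposition 1.7; note the enumeration over $\binom{n}{k}$ supports, which is precisely why such verification does not scale:

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    # Smallest delta for which A satisfies RIP(delta, k): for every support S
    # with |S| = k, all eigenvalues of A_S^T A_S must lie in [1-delta, 1+delta].
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        gram = A[:, list(S)].T @ A[:, list(S)]
        delta = max(delta, np.max(np.abs(np.linalg.eigvalsh(gram) - 1.0)))
    return delta

rng = np.random.default_rng(1)
m, n, k = 100, 10, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)    # i.i.d. N(0, 1/m) entries
delta = rip_constant(A, k)
print(delta)                                    # a modest constant at these benign sizes
```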
Proposition 1.8. Let $A \in \mathbf{R}^{m \times n}$ satisfy $\operatorname{RIP}(\delta, 2 s)$ for some $\delta<1 / 3$ and positive integer s. Then
(i) The pair $\left(H=\frac{s^{-1 / 2}}{\sqrt{1-\delta}} I_{m},|\cdot|_{2}\right)$ satisfies the condition $\mathbf{Q}_{2}\left(s, \frac{\delta}{1-\delta}\right)$ associated with $A$;

(ii) The pair $\left(H=\frac{1}{1-\delta} A,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{2}\left(s, \frac{\delta}{1-\delta}\right)$ associated with $A$.
For proof, see Section 1.5.4.

## Verifiable sufficient conditions for Qq

When speaking about verifiable sufficient conditions for a pair $(H,|\cdot|)$ to satisfy $\mathbf{Q}_{q}(s, \varkappa)$, it is convenient to restrict ourselves to the case where $H$, like $A$, is an $m \times n$ matrix, and $|\cdot|=|\cdot|_{\infty}$.

Proposition 1.9. Let $A$ be an $m \times n$ sensing matrix, and $s \leq n$ be a sparsity level.

Given an $m \times n$ matrix $H$ and $q \in[1, \infty]$, let us set
$$\nu_{s, q}[H]=\max_{j \leq n}\left|\operatorname{Col}_{j}\left[I-H^{T} A\right]\right|_{s, q}$$
where $\operatorname{Col}_{j}[C]$ is the $j$-th column of matrix $C$. Then
$$|w|_{s, q} \leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\nu_{s, q}[H]|w|_{1} \quad \forall w \in \mathbf{R}^{n},$$
implying that the pair $\left(H,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{q}\left(s, s^{1-\frac{1}{q}} \nu_{s, q}[H]\right)$.
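Both $\nu_{s,q}[H]$ and the inequality of Proposition 1.9 are easy to evaluate numerically (we take $|w|_{s,q}$ to be the $\ell_q$ norm of the $s$ largest-in-magnitude entries of $w$, its usual meaning in this chapter). A sketch, with helper names and sizes of our own choosing and the simple candidate contrast matrix $H=A$:

```python
# Numerical check of the inequality above (helper names and sizes are ours;
# |w|_{s,q} is the l_q norm of the s largest-in-magnitude entries of w).
import numpy as np

def norm_sq(w, s, q):
    top = np.sort(np.abs(w))[::-1][:s]          # s largest magnitudes
    return np.linalg.norm(top, q)

def nu(H, A, s, q):
    V = np.eye(A.shape[1]) - H.T @ A            # V = I - H^T A
    return max(norm_sq(V[:, j], s, q) for j in range(A.shape[1]))

rng = np.random.default_rng(2)
m, n, s, q = 30, 60, 3, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)
H = A.copy()                                    # a simple candidate contrast matrix
w = rng.standard_normal(n)

lhs = norm_sq(w, s, q)
rhs = s ** (1 / q) * np.max(np.abs(H.T @ A @ w)) + nu(H, A, s, q) * np.linalg.norm(w, 1)
print(lhs <= rhs)                               # → True: the bound always holds
```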
Proof is immediate. Setting $V=I-H^{T} A$, we have
$$\begin{aligned} |w|_{s, q} &=\left|\left[H^{T} A+V\right] w\right|_{s, q} \leq\left|H^{T} A w\right|_{s, q}+|V w|_{s, q} \\ &\leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\sum_{j}\left|w_{j}\right|\left|\operatorname{Col}_{j}[V]\right|_{s, q} \leq s^{1 / q}\left|H^{T} A w\right|_{\infty}+\nu_{s, q}[H]|w|_{1}. \end{aligned}$$
Observe that the function $\nu_{s, q}[H]$ is an efficiently computable convex function of $H$, so that the set
$$\mathcal{H}_{s, q}^{\varkappa}=\left\{H \in \mathbf{R}^{m \times n}: \nu_{s, q}[H] \leq s^{\frac{1}{q}-1} \varkappa\right\}$$
is a computationally tractable convex set. When this set is nonempty for some $\varkappa<1 / 2$, every point $H$ in this set is a contrast matrix such that $\left(H,|\cdot|_{\infty}\right)$ satisfies the condition $\mathbf{Q}_{q}(s, \varkappa)$; that is, we can find contrast matrices making $\ell_{1}$ minimization valid. Moreover, we can design a contrast matrix, e.g., by minimizing over $\mathcal{H}_{s, q}^{\varkappa}$ the function $|H|_{1,2}$, thus optimizing the sensitivity of the corresponding $\ell_{1}$ recoveries to Gaussian observation noise; see items $\mathbf{C}, \mathbf{D}$ in Section 1.2.5.

Explanation. The sufficient condition for $s$-goodness of $A$ stated in Proposition 1.9 looks as if it came out of thin air; in fact it is a particular case of a simple and general construction, as follows. Let $f(x)$ be a real-valued convex function on $\mathbf{R}^{n}$, and let $X \subset \mathbf{R}^{n}$ be a nonempty bounded polytope represented as
$$X=\left\{x \in \operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}: A x=0\right\},$$
where $\operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}=\left\{\sum_{i} \lambda_{i} g_{i}: \lambda \geq 0, \sum_{i} \lambda_{i}=1\right\}$ is the convex hull of the vectors $g_{1}, \ldots, g_{N}$. Our goal is to upper-bound the maximum $\mathrm{Opt}=\max_{x \in X} f(x)$; this is a meaningful problem, since maximizing a convex function over a polytope exactly is, typically, a computationally intractable task. Let us act as follows: clearly, for any matrix $H$ of the same size as $A$ we have $\max_{x \in X} f(x)=\max_{x \in X} f\left(\left[I-H^{T} A\right] x\right)$, since on $X$ we have $\left[I-H^{T} A\right] x=x$. As a result,
$$\begin{aligned} \mathrm{Opt} &:=\max_{x \in X} f(x)=\max_{x \in X} f\left(\left[I-H^{T} A\right] x\right) \\ & \leq \max_{x \in \operatorname{Conv}\left\{g_{1}, \ldots, g_{N}\right\}} f\left(\left[I-H^{T} A\right] x\right) \\ &=\max_{j \leq N} f\left(\left[I-H^{T} A\right] g_{j}\right), \end{aligned}$$
where the last equality holds because a convex function attains its maximum over a convex hull at one of the generating points.
We get a parametric upper bound on $\mathrm{Opt}$, the parameter being $H$: namely, the bound $\max_{j \leq N} f\left(\left[I-H^{T} A\right] g_{j}\right)$. This parametric bound is convex in $H$, and thus is well suited for minimization over this parameter.
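A tiny numerical instance of this construction (all data below is synthetic and the setup is ours): we build a feasible $x_0 \in X$ by hand, pick an arbitrary $H$, and confirm that $\max_{j \leq N} f\left(\left[I-H^{T} A\right] g_{j}\right)$ indeed dominates $f(x_0)$ for the convex choice $f(\cdot)=|\cdot|_{1}$:

```python
# Synthetic instance of the construction above: any H yields a valid upper
# bound on max_{x in X} f(x), since [I - H^T A] x = x on X and a convex f
# attains its maximum over Conv{g_1,...,g_N} at one of the g_j.
import numpy as np

rng = np.random.default_rng(3)
m, n, N = 2, 5, 8
A = rng.standard_normal((m, n))

# Build a feasible x0 in X by hand: x0 lies in the nullspace of A and is the
# midpoint of the first two generators, hence x0 in Conv{g_j} with A x0 = 0.
x0 = np.linalg.svd(A)[2][m:].T @ rng.standard_normal(n - m)
d = rng.standard_normal(n)
G = np.column_stack([x0 + d, x0 - d] +
                    [rng.standard_normal(n) for _ in range(N - 2)])

f = lambda x: np.linalg.norm(x, 1)              # a convex test function
H = rng.standard_normal((m, n))                 # an arbitrary choice of the parameter
P = np.eye(n) - H.T @ A

bound = max(f(P @ G[:, j]) for j in range(N))
print(f(x0) <= bound + 1e-9)                    # → True
```

Minimizing `bound` over `H` (a convex program, since the maximum of convex functions of $H$ is convex) is exactly how Proposition 1.9 tightens the certificate.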

