### Nonlinear Regression Models


## Nonlinear Regression Models

Suppose that one is given a vector $\boldsymbol{y}$ of observations on some dependent variable, a vector $\boldsymbol{x}(\boldsymbol{\beta})$ of, in general nonlinear, regression functions, which may and normally will depend on independent variables, and the data needed to evaluate $\boldsymbol{x}(\boldsymbol{\beta})$. Then, assuming that these data allow one to identify all elements of the parameter vector $\boldsymbol{\beta}$ and that one has access to a suitable computer program for nonlinear least squares and enough computer time, one can always obtain NLS estimates $\hat{\boldsymbol{\beta}}$. In order to interpret these estimates, one generally makes the heroic assumption that the model is "correct," which means that $\boldsymbol{y}$ is in fact generated by a DGP from the family
$$\boldsymbol{y}=\boldsymbol{x}(\boldsymbol{\beta})+\boldsymbol{u}, \quad \boldsymbol{u} \sim \operatorname{IID}\left(\mathbf{0}, \sigma^{2} \mathbf{I}\right). \tag{3.01}$$
Without this assumption, or some less restrictive variant, it would be very difficult to say anything about the properties of $\hat{\boldsymbol{\beta}}$, although in certain special cases one can do so.
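The estimation step described here can be sketched numerically. The following is a minimal illustration, assuming a made-up regression function $x_t(\boldsymbol{\beta})=\beta_1\left(1-e^{-\beta_2 z_t}\right)$ and simulated IID errors; `scipy.optimize.least_squares` stands in for the "suitable computer program for nonlinear least squares" (none of the specific numbers come from the text):

```python
# Illustrative NLS fit of y_t = beta1 * (1 - exp(-beta2 * z_t)) + u_t.
# The model, sample size, and parameter values are made up for this sketch.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0.1, 5.0, n)             # independent variable
beta_true = np.array([2.0, 1.5])         # the "true" parameter vector beta_0
u = rng.normal(0.0, 0.2, n)              # IID(0, sigma^2) error terms
y = beta_true[0] * (1.0 - np.exp(-beta_true[1] * z)) + u

def residuals(beta):
    # y - x(beta): NLS minimizes the sum of squares of these residuals
    return y - beta[0] * (1.0 - np.exp(-beta[1] * z))

fit = least_squares(residuals, x0=[1.0, 1.0])
beta_hat = fit.x                         # the NLS estimates
```

With a well-behaved regression function and enough data, `beta_hat` lands close to `beta_true`; for poorly identified parameters the minimization may instead stall or wander, which is why the identification assumption above matters.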

It is clear that $\hat{\boldsymbol{\beta}}$ must be a vector of random variables, since it will depend on $\boldsymbol{y}$ and hence on the vector of error terms $\boldsymbol{u}$. Thus, if we are to make inferences about $\boldsymbol{\beta}$, we must recognize that $\hat{\boldsymbol{\beta}}$ is random and quantify its randomness. In Chapter 5, we will demonstrate that it is reasonable, when the sample size is large enough, to treat $\hat{\boldsymbol{\beta}}$ as being normally distributed around the true value of $\boldsymbol{\beta}$, which we may call $\boldsymbol{\beta}_{0}$. Thus the only thing we need to know if we are to make asymptotically valid inferences about $\boldsymbol{\beta}$ is the covariance matrix of $\hat{\boldsymbol{\beta}}$, say $\boldsymbol{V}(\hat{\boldsymbol{\beta}})$. In the next section, we discuss how this covariance matrix may be estimated for linear and nonlinear regression models. In Section 3.3, we show how the resulting estimates may be used to make inferences about $\boldsymbol{\beta}$. In Section 3.4, we discuss the basic ideas that underlie all types of hypothesis testing. In Section 3.5, we then discuss procedures for testing hypotheses in linear regression models. In Section 3.6, we discuss similar procedures for testing hypotheses in nonlinear regression models. The latter section provides an opportunity to introduce the three fundamental principles on which most hypothesis tests are based: the Wald, Lagrange multiplier, and likelihood ratio principles. Finally, in Section 3.7, we discuss the effects of imposing incorrect restrictions and introduce the notion of preliminary test estimators.

## Covariance Matrix Estimation

In the case of the linear regression model
$$\boldsymbol{y}=\boldsymbol{X} \boldsymbol{\beta}+\boldsymbol{u}, \quad \boldsymbol{u} \sim \operatorname{IID}\left(\mathbf{0}, \sigma^{2} \mathbf{I}\right), \tag{3.02}$$
it is well known that when the DGP satisfies (3.02) for specific parameter values $\boldsymbol{\beta}_{0}$ and $\sigma_{0}$, the covariance matrix of the vector of OLS estimates $\hat{\boldsymbol{\beta}}$ is
$$\boldsymbol{V}(\hat{\boldsymbol{\beta}})=\sigma_{0}^{2}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1}. \tag{3.03}$$
The proof of this familiar result is quite straightforward. The covariance matrix $\boldsymbol{V}(\hat{\boldsymbol{\beta}})$ is defined as the expectation of the outer product of $\hat{\boldsymbol{\beta}}-E(\hat{\boldsymbol{\beta}})$ with itself, conditional on the independent variables $\boldsymbol{X}$. Starting with this definition and using the fact that $E(\hat{\boldsymbol{\beta}})=\boldsymbol{\beta}_{0}$, we first replace $\hat{\boldsymbol{\beta}}$ by what it is equal to under the DGP, then take expectations conditional on $\boldsymbol{X}$, and finally simplify the algebra to obtain (3.03):
$$\begin{aligned}
\boldsymbol{V}(\hat{\boldsymbol{\beta}}) & \equiv E\left(\left(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{0}\right)\left(\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}_{0}\right)^{\top}\right) \\
&=E\left(\left(\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{y}-\boldsymbol{\beta}_{0}\right)\left(\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{y}-\boldsymbol{\beta}_{0}\right)^{\top}\right) \\
&=E\left(\left(\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top}\left(\boldsymbol{X} \boldsymbol{\beta}_{0}+\boldsymbol{u}\right)-\boldsymbol{\beta}_{0}\right)\left(\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top}\left(\boldsymbol{X} \boldsymbol{\beta}_{0}+\boldsymbol{u}\right)-\boldsymbol{\beta}_{0}\right)^{\top}\right) \\
&=E\left(\left(\boldsymbol{\beta}_{0}+\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{u}-\boldsymbol{\beta}_{0}\right)\left(\boldsymbol{\beta}_{0}+\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{u}-\boldsymbol{\beta}_{0}\right)^{\top}\right) \\
&=E\left(\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{u} \boldsymbol{u}^{\top} \boldsymbol{X}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1}\right) \\
&=\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top}\left(\sigma_{0}^{2} \mathbf{I}\right) \boldsymbol{X}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \\
&=\sigma_{0}^{2}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \boldsymbol{X}^{\top} \boldsymbol{X}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1} \\
&=\sigma_{0}^{2}\left(\boldsymbol{X}^{\top} \boldsymbol{X}\right)^{-1}.
\end{aligned}$$
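The result (3.03) can also be checked by simulation: drawing many samples from the DGP (3.02) with fixed regressors, the empirical covariance of the OLS estimates should approach $\sigma_{0}^{2}\left(\boldsymbol{X}^{\top}\boldsymbol{X}\right)^{-1}$. A minimal sketch, with an illustrative design matrix and parameter values that are not from the text:

```python
# Monte Carlo check: empirical covariance of OLS estimates vs. the
# theoretical sigma_0^2 (X'X)^{-1}. Design and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, sigma0 = 100, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed regressors
beta0 = np.array([1.0, -0.5])                          # true parameters
V_theory = sigma0**2 * np.linalg.inv(X.T @ X)

draws = []
for _ in range(5000):
    y = X @ beta0 + rng.normal(0.0, sigma0, n)         # draw from the DGP
    draws.append(np.linalg.solve(X.T @ X, X.T @ y))    # OLS estimate
V_mc = np.cov(np.array(draws), rowvar=False)           # empirical covariance
```

Since the regressors are held fixed across replications, the only source of variation in the estimates is $\boldsymbol{u}$, exactly as the conditional-on-$\boldsymbol{X}$ derivation above assumes.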
Deriving an analogous result for the nonlinear regression model (3.01) requires a few concepts of asymptotic analysis that we have not yet developed, plus a certain amount of mathematical manipulation. We will therefore postpone this derivation until Chapter 5 and merely state an approximate result here.
For a nonlinear model, we cannot in general obtain an exact expression for $\boldsymbol{V}(\hat{\boldsymbol{\beta}})$ in the finite-sample case. In Chapter 5, on the assumption that the data are generated by a DGP which is a special case of (3.01), we will, however, obtain an asymptotic result which allows us to state that
$$\boldsymbol{V}(\hat{\boldsymbol{\beta}}) \cong \sigma_{0}^{2}\left(\boldsymbol{X}^{\top}\left(\boldsymbol{\beta}_{0}\right) \boldsymbol{X}\left(\boldsymbol{\beta}_{0}\right)\right)^{-1}.$$
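In practice one replaces $\sigma_{0}^{2}$ and $\boldsymbol{X}\left(\boldsymbol{\beta}_{0}\right)$ by estimates: $s^{2}$ and the matrix of derivatives of the regression functions evaluated at $\hat{\boldsymbol{\beta}}$. A sketch of that computation, under an assumed exponential regression function $x_t(\boldsymbol{\beta})=\beta_1\left(1-e^{-\beta_2 z_t}\right)$ that is invented for illustration:

```python
# Estimated asymptotic covariance for a nonlinear model:
# V_hat = s^2 (Xhat' Xhat)^{-1}, with Xhat the Jacobian of x(beta) at beta_hat.
# The regression function and all numbers here are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n = 400
z = rng.uniform(0.1, 5.0, n)
beta0 = np.array([2.0, 1.5])
y = beta0[0] * (1.0 - np.exp(-beta0[1] * z)) + rng.normal(0.0, 0.2, n)

def resid(b):
    return y - b[0] * (1.0 - np.exp(-b[1] * z))

b = least_squares(resid, x0=[1.0, 1.0]).x
# Analytic Jacobian of x(beta) with respect to (beta1, beta2) at beta_hat
Xhat = np.column_stack([1.0 - np.exp(-b[1] * z),
                        b[0] * z * np.exp(-b[1] * z)])
s2 = resid(b) @ resid(b) / (n - len(b))      # estimate of sigma_0^2
V_hat = s2 * np.linalg.inv(Xhat.T @ Xhat)    # estimated covariance matrix
se = np.sqrt(np.diag(V_hat))                 # estimated standard errors
```

For a linear model, $\boldsymbol{X}(\boldsymbol{\beta})$ does not depend on $\boldsymbol{\beta}$ and this expression collapses to the familiar $s^{2}\left(\boldsymbol{X}^{\top}\boldsymbol{X}\right)^{-1}$.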

## Confidence Intervals and Confidence Regions

A confidence interval for a single parameter at some level $\alpha$ (between 0 and 1) is an interval of the real line constructed in such a way that we are confident that the true value of the parameter will lie in that interval a fraction $1-\alpha$ of the time. A confidence region is conceptually the same, except that it is a region in an $l$-dimensional space (usually the $l$-dimensional analog of an ellipse) which is constructed so that we are confident that the true values of an $l$-vector of parameters will lie in that region a fraction $1-\alpha$ of the time. Notice that, when we find a confidence interval or region, we are not making a statement about the distribution of the parameter itself but rather about the probability that our random interval, because of the way it is constructed in terms of the estimates of the parameters and of their covariance matrix, will include the true value.
In the context of regression models, we normally construct a confidence interval by using an estimate of the single parameter in question, an estimate of its standard error, and, in addition, a certain critical value taken from either the normal or the Student's $t$ distribution. The estimated standard error is of course simply the square root of the appropriate diagonal element of the estimated covariance matrix. The critical value depends on $1-\alpha$, the probability that the confidence interval will include the true value; if we want this probability to be very close to one, the critical value must be relatively large, and hence so must the width of the confidence interval.

Suppose that the parameter we are interested in is $\beta_{1}$, that the NLS estimate of it is $\hat{\beta}_{1}$, and that the estimated standard error of the estimator is
$$\hat{S}\left(\hat{\beta}_{1}\right) \equiv s\left(\left(\hat{\boldsymbol{X}}^{\top} \hat{\boldsymbol{X}}\right)^{-1}\right)_{11}^{1 / 2}.$$
We first need to know how wide our confidence interval has to be in terms of the estimated standard error $\hat{S}\left(\hat{\beta}_{1}\right)$. We therefore look up $\alpha$ in a table of

two-tail critical values of the normal or Student’s $t$ distributions or look up $\alpha / 2$ in a table of one-tail critical values. ${ }^{1}$ This gives us a critical value $c_{\alpha}$. We then find an approximate confidence interval
$$\hat{\beta}_{1}-c_{\alpha} \hat{S}\left(\hat{\beta}_{1}\right) \quad \text { to } \quad \hat{\beta}_{1}+c_{\alpha} \hat{S}\left(\hat{\beta}_{1}\right)$$
that will include the true value of $\beta_{1}$ roughly a fraction $1-\alpha$ of the time. For example, if $\alpha$ were $.05$ and we used tables for the normal distribution, we would find that the two-tail critical value was $1.96$. This means that for the normal distribution with mean $\mu$ and variance $\omega^{2}$, $95 \%$ of the probability mass of this distribution lies between $\mu-1.96 \omega$ and $\mu+1.96 \omega$. Hence, in this case, our approximate confidence interval would be
$$\hat{\beta}_{1}-1.96 \hat{S}\left(\hat{\beta}_{1}\right) \quad \text { to } \quad \hat{\beta}_{1}+1.96 \hat{S}\left(\hat{\beta}_{1}\right).$$
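This construction is mechanical once the estimate and its standard error are in hand; here `scipy.stats.norm.ppf` replaces the table lookup for the two-tail critical value, and the estimate and standard error are hypothetical numbers chosen purely for illustration:

```python
# Approximate (1 - alpha) confidence interval for a single parameter.
# beta_hat_1 and se_1 are made-up values standing in for NLS output.
from scipy.stats import norm

alpha = 0.05
c_alpha = norm.ppf(1.0 - alpha / 2.0)   # two-tail critical value, approx. 1.96
beta_hat_1, se_1 = 0.80, 0.10           # hypothetical estimate and std. error
lower = beta_hat_1 - c_alpha * se_1
upper = beta_hat_1 + c_alpha * se_1
```

Using a Student's $t$ critical value instead (`scipy.stats.t.ppf` with the residual degrees of freedom) widens the interval slightly, which matters mainly in small samples.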

