
The use of geometry as an aid to the understanding of linear regression has a long history; see Herr (1980). Early and important papers include Fisher (1915), Durbin and Kendall (1951), Kruskal (1961, 1968, 1975), and Seber (1964). One valuable reference on linear models that takes the geometric approach is Seber (1980), although that book may be too terse for many readers. A recent expository paper that is quite accessible is Bryant (1984). The approach has not been used as much in econometrics as it has in statistics, but a number of econometrics texts – notably Malinvaud (1970a), and also Madansky (1976), Pollock (1979), and Wonnacott and Wonnacott (1979) – use it to a greater or lesser degree. Our approach could be termed semigeometric, since we have not emphasized the coordinate-free nature of the analysis quite as much as some authors; see Kruskal’s papers, the Seber book or, in econometrics, Fisher (1981, 1983) and Fisher and McAleer (1984).
In this chapter, we have entirely ignored statistical models. Linear regression has been treated purely as a computational device which has a geometrical interpretation, rather than as an estimation procedure for a family of statistical models. All the results discussed have been true numerically, as a consequence of how ordinary least squares estimates are computed, and have not depended in any way on how the data were actually generated. We emphasize this, because conventional treatments of the linear regression model often fail to distinguish between the numerical and statistical properties of least squares.

In the remainder of this book, we will move on to consider a variety of statistical models, some of them regression models and some of them not, which are of practical use to econometricians. For most of the book, we will focus on two classes of models: ones that can be treated as linear and nonlinear regression models and ones that can be estimated by the method of maximum likelihood (the latter being a very broad class of models indeed). As we will see, understanding the geometrical properties of linear regression turns out to be central to understanding both nonlinear regression models and the method of maximum likelihood. We will therefore assume throughout our discussion that readers are familiar with the basic results that were presented in this chapter.

## Nonlinear Least Squares

In Chapter 1, we discussed in some detail the geometry of ordinary least squares and its properties as a computational device. That material is important because many commonly used statistical models are usually estimated by some variant of least squares. Among these is the most commonly encountered class of models in econometrics, the class of regression models, of which we now begin our discussion. Instead of restricting ourselves to the familiar territory of linear regression models, which can be estimated directly by OLS, we will consider the much broader family of nonlinear regression models, which may be estimated by nonlinear least squares, or NLS. Occasionally we will specifically treat linear regression models if there are results which are true for them that do not generalize to the nonlinear case.

In this and the next few chapters on regression models, we will restrict our attention to univariate models, meaning models in which there is a single dependent variable. These are a good deal simpler to deal with than multivariate models, in which there are several jointly dependent variables. Univariate models are far more commonly encountered in practice than are multivariate ones, and a good understanding of the former is essential to understanding the latter. Extending results for univariate models to the multivariate case is quite easy to do, as we will demonstrate in Chapter 9.

We begin by writing the univariate nonlinear regression model in its generic form as
$$y_{t}=x_{t}(\boldsymbol{\beta})+u_{t}, \quad u_{t} \sim \operatorname{IID}\left(0, \sigma^{2}\right), \quad t=1, \ldots, n$$
Here $y_{t}$ is the $t^{\text {th }}$ observation on the dependent variable, which is a scalar random variable, and $\boldsymbol{\beta}$ is a $k$-vector of (usually) unknown parameters. The scalar function $x_{t}(\boldsymbol{\beta})$ is a (generally nonlinear) regression function that determines the mean value of $y_{t}$ conditional on $\boldsymbol{\beta}$ and (usually) on certain independent variables. The latter have not been shown explicitly in (2.01), but the $t$ subscript of $x_{t}(\boldsymbol{\beta})$ does indicate that this function varies from observation to observation. In most cases, the reason for this is that $x_{t}(\boldsymbol{\beta})$ depends on one or more independent variables that do so. Thus $x_{t}(\boldsymbol{\beta})$ should be interpreted as the mean of $y_{t}$ conditional on the values of those independent variables. More precisely, as we will see in Section 2.4, it should be interpreted as the mean of $y_{t}$ conditional on some information set to which those independent variables belong.
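Estimating such a model amounts to choosing $\boldsymbol{\beta}$ to minimize the sum of squared residuals. A minimal numerical sketch, using a hypothetical regression function $x_{t}(\boldsymbol{\beta}) = \beta_{1}(1 - e^{-\beta_{2} z_{t}})$ and simulated data (the function, the parameter values, and the use of SciPy's `least_squares` routine are all illustrative choices, not part of the text):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical nonlinear regression function x_t(beta), depending on one
# independent variable z_t, which is why it varies across observations.
def x_t(beta, z):
    b1, b2 = beta
    return b1 * (1.0 - np.exp(-b2 * z))

# Simulate data from y_t = x_t(beta) + u_t, with u_t ~ IID(0, sigma^2).
n = 200
z = np.linspace(0.1, 5.0, n)
beta_true = np.array([2.0, 1.5])
y = x_t(beta_true, z) + rng.normal(0.0, 0.1, size=n)

# NLS: choose beta to minimize SSR(beta) = sum_t (y_t - x_t(beta))^2.
result = least_squares(lambda b: y - x_t(b, z), x0=np.array([1.0, 1.0]))
beta_hat = result.x
```

With well-behaved data, `beta_hat` recovers the true parameters up to sampling noise; the point of the sketch is only that NLS is a minimization problem over $\boldsymbol{\beta}$, not a closed-form computation as in OLS.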

## Identification in Nonlinear Regression Models

If we are to minimize $\operatorname{SSR}(\boldsymbol{\beta})$ successfully, it is necessary that the model be identified. Identification is a geometrically simple concept that applies to a very wide variety of models and estimation techniques. Unfortunately, the term identification has come to be associated in the minds of many students of econometrics with the tedious algebra of the linear simultaneous equations model. Identification is indeed an issue in such models, and there are some special problems that arise for them (see Chapters 7 and 18), but the concept is applicable to every econometric model. Essentially, a nonlinear regression model is identified by a given data set if, for that data set, we can find a unique $\hat{\boldsymbol{\beta}}$ that minimizes $\operatorname{SSR}(\boldsymbol{\beta})$. If a model is not identified by the data being used, then there will be more than one $\hat{\boldsymbol{\beta}}$, perhaps even an infinite number of them. Some models may not be identifiable by any conceivable data set, while other models may be identified by some data sets but not by others.

There are two types of identification, local and global. The least squares estimate $\hat{\boldsymbol{\beta}}$ will be locally identified if, whenever $\hat{\boldsymbol{\beta}}$ is perturbed slightly, the value of $\operatorname{SSR}(\boldsymbol{\beta})$ increases. This may be stated formally as the requirement that the function $\operatorname{SSR}(\boldsymbol{\beta})$ be strictly convex at $\hat{\boldsymbol{\beta}}$. Thus
$$\operatorname{SSR}(\hat{\boldsymbol{\beta}})<\operatorname{SSR}(\hat{\boldsymbol{\beta}}+\boldsymbol{\delta})$$
for all “small” perturbations $\boldsymbol{\delta}$. Recall that strict convexity is guaranteed if the Hessian matrix $\boldsymbol{H}(\boldsymbol{\beta})$, of which a typical element is
$$H_{i j}(\boldsymbol{\beta}) \equiv \frac{\partial^{2} \operatorname{SSR}(\boldsymbol{\beta})}{\partial \beta_{i} \partial \beta_{j}}$$

is positive definite at $\hat{\boldsymbol{\beta}}$. Strict convexity implies that $\operatorname{SSR}(\boldsymbol{\beta})$ is curved in every direction; no flat directions are allowed. If $\operatorname{SSR}(\boldsymbol{\beta})$ were flat in some direction near $\hat{\boldsymbol{\beta}}$, we could move away from $\hat{\boldsymbol{\beta}}$ in that direction without changing the value of the sum of squared residuals at all (remember that the first derivatives of $\operatorname{SSR}(\boldsymbol{\beta})$ are zero at $\hat{\boldsymbol{\beta}}$, which implies that $\operatorname{SSR}(\boldsymbol{\beta})$ must be equal to $\operatorname{SSR}(\hat{\boldsymbol{\beta}})$ everywhere in the flat region). Hence $\hat{\boldsymbol{\beta}}$ would not be the unique NLS estimator but merely one of an infinite number of points that all minimize $\operatorname{SSR}(\boldsymbol{\beta})$. Figure 2.5 shows the contours of $\operatorname{SSR}(\boldsymbol{\beta})$ for the usual case in which $\hat{\boldsymbol{\beta}}$ is a unique local minimum, while Figure 2.6 shows them for a case in which the model is not identified, because all points along the line $A B$ minimize $\operatorname{SSR}(\boldsymbol{\beta})$.
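The positive-definiteness condition is easy to check numerically. The sketch below assumes a hypothetical model $x_{t}(\boldsymbol{\beta}) = \beta_{1} e^{\beta_{2} z_{t}}$ with data constructed so that $\hat{\boldsymbol{\beta}}$ fits exactly (so the first derivatives of $\operatorname{SSR}$ vanish there); it builds the Hessian $H_{ij} = \partial^{2}\operatorname{SSR}/\partial\beta_{i}\partial\beta_{j}$ by central finite differences and tests whether all its eigenvalues are positive:

```python
import numpy as np

# Hypothetical model for illustration: x_t(beta) = beta_1 * exp(beta_2 * z_t).
n = 100
z = np.linspace(0.0, 1.0, n)
beta_hat = np.array([1.0, 0.5])            # treat this as the NLS estimate
y = beta_hat[0] * np.exp(beta_hat[1] * z)  # data built so residuals are zero at beta_hat

def ssr(beta):
    """Sum of squared residuals SSR(beta)."""
    resid = y - beta[0] * np.exp(beta[1] * z)
    return resid @ resid

def hessian(f, b, h=1e-4):
    """Central finite-difference Hessian H_ij = d^2 f / (db_i db_j)."""
    k = len(b)
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(b + ei + ej) - f(b + ei - ej)
                       - f(b - ei + ej) + f(b - ei - ej)) / (4.0 * h * h)
    return H

H = hessian(ssr, beta_hat)
eigenvalues = np.linalg.eigvalsh(H)
# Strict convexity at beta_hat: every eigenvalue of H must be strictly positive.
locally_identified = bool(np.all(eigenvalues > 0.0))
```

A zero eigenvalue would correspond exactly to a flat direction of $\operatorname{SSR}(\boldsymbol{\beta})$ at $\hat{\boldsymbol{\beta}}$, the situation depicted along the line $AB$ in Figure 2.6.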
Local identification is necessary but not sufficient for us to obtain unique estimates $\hat{\boldsymbol{\beta}}$. A more general requirement is global identification, which may be stated formally as
$$\operatorname{SSR}(\hat{\boldsymbol{\beta}})<\operatorname{SSR}\left(\boldsymbol{\beta}^{*}\right) \text { for all } \boldsymbol{\beta}^{*} \neq \hat{\boldsymbol{\beta}}$$
This definition of global identification is really just a restatement of the condition that $\hat{\boldsymbol{\beta}}$ be the unique minimizer of $\operatorname{SSR}(\boldsymbol{\beta})$. Notice that even if a model is locally identified, it is quite possible for it to have two (or more) distinct estimates, say $\hat{\boldsymbol{\beta}}^{1}$ and $\hat{\boldsymbol{\beta}}^{2}$, with $\operatorname{SSR}\left(\hat{\boldsymbol{\beta}}^{1}\right)=\operatorname{SSR}\left(\hat{\boldsymbol{\beta}}^{2}\right)$. As an example, consider the model
$$y_{t}=\beta \gamma+\gamma^{2} z_{t}+u_{t} .$$
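The failure of global identification in this model is easy to see: for any $(\beta, \gamma)$, the point $(-\beta, -\gamma)$ yields the same values of $\beta\gamma$ and $\gamma^{2}$, hence the same fitted values and the same sum of squared residuals. A quick numerical check, with simulated data and arbitrary illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from y_t = beta*gamma + gamma^2 * z_t + u_t (values are illustrative).
n = 50
z = rng.normal(size=n)
y = 1.0 * 2.0 + 2.0**2 * z + rng.normal(0.0, 0.5, size=n)

def ssr(beta, gamma):
    resid = y - (beta * gamma + gamma**2 * z)
    return float(resid @ resid)

# Flipping the signs of both parameters leaves beta*gamma and gamma^2
# unchanged, so the two parameter points give identical SSR values.
s_plus = ssr(1.3, 1.9)
s_minus = ssr(-1.3, -1.9)
```

Here `s_plus` and `s_minus` are exactly equal, so if one of these points minimized $\operatorname{SSR}$, so would the other: the model can be locally identified at each point yet not globally identified.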
