### The Geometry of Least Squares


## Introduction

The most commonly used, and in many ways the most important, estimation technique in econometrics is least squares. It is useful to distinguish between two varieties of least squares, ordinary least squares, or OLS, and nonlinear least squares, or NLS. In the case of OLS the regression equation that is to be estimated is linear in all of the parameters, while in the case of NLS it is nonlinear in at least one parameter. OLS estimates can be obtained by direct calculation in several different ways (see Section 1.5), while NLS estimates require iterative procedures (see Chapter 6). In this chapter, we will discuss only ordinary least squares, since understanding linear regression is essential to understanding everything else in this book.

There is an important distinction between the numerical and the statistical properties of estimates obtained using OLS. Numerical properties are those that hold as a consequence of the use of ordinary least squares, regardless of how the data were generated. Since these properties are numerical, they can always be verified by direct calculation. An example is the well-known fact that OLS residuals sum to zero when the regressors include a constant term. Statistical properties, on the other hand, are those that hold only under certain assumptions about the way the data were generated. These can never be verified exactly, although in some cases they can be tested. An example is the well-known proposition that OLS estimates are, in certain circumstances, unbiased.
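This numerical property is easy to verify by direct calculation. The following is a minimal sketch of our own using NumPy (the data are arbitrary and purely illustrative): because the regressors include a constant term, the residuals sum to zero no matter how the regressand was generated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Regressors: a constant term plus one arbitrary column.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)          # any regressand whatsoever

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates
residuals = y - X @ beta

# Numerical property: holds regardless of how y was generated.
print(abs(residuals.sum()) < 1e-10)   # True
```

Rerunning with any other seed or any other `y` leaves the result unchanged, which is precisely what distinguishes a numerical property from a statistical one.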

The distinction between numerical properties and statistical properties is obviously fundamental. In order to make this distinction as clearly as possible, we will in this chapter discuss only the former. We will study ordinary least squares purely as a computational device, without formally introducing any sort of statistical model (although we will on occasion discuss quantities that are mainly of interest in the context of linear regression models). No statistical models will be introduced until Chapter 2, where we will begin discussing nonlinear regression models, of which linear regression models are of course a special case.

By saying that we will study OLS as a computational device, we do not mean that we will discuss computer algorithms for calculating OLS estimates (although we will do that to a limited extent in Section 1.5). Instead, we mean that we will discuss the numerical properties of ordinary least squares and, in particular, the geometrical interpretation of those properties. All of the numerical properties of OLS can be interpreted in terms of Euclidean geometry. This geometrical interpretation often turns out to be remarkably simple, involving little more than Pythagoras’ Theorem and high-school trigonometry, in the context of finite-dimensional vector spaces. Yet the insight gained from this approach is very great. Once one has a thorough grasp of the geometry involved in ordinary least squares, one can often save oneself many tedious lines of algebra by a simple geometrical argument. Moreover, as we hope the remainder of this book will illustrate, understanding the geometrical properties of OLS is just as fundamental to understanding nonlinear models of all types as it is to understanding linear regression models.

## The Geometry of Least Squares

The essential ingredients of a linear regression are a regressand $\boldsymbol{y}$ and a matrix of regressors $\boldsymbol{X} \equiv\left[\boldsymbol{x}_{1} \ldots \boldsymbol{x}_{k}\right]$. The regressand $\boldsymbol{y}$ is an $n$-vector, and the matrix of regressors $\boldsymbol{X}$ is an $n \times k$ matrix, each column $\boldsymbol{x}_{i}$ of which is an $n$-vector. The regressand $\boldsymbol{y}$ and each of the regressors $\boldsymbol{x}_{1}$ through $\boldsymbol{x}_{k}$ can be thought of as points in $n$-dimensional Euclidean space, $E^{n}$. The $k$ regressors, provided they are linearly independent, span a $k$-dimensional subspace of $E^{n}$. We will denote this subspace by $\mathcal{S}(\boldsymbol{X})$.¹

The subspace $\mathcal{S}(\boldsymbol{X})$ consists of all points $\boldsymbol{z}$ in $E^{n}$ such that $\boldsymbol{z}=\boldsymbol{X} \boldsymbol{\gamma}$ for some $\boldsymbol{\gamma}$, where $\boldsymbol{\gamma}$ is a $k$-vector. Strictly speaking, we should refer to $\mathcal{S}(\boldsymbol{X})$ as the subspace spanned by the columns of $\boldsymbol{X}$, but less formally we will often refer to it simply as the span of $\boldsymbol{X}$. The dimension of $\mathcal{S}(\boldsymbol{X})$ is always equal to $\rho(\boldsymbol{X})$, the rank of $\boldsymbol{X}$ (i.e., the number of columns of $\boldsymbol{X}$ that are linearly independent). We will assume that $k$ is strictly less than $n$, something which it is reasonable to do in almost all practical cases. If $n$ were less than $k$, it would be impossible for $\boldsymbol{X}$ to have full column rank $k$.
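These facts about $\mathcal{S}(\boldsymbol{X})$ are easy to illustrate numerically. In this sketch of our own (using NumPy; the matrices are arbitrary), the dimension of the span equals the rank of $\boldsymbol{X}$, and any point of the form $\boldsymbol{X}\boldsymbol{\gamma}$ is confirmed to lie in $\mathcal{S}(\boldsymbol{X})$ because the system $\boldsymbol{X}\boldsymbol{c}=\boldsymbol{z}$ has an exact solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
X = rng.normal(size=(n, k))        # k linearly independent columns (a.s.)

# The dimension of S(X) equals rho(X), the rank of X.
print(np.linalg.matrix_rank(X))    # 3

# Any z = X @ gamma lies in S(X): the system X c = z is solved exactly.
gamma = rng.normal(size=k)
z = X @ gamma
c, *_ = np.linalg.lstsq(X, z, rcond=None)
print(np.allclose(c, gamma), np.allclose(X @ c, z))   # True True
```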

A Euclidean space is not defined without defining an inner product. In this case, the inner product we are interested in is the so-called natural inner product. The natural inner product of any two points in $E^{n}$, say $\boldsymbol{z}_{i}$ and $\boldsymbol{z}_{j}$, may be denoted $\left\langle \boldsymbol{z}_{i}, \boldsymbol{z}_{j}\right\rangle$ and is defined by
$$\left\langle\boldsymbol{z}_{i}, \boldsymbol{z}_{j}\right\rangle \equiv \sum_{t=1}^{n} z_{it} z_{jt} \equiv \boldsymbol{z}_{i}^{\top} \boldsymbol{z}_{j} \equiv \boldsymbol{z}_{j}^{\top} \boldsymbol{z}_{i}$$
1 The notation $\mathcal{S}(\boldsymbol{X})$ is not a standard one, there being no standard notation that we are comfortable with. We believe that this notation has much to recommend it and will therefore use it hereafter.
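As a quick numerical check of the definition (an illustration of our own; the two vectors are arbitrary), the elementwise sum agrees with both matrix products, and the inner product is symmetric:

```python
import numpy as np

zi = np.array([1.0, 2.0, 3.0])
zj = np.array([4.0, 0.0, -1.0])

# <z_i, z_j> as a sum of elementwise products ...
ip_sum = sum(zi[t] * zj[t] for t in range(len(zi)))

# ... equals the matrix products z_i' z_j and z_j' z_i.
print(ip_sum == zi @ zj == zj @ zi)   # True
print(ip_sum)                         # 1.0
```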

## The spaces S(X) and S⊥(X)

A vector $\boldsymbol{z}$ in $E^{n}$ is conventionally represented as an arrow. This is done by connecting the point $\boldsymbol{z}$ with the origin and putting an arrowhead at $\boldsymbol{z}$. The resulting arrow then shows graphically the two things about a vector that matter, namely, its length and its direction. The Euclidean length of a vector $\boldsymbol{z}$ is
$$\|\boldsymbol{z}\| \equiv\left(\sum_{t=1}^{n} z_{t}^{2}\right)^{1 / 2}=+\left(\boldsymbol{z}^{\top} \boldsymbol{z}\right)^{1 / 2}$$
where the notation emphasizes that $\|\boldsymbol{z}\|$ is the positive square root of the sum of the squared elements of $\boldsymbol{z}$. The direction is the vector itself normalized to have length unity, that is, $\boldsymbol{z} /\|\boldsymbol{z}\|$. One advantage of this convention is that if we move one of the arrows, being careful to change neither its length nor its direction, the new arrow represents the same vector, even though the arrowhead is now at a different point. It will often be very convenient to do this, and we therefore adopt this convention in most of our diagrams.
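The length and direction of a vector are computed directly as follows (a sketch of our own, not from the text; the vector is arbitrary):

```python
import numpy as np

z = np.array([3.0, 4.0])

# The positive square root of the sum of squared elements, (z'z)^(1/2).
length = np.sqrt(np.sum(z**2))
print(length)                      # 5.0

# The direction: z normalized to unit length.
direction = z / length
print(np.linalg.norm(direction))   # 1.0
```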

Figure 1.1 illustrates the concepts discussed above for the case $n=2$ and $k=1$. The matrix of regressors $\boldsymbol{X}$ has only one column in this case, and it is therefore represented by a single vector in the figure. As a consequence, $\mathcal{S}(\boldsymbol{X})$ is one-dimensional, and since $n=2$, $\mathcal{S}^{\perp}(\boldsymbol{X})$ is also one-dimensional. Notice that $\mathcal{S}(\boldsymbol{X})$ and $\mathcal{S}^{\perp}(\boldsymbol{X})$ would be the same if $\boldsymbol{X}$ were any point on the straight line which is $\mathcal{S}(\boldsymbol{X})$, except for the origin. This illustrates the fact that $\mathcal{S}(\boldsymbol{X})$ is invariant to any nonsingular transformation of $\boldsymbol{X}$.
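This invariance can be checked numerically: postmultiplying $\boldsymbol{X}$ by a nonsingular $k \times k$ matrix $\boldsymbol{A}$ changes the columns but not the subspace they span, so the orthogonal projection onto that subspace is unchanged. The following sketch is our own illustration (the matrices, and the use of the projection formula $\boldsymbol{X}(\boldsymbol{X}^{\top}\boldsymbol{X})^{-1}\boldsymbol{X}^{\top}$, are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3
X = rng.normal(size=(n, k))
# A fixed nonsingular k x k transformation (determinant 3).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
XA = X @ A

def proj(M):
    """Orthogonal projection matrix onto the span of M's columns."""
    return M @ np.linalg.solve(M.T @ M, M.T)

# S(X) = S(XA): both matrices project onto the same subspace.
print(np.allclose(proj(X), proj(XA)))   # True
```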

As we have seen, any point in $\mathcal{S}(\boldsymbol{X})$ can be represented by a vector of the form $\boldsymbol{X} \boldsymbol{\beta}$ for some $k$-vector $\boldsymbol{\beta}$. If one wants to find the point in $\mathcal{S}(\boldsymbol{X})$ that is closest to a given vector $\boldsymbol{y}$, the problem to be solved is that of minimizing, with respect to the choice of $\boldsymbol{\beta}$, the distance between $\boldsymbol{y}$ and $\boldsymbol{X} \boldsymbol{\beta}$. Minimizing this distance is evidently equivalent to minimizing the square of this distance.
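Minimizing the squared distance $\|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\|^{2}$ is exactly the OLS problem. In this sketch of our own (arbitrary data; the solution is computed from the normal equations $\boldsymbol{X}^{\top}\boldsymbol{X}\boldsymbol{\beta}=\boldsymbol{X}^{\top}\boldsymbol{y}$), no perturbed coefficient vector achieves a smaller squared distance than the least squares estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 2
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# OLS: the beta minimizing ||y - X beta||, via the normal equations.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Squared distance between y and X beta as a function of beta.
d2 = lambda b: np.sum((y - X @ b) ** 2)

# No randomly perturbed beta does better than beta_hat.
perturbed = [beta_hat + 0.1 * rng.normal(size=k) for _ in range(100)]
print(all(d2(beta_hat) <= d2(b) for b in perturbed))   # True
```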
