### Parameter-Centric Analysis Grossly Exaggerates Certainty

## Over-Certainty

Suppose an observable $y \sim \operatorname{Normal}(0,1)$; i.e., we characterize the uncertainty in an observable $y$ with a normal distribution with known parameters (never mind how we know them). Obviously, we do not know with exactness what any future value of $y$ will be, but we can state probabilities (of intervals) for future observables using this model.
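To make this concrete, here is a minimal sketch of stating interval probabilities for a future observable under this model (the use of `scipy` is my assumption; the text names no software):

```python
# y ~ Normal(0, 1) with parameters taken as known: the parameters carry
# zero uncertainty, but future values of y can only be given probabilities.
from scipy.stats import norm

model = norm(loc=0, scale=1)

# Probability that a future observable lands in the interval (-1, 1).
p = model.cdf(1) - model.cdf(-1)
print(round(p, 4))  # 0.6827
```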

It might seem an odd way of stating it, but in a very real sense we are infinitely more certain about the value of the model parameters than we are about values of the observable. We are certain of the parameters' values, but we have uncertainty in the observable. In other words, we know what the parameters are, but we do not know what values the observable will take. If the amount of uncertainty has any kind of measure, it would be 0 for the value of the parameters in this model, and something positive for the value of the observable. The ratio of these uncertainties, observable to parameters, would be infinite.

That trivial deduction is the proof that, at least for this model, certainty in model parameters is not equivalent to certainty in values of the observable. It would be an obvious gaffe, not even worth mentioning, were somebody to report the uncertainty in the parameters as if it were the same as the uncertainty in the observable.

Alas, this is what is routinely done in probability models; see Chap. 10 of Briggs (2016). Open almost any sociology or economics journal and you will find the mistake being made everywhere. If predictive analysis were used instead of parameter-based or testing-based analysis, this mistake would disappear; see e.g. Ando (2007), Arjas and Andreev (2000), Berkhof and van Mechelen (2000), Clarke and Clarke (2018). Some measure of sanity would then return to those fields which are used to broadcasting "novel" results based on statistical model parameters.
The techniques to be described do not work for all probability models; only those models where the parameters are “like” the observables in a sense to be described.

## Theory

There are several candidates for a measure of total uncertainty in a proposition. Since all probability is conditional, this measure will be, too. A common measure is variance; another is the length of the highest (credible) density interval. There are more, such as entropy, which although attractive has a limitation described in the final section. I prefer here the length of credible intervals because, in many models, they are stated in predictive terms, in units of the observable, using plain-language probability statements. Example: "There is a $90\%$ chance $y$ is in $(a, b)$."
In the $y \sim \operatorname{Normal}(0,1)$ example, the variance of the uncertainty of either parameter is 0, as is the length of any kind of probability interval around them. The variance of the observable is 1, and the length of the $1-\alpha$ density interval around the observable $y$ is well known to be $2 z_{\alpha / 2}$, where $z_{\alpha / 2} \approx 2$. The ratio of variances, parameter to observable, is $0 / 1=0$. The ratio of the lengths of credible intervals, here observable to parameter, is $4 / 0=\infty$.
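The interval lengths just quoted can be checked directly; a sketch (again assuming `scipy`):

```python
from scipy.stats import norm

alpha = 0.05
lo, hi = norm(0, 1).interval(1 - alpha)  # central interval = HDI for a symmetric unimodal
obs_len = hi - lo                        # 2 * z_{alpha/2} ~ 3.92, roughly 4
param_len = 0.0                          # the parameters are known exactly

print(round(obs_len, 2))  # 3.92
# The ratio obs_len / param_len is infinite, as claimed.
```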

We pick the ratio of the lengths of the $1-\alpha$ credible intervals, observable to parameter, to indicate the amount of over-certainty. If not otherwise indicated, I let $\alpha$ equal the magic number.

In the simple Normal example, as said in the beginning, if somebody were to make the mistake of claiming the uncertainty in the observable was identical to the uncertainty of the parameters, he would be making the worst possible mistake. Naturally, in situations like this, few or none would make this blunder.

Things change, though, and for no good reason, when there is uncertainty in the parameter. In these cases, the mistake of confusing kinds of uncertainty happens frequently, almost exclusively.
The simplest models with parameter uncertainty follow this schema:

$$p(y \mid \mathrm{DB})=\int_{\theta} p(y \mid \theta, \mathrm{DB}) p(\theta \mid \mathrm{DB}) d \theta,$$
where $\mathrm{D}=y_{1}, \ldots, y_{n}$ represents old measured or assumed values of the observable, and $\mathrm{B}$ represents the background information that insisted on the model formulation used. D need not be present. B must always be; it will contain the reasoning for the model form $p(y \mid \theta \mathrm{DB})$, the form of the model of the uncertainty in the parameters $p(\theta \mid \mathrm{DB})$, and the values of hyperparameters, if any. Obviously, if there are two (or more) contenders $i$ and $j$ for priors on the parameters, then in general $p\left(y \mid \mathrm{DB}_{i}\right) \neq p\left(y \mid \mathrm{DB}_{j}\right)$. And if there are two (or more) sets of $\mathrm{D}$, $k$ and $l$, then in general $p\left(y \mid \mathrm{D}_{k} \mathrm{B}\right) \neq p\left(y \mid \mathrm{D}_{l} \mathrm{B}\right)$. Both $\mathrm{D}$ and $\mathrm{B}$ may differ simultaneously, too.
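The schema above can be approximated by simulation: draw $\theta$ from $p(\theta \mid \mathrm{DB})$, then $y$ from $p(y \mid \theta, \mathrm{DB})$. A sketch under assumed illustrative choices (a Normal parameter distribution and Normal data model, neither specified in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
draws = 100_000

theta = rng.normal(0.0, 0.5, size=draws)  # draws from p(theta | DB), assumed Normal(0, 0.25)
y = rng.normal(theta, 1.0)                # draws from p(y | theta, DB), assumed Normal(theta, 1)

# The predictive variance blends both uncertainties: 1 + 0.5**2 = 1.25,
# larger than either the parameter variance or the data-model variance alone.
print(round(float(y.var()), 2))
```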
It is worth repeating that unless one can deduce from B the form of the model (from the first principles of B), observables do not “have” probabilities. All probability is conditional: change the conditions, change the probability. All probability models are conditional on some $\mathrm{D}$ (even if null) and $\mathrm{B}$. Change either, change the probability. Thus all measures of over-certainty are also conditional on D and B.
If $\mathrm{D}$ is not null, i.e. past observations exist, then of course
$$p(\theta \mid \mathrm{DB})=\frac{p(\mathrm{D} \mid \theta \mathrm{B})\, p(\theta \mid \mathrm{B})}{\int_{\theta} p(\mathrm{D} \mid \theta \mathrm{B})\, p(\theta \mid \mathrm{B})\, d \theta}.$$
The variances of $p(y \mid \mathrm{DB})$ or $p(\theta \mid \mathrm{DB})$ can be looked up if the model forms are common, or estimated if not.

Computing the highest density regions or intervals (HDI) of a probability distribution is only slightly more difficult, because multi-modal distributions may not have contiguous regions. We adopt the definition of Hyndman (2012). The $1-\alpha$ highest-density region $R$ is the subset $R\left(p_{\alpha}\right)$ of $y$ such that $R\left(p_{\alpha}\right)=\left\{y: p(y) \geq p_{\alpha}\right\}$, where $p_{\alpha}$ is the largest constant such that $\operatorname{Pr}\left(y \in R\left(p_{\alpha}\right) \mid \mathrm{DB}\right) \geq 1-\alpha$. For unimodal distributions, this boils down to taking the shortest continuous interval containing $1-\alpha$ probability. These, too, are computed for many packaged distributions. For the sake of brevity, all HDIs will be called here "credible intervals."
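For unimodal distributions, the shortest-interval characterization gives a simple numerical recipe: slide a $1-\alpha$ probability window along the quantile function and keep the narrowest interval. A sketch (the function name and the Gamma example are mine, not from the text):

```python
import numpy as np
from scipy.stats import gamma

def hdi_unimodal(dist, alpha=0.05, grid=10_000):
    """Shortest continuous interval containing 1 - alpha probability."""
    p = np.linspace(0, alpha, grid)  # candidate lower-tail probabilities
    lo = dist.ppf(p)
    hi = dist.ppf(p + 1 - alpha)
    k = np.argmin(hi - lo)           # the narrowest window wins
    return float(lo[k]), float(hi[k])

# A right-skewed example: the HDI is shorter than the equal-tailed interval.
lo, hi = hdi_unimodal(gamma(a=3))
et_lo, et_hi = gamma(a=3).interval(0.95)
print(hi - lo <= et_hi - et_lo)  # True
```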

It will turn out that comparing parameters to observables cannot always be done. This is when the parameters are not "like" the observable; when they are not measured in the same units, for example. This limitation will be detailed in the final section.

## Analytic Examples

Let $y \sim \operatorname{Poisson}(\lambda)$, with conjugate prior $\lambda \sim \operatorname{Gamma}(\alpha, \beta)$. The posterior on $\lambda$ is distributed $\operatorname{Gamma}\left(\sum y+\alpha, n+\beta\right)$ (shape and rate parameters). The posterior predictive distribution is Negative Binomial, with parameters $\left(\sum y+\alpha, \frac{1}{n+\beta+1}\right)$. The means of both the parameter posterior and predictive posterior are $\frac{\sum y+\alpha}{n+\beta}$. The variance of the parameter posterior is $\frac{\sum y+\alpha}{(n+\beta)^{2}}$, while the variance of the predictive posterior is $\frac{\sum y+\alpha}{(n+\beta)^{2}}(n+\beta+1)$. The ratio of the means, independent of both $\alpha$ and $\beta$, is 1. The ratio of the parameter to predictive variance, independent of $\alpha$, is $1 /(n+\beta+1)$.
It is obvious, for finite $\beta$, that this ratio tends to 0 in the limit. This recapitulates the point that eventually the value of the parameter becomes certain, i.e. its variance tends toward 0, while the uncertainty in the observable $y$ remains at some finite level. One quantification of the exaggeration of certainty is thus equal to $(n+\beta+1)$.
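A sketch of this variance comparison with assumed illustrative data (the counts and hyperparameter values are mine):

```python
import numpy as np

y = np.array([3, 1, 4, 1, 5, 9, 2, 6])  # assumed past Poisson counts
n, a, b = len(y), 1.0, 1.0              # Gamma(alpha, beta) prior; beta is a rate

s = y.sum()
post_var = (s + a) / (n + b) ** 2  # parameter-posterior (Gamma) variance
pred_var = post_var * (n + b + 1)  # predictive-posterior (Neg. Binomial) variance

# Over-certainty factor: predictive/parameter variance ratio = n + beta + 1.
print(round(pred_var / post_var, 1))  # 10.0
```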
Although credible intervals for both parameter and predictive posteriors can be computed easily in this case, it is sometimes an advantage to use normal approximations. Both the Gamma and Negative Binomial admit normal approximations for large $n$. The normal approximation for a $\operatorname{Gamma}\left(\sum y+\alpha, n+\beta\right)$ is $\operatorname{Normal}\left(\left(\sum y+\alpha\right) /(n+\beta),\left(\sum y+\alpha\right) /(n+\beta)^{2}\right)$. The normal approximation for a Negative $\operatorname{Binomial}\left(\sum y+\alpha, \frac{1}{n+\beta+1}\right)$ is $\operatorname{Normal}\left(\left(\sum y+\alpha\right) /(n+\beta),(n+\beta+1)\left(\sum y+\alpha\right) /(n+\beta)^{2}\right)$.

The length of the $1-\tau$ credible interval, equivalently the $z_{\tau / 2}$ interval, for any normal distribution is $2 z_{\tau / 2} \sigma$. Thus the ratio of predictive to parameter posterior interval lengths is independent of $\tau$ and to first approximation equal to $\sqrt{n+\beta+1}$. Stated another way, the predictive posterior interval will be about $\sqrt{n+\beta+1}$ times longer than the parameter posterior interval. Most pick a $\beta$ around or equal to 1; thus for large $n$ the over-certainty grows as $\sqrt{n}$. That is large over-certainty by any definition.
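The same comparison can be made with exact intervals rather than normal approximations; a sketch with assumed numbers (note `scipy` parameterizes the Negative Binomial by the success probability $(n+\beta)/(n+\beta+1)$, the complement of the text's $1/(n+\beta+1)$):

```python
from scipy.stats import gamma, nbinom

s, n, a0, b0 = 31, 8, 1.0, 1.0  # assumed sum of counts, sample size, hyperparameters

post = gamma(a=s + a0, scale=1 / (n + b0))       # parameter posterior
pred = nbinom(s + a0, (n + b0) / (n + b0 + 1))   # predictive posterior

lo_p, hi_p = post.interval(0.95)
lo_y, hi_y = pred.interval(0.95)

ratio = (hi_y - lo_y) / (hi_p - lo_p)
# To first approximation sqrt(n + b0 + 1) = sqrt(10) ~ 3.16; discreteness of
# the Negative Binomial makes the exact ratio only roughly equal to this.
print(round(ratio, 2))
```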

Also to a first approximation, the ratio of parameter to predictive credible interval lengths tends to 0 with $n$. Stated another way, the length of the credible interval for the parameter tends to 0, while the length of the credible interval for the observable tends to a fixed finite number.
