### Market Risk, Measures and Portfolio


## CONSTRAINED OPTIMIZATION

When constructing optimization problems to solve practical issues, it is very often the case that certain constraints need to be imposed in order for the optimal solution to make practical sense. For example, long-only portfolio optimization problems require that the portfolio weights, which are the optimization variables, be nonnegative and sum to one. In the notation of this chapter, this corresponds to a problem of the type
$$
\begin{array}{rl}
\min_{x} & f(x) \\
\text{subject to} & x^{\prime} e = 1 \\
& x \geq 0,
\end{array}
\tag{2.10}
$$
where:
$f(x)$ is the objective function.
$e \in \mathbb{R}^{n}$ is a vector of ones, $e=(1, \ldots, 1)$.
$x^{\prime} e$ equals the sum of all components of $x$, $x^{\prime} e=\sum_{i=1}^{n} x_{i}$.
$x \geq 0$ means that all components of the vector $x \in \mathbb{R}^{n}$ are nonnegative.
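A problem of this type can be sketched numerically with a general-purpose solver. The matrix below is an illustrative choice (the same numbers used in the examples later in this section), not part of problem (2.10) itself:

```python
# Sketch of the long-only problem (2.10): minimize a quadratic
# f(x) = 0.5 * x'Cx over the simplex {x : x'e = 1, x >= 0}.
# C is an illustrative positive definite matrix.
import numpy as np
from scipy.optimize import minimize

C = np.array([[1.0, 0.4],
              [0.4, 1.0]])

def f(x):
    return 0.5 * x @ C @ x

n = C.shape[0]
constraints = [{"type": "eq", "fun": lambda x: x.sum() - 1.0}]  # x'e = 1
bounds = [(0.0, None)] * n                                      # x >= 0

res = minimize(f, x0=np.full(n, 1.0 / n), bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x)  # the optimal long-only weights
```

For this symmetric $C$, the optimal weights are equal, $x = (0.5, 0.5)$.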
In problem (2.10), we are searching for the minimum of the objective function by varying $x$ only in the set
$$\mathbf{X}=\left\{x \in \mathbb{R}^{n}: \begin{array}{l} x^{\prime} e=1 \\ x \geq 0 \end{array}\right\} \tag{2.11}$$
which is also called the set of feasible points or the constraint set. A more compact notation, similar to the notation in the unconstrained problems, is sometimes used,
$$\min _{x \in \mathbf{X}} f(x)$$
where $\mathbf{X}$ is defined in equation (2.11).
We distinguish between different types of optimization problems depending on the assumed properties of the objective function and the constraint set. If the constraint set contains only equalities, the problem is easier to handle analytically; in this case, the method of Lagrange multipliers is applied. For more general constraint sets, formed by both equalities and inequalities, the method of Lagrange multipliers is generalized by the Karush-Kuhn-Tucker (KKT) conditions. Like the first-order conditions we considered in unconstrained optimization problems, neither of the two approaches leads to necessary and sufficient conditions for constrained optimization problems without further assumptions. One of the most general frameworks in which the KKT conditions are necessary and sufficient is that of convex programming. We have a convex programming problem if the objective function is a convex function and the set of feasible points is a convex set. Linear programming and convex quadratic programming problems are considered as important subcases of convex optimization.
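Convexity of a quadratic objective is easy to verify in practice: $\frac{1}{2}x^{\prime}Cx$ is convex exactly when $C$ is positive semidefinite. A quick eigenvalue check, using the matrix that appears in the examples later in this section:

```python
# A quadratic 0.5 * x'Cx is convex iff C is positive semidefinite,
# i.e. all eigenvalues of the symmetric matrix C are nonnegative.
import numpy as np

C = np.array([[1.0, 0.4],
              [0.4, 1.0]])
eigs = np.linalg.eigvalsh(C)  # eigenvalues in ascending order
print(eigs)  # all positive (0.6 and 1.4), so the quadratic is convex
```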

In this section, we first describe the method of Lagrange multipliers, which is often applied to special types of mean-variance optimization problems in order to obtain closed-form solutions. We then proceed with convex programming, which is the framework for reward-risk analysis. These applications of constrained optimization are covered in Chapters 8, 9, and 10.

## Lagrange Multipliers

Consider the following optimization problem in which the set of feasible points is defined by a number of equality constraints,
$$
\begin{array}{rl}
\min_{x} & f(x) \\
\text{subject to} & h_{1}(x)=0 \\
& h_{2}(x)=0 \\
& \cdots \\
& h_{k}(x)=0
\end{array}
\tag{2.12}
$$
The functions $h_{i}(x)$, $i=1, \ldots, k$, build up the constraint set. Note that even though the right-hand side of the equality constraints is zero in the classical formulation given in equation (2.12), this is not restrictive. If in a practical problem the right-hand side happens to be different from zero, the constraint can be equivalently transformed, for example,
$$\left\{x \in \mathbb{R}^{n}: v(x)=c\right\} \quad \Longleftrightarrow \quad \left\{x \in \mathbb{R}^{n}: h_{1}(x)=v(x)-c=0\right\}.$$
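This transformation is routine when passing constraints to a solver. A minimal sketch, with an illustrative $v(x) = x^{\prime}e$ and $c = 1$ (the budget constraint):

```python
# A constraint v(x) = c rewritten in the zero right-hand-side form
# h1(x) = v(x) - c = 0 that solver interfaces typically expect.
# v, c, and the sample point are illustrative.
import numpy as np

def v(x):                 # v(x) = x'e, the sum of the components
    return float(np.sum(x))

c = 1.0

def h1(x):                # equivalent constraint: h1(x) = 0
    return v(x) - c

x = np.array([0.25, 0.75])   # feasible: components sum to one
print(h1(x))  # 0.0
```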
In order to illustrate the necessary condition for optimality valid for (2.12), let us consider the following two-dimensional example:
$$
\begin{aligned}
\min_{x \in \mathbb{R}^{2}} \quad & \frac{1}{2} x^{\prime} C x \\
\text{subject to} \quad & x^{\prime} e=1,
\end{aligned}
\tag{2.13}
$$

where the matrix is
$$C=\left(\begin{array}{cc} 1 & 0.4 \\ 0.4 & 1 \end{array}\right).$$
The objective function is a quadratic function and the constraint set contains one linear equality. In Chapter 8, we see that the mean-variance optimization problem in which short positions are allowed is very similar to (2.13). The surface of the objective function and the constraint are shown on the top plot in Figure 2.7. The black line on the surface shows the function values of the feasible points. Geometrically, solving problem (2.13) reduces to finding the lowest point of the black curve on the surface. The contour lines shown on the bottom plot in Figure 2.7 imply that the feasible point yielding the minimum of the objective function is where a contour line is tangential to the line defined by the equality constraint. On the plot, the tangential contour line and the feasible points are in bold. The black dot indicates the position of the point at which the objective function attains its minimum subject to the constraints.
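The tangency picture can be checked numerically: at the minimizer of (2.13), the gradient of the objective, $Cx$, is proportional to the gradient of the constraint, $e$. A short sketch (the minimizer $(0.5, 0.5)$ is stated here; it is derived by the Lagrange method below):

```python
# Numerical check of the tangency condition for problem (2.13):
# at the minimizer, grad f(x) = Cx is proportional to e = (1, 1),
# the gradient of the constraint x'e - 1 = 0.
import numpy as np

C = np.array([[1.0, 0.4],
              [0.4, 1.0]])
x_star = np.array([0.5, 0.5])   # minimizer of (2.13)
grad_f = C @ x_star             # gradient of 0.5 * x'Cx at x_star
e = np.ones(2)                  # gradient of the constraint

# both components of grad_f equal 0.7, i.e. grad_f = 0.7 * e
print(grad_f)
```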

Even though the example is not general, in the sense that the constraint set contains one linear rather than a nonlinear equality, the same geometric intuition applies in the nonlinear case. The fact that the minimum is attained where a contour line is tangential to the curve defined by the nonlinear equality constraints is expressed in mathematical language in the following way: the gradient of the objective function at the point yielding the minimum is a linear combination of the gradients of the functions defining the constraint set. Formally, this is stated as
$$\nabla f\left(x^{0}\right)-\mu_{1} \nabla h_{1}\left(x^{0}\right)-\cdots-\mu_{k} \nabla h_{k}\left(x^{0}\right)=0, \tag{2.14}$$
where $\mu_{i}, i=1, \ldots, k$ are some real numbers called Lagrange multipliers and the point $x^{0}$ is such that $f\left(x^{0}\right) \leq f(x)$ for all $x$ that are feasible. Note that if there are no constraints in the problem, then (2.14) reduces to the first-order condition we considered in unconstrained optimization. Therefore, the system of equations behind (2.14) can be viewed as a generalization of the first-order condition in the unconstrained case.

The method of Lagrange multipliers basically associates a function with the problem in (2.12) such that the first-order condition for unconstrained optimization of that function coincides with (2.14). The method of Lagrange multipliers consists of the following steps.
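Applied to the example (2.13), the method can be carried out symbolically: form the Lagrangian, differentiate with respect to the variables and the multiplier, and solve the resulting system. A sketch using SymPy:

```python
# The Lagrange-multiplier method applied to problem (2.13):
# minimize 0.5 * x'Cx subject to x'e = 1, with C from the text,
# so that f = 0.5*(x1^2 + x2^2) + 0.4*x1*x2.
import sympy as sp

x1, x2, mu = sp.symbols("x1 x2 mu", real=True)
f = sp.Rational(1, 2) * (x1**2 + x2**2) + sp.Rational(2, 5) * x1 * x2
h = x1 + x2 - 1                     # equality constraint x'e - 1 = 0

L = f - mu * h                      # the Lagrangian
stationarity = [sp.diff(L, v) for v in (x1, x2, mu)]
sol = sp.solve(stationarity, [x1, x2, mu], dict=True)[0]
print(sol[x1], sol[x2], sol[mu])   # 1/2 1/2 7/10
```

The stationary point $x^{0} = (1/2, 1/2)$ with multiplier $\mu = 7/10$ matches the tangency point shown in Figure 2.7.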

## Convex Programming

The general form of convex programming problems is the following:
$$
\begin{array}{rl}
\min_{x} & f(x) \\
\text{subject to} & g_{i}(x) \leq 0, \quad i=1, \ldots, m \\
& h_{j}(x)=0, \quad j=1, \ldots, k,
\end{array}
\tag{2.17}
$$

where:
$f(x)$ is a convex objective function.
$g_{1}(x), \ldots, g_{m}(x)$ are convex functions defining the inequality constraints.
$h_{1}(x), \ldots, h_{k}(x)$ are affine functions defining the equality constraints.
Generally, without the assumptions of convexity, problem (2.17) is more involved than (2.12) because, besides the equality constraints, there are inequality constraints. The KKT condition, generalizing the method of Lagrange multipliers, is then only a necessary condition for optimality. Adding the assumption of convexity, however, makes the KKT condition necessary and sufficient.

Note that, similar to problem (2.12), the fact that the right-hand side of all constraints is zero is nonrestrictive. The limits can be arbitrary real numbers.
Consider the following two-dimensional optimization problem:
$$
\begin{aligned}
\min_{x \in \mathbb{R}^{2}} \quad & \frac{1}{2} x^{\prime} C x \\
\text{subject to} \quad & \left(x_{1}+2\right)^{2}+\left(x_{2}+2\right)^{2} \leq 3,
\end{aligned}
\tag{2.18}
$$
in which
$$C=\left(\begin{array}{cc} 1 & 0.4 \\ 0.4 & 1 \end{array}\right).$$
The objective function is a two-dimensional convex quadratic function and the function in the constraint set is also a convex quadratic function. In fact, the boundary of the feasible set is a circle with radius $\sqrt{3}$ centered at the point with coordinates $(-2,-2)$. The top plot in Figure 2.8 shows the surface of the objective function and the set of feasible points. The shaded part of the surface indicates the function values of all feasible points, and solving problem (2.18) reduces to finding the lowest point on this shaded part. The bottom plot shows the contour lines of the objective function together with the feasible set, which is in gray. Geometrically, the point in the feasible set yielding the minimum of the objective function is positioned where a contour line only touches the constraint set. The position of this point is marked with a black dot and the tangential contour line is given in bold.

Note that the solution points of problems of the type (2.18) need not lie on the boundary of the feasible set; they can be in the interior. For example, suppose that the radius of the circle defining the boundary of the feasible set in (2.18) is large enough that the point $(0,0)$ is inside the feasible set. Then the point $(0,0)$ is the solution to problem (2.18), because at this point the objective function attains its global minimum.
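Both cases can be reproduced numerically. A sketch with SciPy's SLSQP solver: the first call uses the radius $\sqrt{3}$ from (2.18), the second uses a larger, illustrative radius whose disc contains the origin:

```python
# Problem (2.18) solved numerically: for radius sqrt(3) the solution
# sits on the circle's boundary; enlarging the radius so the disc
# contains the origin moves the solution to the unconstrained
# minimum (0, 0). The larger radius (squared radius 16) is illustrative.
import numpy as np
from scipy.optimize import minimize

C = np.array([[1.0, 0.4],
              [0.4, 1.0]])

def f(x):
    return 0.5 * x @ C @ x

def solve(r2):
    # feasible set: (x1 + 2)^2 + (x2 + 2)^2 <= r2
    cons = [{"type": "ineq",
             "fun": lambda x: r2 - (x[0] + 2.0) ** 2 - (x[1] + 2.0) ** 2}]
    return minimize(f, x0=np.array([-2.0, -2.0]),
                    constraints=cons, method="SLSQP").x

x_small = solve(3.0)   # boundary solution, approx (-0.775, -0.775)
x_large = solve(16.0)  # origin now feasible, solution approx (0, 0)
print(x_small, x_large)
```

By symmetry, the boundary solution has equal coordinates $x_1 = x_2 = -2 + \sqrt{3/2}$.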

In the two-dimensional case, when we can visualize the optimization problem, geometric reasoning guides us to finding the optimal solution point. In a higher dimensional space, plots cannot be produced and we rely on the analytic method behind the KKT conditions.

