Math Assignment Help | AM221 Convex Optimization
Statistics-lab™ can provide homework, exam, and tutoring services for the harvard.edu course AM221 Convex Optimization!

AM221 Convex Optimization Course Overview
This is a graduate-level course on optimization. The course covers mathematical programming and combinatorial optimization from the perspective of convex optimization, which is a central tool for solving large-scale problems. In recent years convex optimization has had a profound impact on statistical machine learning, data analysis, mathematical finance, signal processing, control, theoretical computer science, and many other areas. The first part will be dedicated to the theory of convex optimization and its direct applications. The second part will focus on advanced techniques in combinatorial optimization using machinery developed in the first part.
COURSE LOGISTICS
Instructor: Yaron Singer (OH: Wednesdays 4-5pm, MD239)
Teaching fellows:
- Thibaut Horel (OH: Tuesdays 4:30-5:30pm, MD’s 2nd floor lounge)
- Rajko Radovanovic (OH: Mondays 5:30-6:30pm, MD’s 2nd floor lounge)
Time: Monday & Wednesday, 2:30pm-4:00pm
Room: MD119
Sections: Wednesdays 5:30-7:00pm in MD221
AM221 Convex Optimization HELP (EXAM HELP, ONLINE TUTOR)
Problem 1 (20pts). Let $A \in \mathbb{R}^{m \times n}$ be given. We are interested in finding a vector $x \in \mathbb{R}_{+}^n$ such that $A x=\mathbf{0}$ and the number of positive components of $x$ is maximized. Formulate this problem as a linear program. Justify your answer.
We want a vector $x\in\mathbb{R}_+^n$ with $Ax=\mathbf{0}$ whose number of positive components is maximized. This can be formulated as the following linear program:
\begin{alignat*}{3} &\text{maximize} \quad && \sum_{i=1}^{n} z_i \\ &\text{subject to} \quad && A x = \mathbf{0} \\ & && 0 \leq z_i \leq 1 \quad (i=1,\dots,n) \\ & && z_i \leq x_i \quad (i=1,\dots,n) \\ & && x_i \geq 0 \quad (i=1,\dots,n). \end{alignat*}
Here, $z_i$ acts as a linear surrogate for the indicator of $x_i > 0$: the constraints $0 \leq z_i \leq \min(x_i, 1)$ force $z_i = 0$ whenever $x_i = 0$ and allow $z_i = 1$ whenever $x_i \geq 1$. (Note that declaring $z_i \in \{0,1\}$ would yield an integer program, not a linear program; the continuous relaxation above suffices.)
To see that the optimal value equals the maximum number of positive components, observe that the feasible set $\{x \in \mathbb{R}_+^n : Ax = \mathbf{0}\}$ is a cone: if $x$ is feasible, then so is $tx$ for every $t \geq 0$, and scaling does not change which components are positive. Given a feasible $x$ with $k$ positive components, scale it so that every positive component is at least $1$, then set $z_i = 1$ on the support of $x$ and $z_i = 0$ elsewhere; this feasible pair attains objective value $k$. Conversely, for any feasible $(x, z)$ the constraints give $z_i = 0$ whenever $x_i = 0$ and $z_i \leq 1$ otherwise, so $\sum_i z_i$ is at most the number of positive components of $x$. Therefore the optimal value of this LP is exactly the maximum number of positive components, and an optimal $x$ solves the original problem.
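The formulation can be checked numerically with `scipy.optimize.linprog`, relaxing the indicators to $0 \le z_i \le 1$ with $z_i \le x_i$ (this suffices because the feasible set is a cone). This is an illustrative sketch, assuming `scipy` is available; `max_support` is a hypothetical helper name, not part of the course material.

```python
import numpy as np
from scipy.optimize import linprog

def max_support(A):
    """Maximize sum(z) s.t. Ax = 0, 0 <= z <= 1, z <= x, x >= 0.
    Decision variables are stacked as [x; z]."""
    m, n = A.shape
    # linprog minimizes, so use -sum(z) as the objective.
    c = np.concatenate([np.zeros(n), -np.ones(n)])
    # Equality constraint A x = 0 (z gets zero coefficients).
    A_eq = np.hstack([A, np.zeros((m, n))])
    b_eq = np.zeros(m)
    # Inequality constraint z_i - x_i <= 0.
    A_ub = np.hstack([-np.eye(n), np.eye(n)])
    b_ub = np.zeros(n)
    bounds = [(0, None)] * n + [(0, 1)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return round(-res.fun)

# Example: A x = 0 forces x1 = x2 while x3 is free, so all
# three components can be positive, e.g. x = (1, 1, 1).
A = np.array([[1.0, -1.0, 0.0]])
print(max_support(A))  # 3
```

With $A = [1 \; 1]$ the only feasible point is $x = \mathbf{0}$, and the LP correctly returns $0$.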
Problem 2 (25pts).
(a) (15pts). Let $S=\left\{x \in \mathbb{R}^n: x^T A x+b^T x+c \leq 0\right\}$, where $A \in \mathcal{S}^n$, $b \in \mathbb{R}^n$, and $c \in \mathbb{R}$ are given. Show that $S$ is convex if $A \succeq \mathbf{0}$. Is the converse true? Explain.
(b) (10pts). Is the set $S=\left\{X \in \mathcal{S}^n: \lambda_{\max }(X) \geq 1\right\}$ convex? Justify your answer.
(a) To show that $S$ is convex, we need to show that for any $x, y \in S$ and $\lambda \in [0,1]$, we have $\lambda x+(1-\lambda) y \in S$. That is, we need to show that $(\lambda x+(1-\lambda) y)^T A (\lambda x+(1-\lambda) y)+b^T(\lambda x+(1-\lambda) y)+c \leq 0$.
Expanding the quadratic term, \begin{align*} (\lambda x+(1-\lambda) y)^T A (\lambda x+(1-\lambda) y) &= \lambda^2 x^T A x + 2\lambda(1-\lambda) x^T A y + (1-\lambda)^2 y^T A y \\ &= \lambda\, x^T A x + (1-\lambda)\, y^T A y - \lambda(1-\lambda) (x-y)^T A (x-y), \end{align*} where the second equality follows by expanding $(x-y)^T A (x-y)$ and collecting terms. Since $A \succeq \mathbf{0}$, we have $(x-y)^T A (x-y) \geq 0$, and therefore \begin{align*} &(\lambda x+(1-\lambda) y)^T A (\lambda x+(1-\lambda) y)+b^T(\lambda x+(1-\lambda) y)+c \\ &\quad\leq \lambda\, x^T A x + (1-\lambda)\, y^T A y + \lambda\, b^T x + (1-\lambda)\, b^T y + \lambda c + (1-\lambda) c \\ &\quad= \lambda\left(x^T A x+b^T x+c\right) + (1-\lambda)\left(y^T A y+b^T y+c\right) \leq 0, \end{align*} where the last inequality uses $x, y \in S$. Hence $\lambda x+(1-\lambda) y \in S$, so $S$ is convex. (Equivalently: the function $f(x)=x^T A x+b^T x+c$ has Hessian $2A \succeq \mathbf{0}$ and is therefore convex, and $S$ is its $0$-sublevel set, which is convex.)
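The key identity in the derivation, $\lambda f(x) + (1-\lambda) f(y) - f(\lambda x + (1-\lambda) y) = \lambda(1-\lambda)(x-y)^T A (x-y)$, can be sanity-checked numerically. A minimal sketch with randomly generated data, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Build a random positive semidefinite A = M^T M, plus random b, c.
M = rng.standard_normal((n, n))
A = M.T @ M
b = rng.standard_normal(n)
c = -1.0

def f(v):
    return v @ A @ v + b @ v + c

x, y = rng.standard_normal(n), rng.standard_normal(n)
lam = 0.3
# Convexity gap of f between x and y at mixing weight lam.
gap = lam * f(x) + (1 - lam) * f(y) - f(lam * x + (1 - lam) * y)
identity = lam * (1 - lam) * (x - y) @ A @ (x - y)
print(np.isclose(gap, identity), gap >= 0)  # True True
```

The linear and constant parts of $f$ cancel in the gap, so the gap equals the quadratic expression exactly (up to floating point), and it is nonnegative because $A \succeq \mathbf{0}$.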
The converse is not true. To see this, consider $n=1$, $A=-1$, $b=0$, $c=0$. Then $S=\{x \in \mathbb{R} : -x^2 \leq 0\} = \mathbb{R}$, which is convex even though $A = -1 \not\succeq 0$.
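For part (b), a quick numeric probe (an illustration, not a proof) suggests what happens for $n \geq 2$: one can exhibit two matrices in $S$ whose midpoint leaves $S$, since $\{X : \lambda_{\max}(X) \geq 1\}$ is a superlevel set of the convex function $\lambda_{\max}$.

```python
import numpy as np

# Two symmetric matrices, each with largest eigenvalue exactly 1.
X = np.diag([1.0, -1.0])
Y = np.diag([-1.0, 1.0])
mid = 0.5 * (X + Y)  # the zero matrix

# eigvalsh returns eigenvalues in ascending order; [-1] is the largest.
lmax = lambda M: np.linalg.eigvalsh(M)[-1]
print(lmax(X), lmax(Y), lmax(mid))  # 1.0 1.0 0.0
```

Here $X, Y \in S$ but their midpoint has $\lambda_{\max} = 0 < 1$, so the midpoint is not in $S$.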
Textbooks
• An Introduction to Stochastic Modeling, Fourth Edition by Pinsky and Karlin (freely available through the university library)
• Essentials of Stochastic Processes, Third Edition by Durrett (freely available through the university library)
To reiterate, the textbooks are freely available through the university library. Note that
you must be connected to the university Wi-Fi or VPN to access the ebooks from the library
links. Furthermore, the library links take some time to populate, so do not be alarmed if
the webpage looks bare for a few seconds.
