### Stochastic Process Statistics | MXB334


## Software for Deterministic Optimization

In this chapter, some of the basic concepts of deterministic optimization have been presented, in particular methods for the solution of nonlinear optimization problems. Nevertheless, deterministic optimization embraces several other types of problems, such as mixed-integer optimization problems or general disjunctive optimization problems. Furthermore, process engineering models usually involve a great number of constraints, which makes finding solutions for the models difficult. Because of that, the use of software for the solution of such models, and of the associated optimization problems, is mandatory. Deterministic optimization software, such as GAMS and LINDO, is available on the market. These packages use an equation-based approach: they rely on solvers to determine optimal solutions for the objective function subject to a set of constraints. Solvers are essentially optimization routines, and most of them are based on gradient methods. Both GAMS and LINDO can handle different classes of optimization problems, such as linear programming (LP), nonlinear programming (NLP), mixed-integer linear programming (MILP), and mixed-integer nonlinear programming (MINLP), using local or global solvers. The user writes the model to be solved, and the software applies a given method to optimize it in terms of the objective function. In fact, although the software provides default solvers for each type of optimization problem, the user must be careful to select the solver properly.

Deterministic optimization software can be used to solve process engineering problems when the model is available. Additional strategies may be required to make it easier to find an optimum in nonconvex solution spaces and/or to avoid falling into local optima. Nevertheless, those strategies, along with guidelines for the use of deterministic optimization software, are beyond the scope of this book. For a deeper knowledge of GAMS, the reader is referred to the user's manuals (Brooke et al., 1998; McCarl, 2004).
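The equation-based workflow described above (write the model, hand it to a gradient-based solver) can be sketched with SciPy in place of GAMS or LINDO. The problem data here are purely illustrative, not taken from the text; SLSQP is one of SciPy's gradient-based local NLP solvers.

```python
from scipy.optimize import minimize

# Hypothetical NLP: minimize f(x, y) = (x - 2)^2 + (y - 1)^2
# subject to x + y <= 2 (an illustrative constraint, not from the text).
objective = lambda v: (v[0] - 2.0) ** 2 + (v[1] - 1.0) ** 2

# SciPy encodes inequality constraints as fun(v) >= 0.
constraints = [{"type": "ineq", "fun": lambda v: 2.0 - v[0] - v[1]}]

# SLSQP is a gradient-based local solver; gradients are estimated
# numerically here, playing the role of a default NLP solver.
result = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
                  constraints=constraints)
print(result.x)  # optimum lies on the constraint boundary, near (1.5, 0.5)
```

As with GAMS or LINDO, a different choice of solver (`method=`) can change whether and how fast a solution is found, which is why selecting the solver deliberately matters.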

## Stochastic Optimization vs. Deterministic Optimization

Stochastic and deterministic methods are usually considered opposite approaches to optimization because of the differences in the foundations on which each is developed. First, deterministic methods are based on rigorous mathematical principles, mainly concepts from calculus. As seen in Chapter 2, most of these methods rely on finding solutions at which the gradient is zero, and a candidate solution is verified to be an optimum through the second derivatives, in the form of the Hessian matrix. Stochastic optimization, in contrast, is based on evaluating the objective function across the feasible region and comparing different solutions to select the best one at each iteration; occasionally, some bad solutions may be accepted in a given iteration, depending on certain selection probabilities. For convex functions, deterministic methods always ensure finding the global optimum because they are formulated to search for solutions that satisfy the optimality conditions. A stochastic optimization method may reach the global optimum or, at least, solutions close to it, even for highly nonconvex functions; this depends on proper tuning of the parameters of the algorithm. Another difference between the two classes of methods lies in the importance of the initial solution. Local deterministic methods depend strongly on initial values, because the choice of starting point may drive the search to a local optimum, depending, once more, on the convexity (or nonconvexity) of the function. By contrast, the dependence of a stochastic method on the initial solution is not that strong: if the initial values are poor, the method merely requires a larger number of iterations to reach a region close to the global optimum.
Finally, the computational time and capacity required to solve an optimization problem with deterministic methods are relatively low, whereas they are higher for a stochastic method because a wide range of potential solutions must be evaluated. Nonetheless, stochastic methods are a good alternative when dealing with highly nonconvex problems with many degrees of freedom: they reduce the difficulty of finding feasible initial solutions and avoid the need to compute derivatives, which can be a difficult task for complex functions. Moreover, stochastic methods can deal with problems whose models are unknown by considering only input-output data, following a gray-box approach.
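The contrast drawn above can be made concrete with a toy one-dimensional example. The function below is hypothetical (not from the text) and has a local minimum near x = +1 and the global minimum near x = -1; gradient descent stands in for a local deterministic method, and plain random search for a derivative-free stochastic one.

```python
import random

# Illustrative nonconvex function: local minimum near x = +1,
# global minimum near x = -1.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
df = lambda x: 4.0 * x * (x * x - 1.0) + 0.3  # analytic derivative

# Deterministic: gradient descent, strongly dependent on the initial value.
x = 2.0  # a poor starting point
for _ in range(5000):
    x -= 0.01 * df(x)
# x is now trapped in the local optimum on the positive side.

# Stochastic: derivative-free random search over the feasible region;
# no gradient and no careful initial solution are needed, at the cost
# of many more function evaluations.
random.seed(0)
best = min((random.uniform(-2.0, 2.0) for _ in range(5000)), key=f)
# best lands near the global optimum on the negative side.
print(x, best, f(x), f(best))
```

The deterministic run uses 5000 cheap gradient steps but its outcome is fixed by the starting point; the stochastic run spends its 5000 evaluations sampling the whole interval, which is exactly the time-versus-robustness trade-off described above.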

## Stochastic Optimization with Constraints

The aforementioned generalities about stochastic optimization are valid for the solution of unconstrained problems. Nevertheless, most engineering problems have a set of constraints associated with the objective function, given by the model of the system under study, and these must be considered to obtain feasible solutions. The basic stochastic optimization methods cannot deal with constrained problems; thus, strategies have been developed to allow solving such problems. The strategies used to handle constraints in stochastic optimization can be classified as follows: penalty functions, special representations and operators, repair algorithms, separation of objectives and constraints, and hybrid methods (Coello Coello, 2002). One of the most widely used constraint-handling methods is the use of penalty functions; this strategy is explained in this section.

One of the first reports on the use of penalty functions to deal with constrained optimization problems was presented by Carroll (1961). In general, the method involves adding to or subtracting from the objective function a certain quantity that depends on how severely the constraints are violated. Thus, the constrained problem is converted into an unconstrained one. Because most metaheuristic optimization methods involve selecting the best solution at each iteration, if the objective function of a given solution is worsened because it violates one or more constraints, the probability of that solution being selected as a good one is reduced, and solutions satisfying all the constraints are more likely to be chosen.

One of the basic approaches to implementing the penalty function involves the use of exterior penalties. Such methods can start outside the feasible region and then move into it; this is one of their main advantages, because no feasible initial solution is required. According to Coello Coello (2002), the formulation for an exterior penalty function is given as follows:
$$\phi(\bar{x})=\mathrm{f}(\bar{x}) \pm\left[\sum_{i=1}^{n} r_{i} \cdot G_{i}+\sum_{j=1}^{p} c_{j} \cdot L_{j}\right]$$
where $\mathrm{f}(\bar{x})$ is the original objective function and $\phi(\bar{x})$ is the expanded objective function; $G_{i}$ is a function of the inequality constraints, $g_{i}(\bar{x})$, whereas $L_{j}$ is a function of the equality constraints, $h_{j}(\bar{x})$; and finally, $r_{i}$ and $c_{j}$, known as penalty factors, are positive constants. The penalty functions $G_{i}$ and $L_{j}$ must be selected so that the penalization of the objective function $\mathrm{f}(\bar{x})$ is neither too light nor too heavy. In general, the penalty functions can be stated as follows (Yeniay, 2005):
$$\begin{gathered} G_{i}=\max \left[0, g_{i}(\bar{x})\right]^{\beta} \\ L_{j}=\left|h_{j}(\bar{x})\right|^{\gamma} \end{gathered}$$
where $\beta$ and $\gamma$ are usually set to 1 or 2. The penalty factors can be calculated in several ways: keeping them constant throughout the optimization procedure (static penalties), computing them as a function of the number of iterations (dynamic penalties), or using annealing approaches, among others. For a detailed description of the particular penalty methodologies, the reader is referred to the work of Coello Coello (2002).
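A minimal sketch of the exterior-penalty formulation above, with static penalties and $\beta = \gamma = 2$, might look as follows. The problem data (a quadratic objective, one inequality and one equality constraint) and the penalty factors are illustrative choices, not prescribed by the text; plain random search stands in for the stochastic optimizer.

```python
import random

# Illustrative constrained problem:
#   minimize f(x) = (x1 - 1)^2 + (x2 - 1)^2
#   subject to g1(x) = x1 + x2 - 1 <= 0 and h1(x) = x1 - x2 = 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
g = [lambda x: x[0] + x[1] - 1.0]   # inequality constraints, g_i(x) <= 0
h = [lambda x: x[0] - x[1]]         # equality constraints,  h_j(x) = 0
r, c, beta, gamma = 100.0, 100.0, 2, 2  # static penalty factors/exponents

def phi(x):
    """Expanded objective: f(x) plus penalties for constraint violations."""
    G = sum(max(0.0, gi(x)) ** beta for gi in g)  # G_i = max[0, g_i(x)]^beta
    L = sum(abs(hj(x)) ** gamma for hj in h)      # L_j = |h_j(x)|^gamma
    return f(x) + r * G + c * L

# Random search on the now-unconstrained expanded objective; infeasible
# samples are allowed (exterior penalty) but are unlikely to be selected
# because their phi values are inflated.
random.seed(1)
best = min(([random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)]
            for _ in range(20000)), key=phi)
print(best)  # should land near the constrained optimum, around (0.5, 0.5)
```

For this problem the constrained optimum is at (0.5, 0.5): the equality forces $x_1 = x_2 = t$ and the inequality caps $2t \le 1$, so the penalized search is drawn to $t = 0.5$. With a larger static factor the penalized minimum moves closer to the true constrained one, which is the usual trade-off when tuning $r_i$ and $c_j$.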

