### Hardness of Exact Computation


## Hardness of Exact Computation

In this section we provide several hardness results for DOUBLE UPPER EC and UPPER $r$-EC in graph classes such as regular graphs and split graphs.
Theorem 11. Let $G=(V, E)$ be an $(r+1)$-regular graph with $r \geq 2$. Then,
$$\operatorname{uec}_{r}(G)=|E|-\operatorname{eds}(G).$$

Proof. To prove this equation, we first show that if $S \subseteq E$ is an $r$-tec of $G$, then $\bar{S}=E \setminus S$ is an edge dominating set of $G$. Let $\left(V_{1}, V_{2}\right)$ be the partition associated with $S$. By Property 1, $V_{2}$ is an independent set. Because the graph is $(r+1)$-regular, $d_{G_{S}}(v)=r$ for every $v \in V_{1}$ and $d_{G_{S}}(u)=r+1$ for every $u \in V_{2}$. It follows that $\bar{S}$ covers each vertex of $V_{1}$ (each has exactly one incident edge outside $S$) and that every edge of $E$ is incident to a vertex of $V_{1}$ (because $V_{2}$ is an independent set); hence every edge shares an endpoint with an edge of $\bar{S}$, so $\bar{S}$ is an edge dominating set.

Conversely, let $S$ be a solution of EDS. We show that there exists an $r$-tec of size $|\bar{S}|$. Because EDS is equivalent to Lower EM, we may regard the edge set $S$ as a maximal matching. Now let $V^{\prime}=V \setminus V[S]$. Observe that $V^{\prime}$ is an independent set in $G$ and that each vertex $v \in V^{\prime}$ has $d_{G_{\bar{S}}}(v)=r+1$. The first claim holds because if there were an edge $e$ between two vertices of $V^{\prime}$, then $S \cup\{e\}$ would be a matching larger than $S$, contradicting the maximality of $S$. The second holds because, by definition, no edge of $S$ is incident to a vertex of $V^{\prime}$, so all $r+1$ edges incident to such a vertex lie in $\bar{S}$. Finally, because $S$ is a matching, every vertex in $V[S]$ has degree $r$ in $G_{\bar{S}}$, so by Property 1, $\bar{S}$ is an $r$-tec. Hence the equation of Theorem 11 holds.
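The proof above hinges on the equivalence of EDS and Lower EM (minimum maximal matching). As a small sanity check of that equivalence, the following brute-force sketch (ours, not the paper's) enumerates all edge subsets of $K_4$, which is 3-regular, i.e. $(r+1)$-regular with $r=2$:

```python
from itertools import combinations

def edges_of_k4():
    # K4: the complete graph on 4 vertices, 6 edges.
    return [(u, v) for u, v in combinations(range(4), 2)]

def is_edge_dominating(graph_edges, subset):
    # Every edge must share an endpoint with some edge of the subset.
    covered = set()
    for u, v in subset:
        covered.update((u, v))
    return all(u in covered or v in covered for u, v in graph_edges)

def is_matching(subset):
    seen = set()
    for u, v in subset:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def is_maximal_matching(graph_edges, subset):
    # Maximal: no remaining edge can be added while staying a matching.
    if not is_matching(subset):
        return False
    return all(not is_matching(subset + [e]) for e in graph_edges if e not in subset)

E = edges_of_k4()
subsets = [list(c) for k in range(len(E) + 1) for c in combinations(E, k)]
eds = min(len(s) for s in subsets if is_edge_dominating(E, s))
lower_em = min(len(s) for s in subsets if is_maximal_matching(E, s))
print(eds, lower_em, len(E) - eds)  # prints: 2 2 4
```

On $K_4$ both minima are 2, so Theorem 11 predicts $\operatorname{uec}_2(K_4) = 6 - 2 = 4$.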
Corollary 12. UPPER $r$-EC is $\mathbf{N P}$-hard in $(r+1)$-regular bipartite graphs.
Proof. Using the NP-hardness proof of EDS in $r$-regular bipartite graphs given in [15], the result follows from Theorem 11.
Corollary 13. DOUBLE UPPER EC is NP-hard in cubic planar graphs.
Proof. Using the NP-hardness proof of EDS for cubic planar graphs given in [23], the result follows from Theorem 11.

Theorem 14. UPPER $(r+1)$-EC is NP-hard in graphs of maximum degree $\Delta+1$ if UPPER $r$-EC is NP-hard in graphs of maximum degree $\Delta$, and this holds even for bipartite graphs.

Proof. Let $G=(V, E)$ be a bipartite graph of maximum degree $\Delta$. We construct a new graph $G^{\prime}=\left(V^{\prime}, E^{\prime}\right)$ starting from $r+1$ copies of $G$: for each vertex $v \in V$ we add a new vertex $u_{v}$ to $G^{\prime}$ and connect it to each of the $r+1$ copies of $v$.
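The construction can be sketched concretely. The following code (our illustration; the labels for copies and connector vertices are ours) builds $G'$ from a small bipartite graph and checks that the maximum degree rises by exactly one and that bipartiteness is preserved:

```python
from collections import deque

def build_gprime(vertices, edges, r):
    # r+1 disjoint copies of G, plus one connector u_v per original vertex.
    vp, ep = set(), []
    for i in range(r + 1):
        vp.update((i, v) for v in vertices)
        ep.extend(((i, u), (i, v)) for u, v in edges)
    for v in vertices:
        vp.add(("u", v))
        ep.extend((("u", v), (i, v)) for i in range(r + 1))
    return vp, ep

def degrees(vp, ep):
    d = {v: 0 for v in vp}
    for u, v in ep:
        d[u] += 1
        d[v] += 1
    return d

def is_bipartite(vp, ep):
    # Standard BFS 2-coloring over all components.
    adj = {v: [] for v in vp}
    for u, v in ep:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in vp:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in color:
                    color[y] = 1 - color[x]
                    q.append(y)
                elif color[y] == color[x]:
                    return False
    return True

# C4 is bipartite with maximum degree 2; with r = 2 the new graph
# should have maximum degree 3 and remain bipartite.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
vp, ep = build_gprime(V, E, r=2)
d = degrees(vp, ep)
print(max(d.values()), is_bipartite(vp, ep))  # prints: 3 True
```

Each copy of $v$ gains one edge (to $u_v$), so its degree is at most $\Delta+1$, while $u_v$ itself has degree $r+1 \leq \Delta+1$; the connector $u_v$ joins only vertices on one side of the bipartition, so $G'$ stays bipartite.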

## Hardness of Approximation

In the following theorems we provide inapproximability results for UPPER EC and DOUBLE UPPER EC.

Theorem 17. It is NP-hard to approximate UPPER EC to within $\frac{593}{594}$ and $\frac{363}{364}$ in graphs of maximum degree 4 and 5, respectively.

Proof. We use a reduction from Min VC. Starting from an $r$-regular graph $G=(V, E)$, we construct a new graph $G^{\prime}$: first, we attach a $P_{2}$ to each vertex $v \in V$; then, for each edge $e=v u \in E$, we add a new vertex $v_{e}$ adjacent to $v$ and $u$; finally, we remove all the original edges $E$. Since Min VC cannot be approximated to within a factor of $\frac{100}{99}$ (resp. $\frac{53}{52}$) in 3-regular (resp. 4-regular) graphs [11], the claimed bounds follow.
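A minimal sketch of this construction, starting from the 3-regular graph $K_4$. We read "attach a $P_2$ to each vertex $v$" as adding a pendant path $v - a_v - b_v$ on two fresh vertices; this interpretation, and the vertex labels, are our assumptions:

```python
from itertools import combinations

def build_gprime(vertices, edges):
    vp = set(vertices)
    ep = []
    for v in vertices:
        # Pendant path v - a_v - b_v on two new vertices.
        vp.update((("a", v), ("b", v)))
        ep.extend(((v, ("a", v)), (("a", v), ("b", v))))
    for e in edges:
        # One new vertex v_e adjacent to both endpoints of e.
        u, v = e
        vp.add(("e", e))
        ep.extend(((("e", e), u), (("e", e), v)))
    return vp, ep  # the original edges E are dropped

V = list(range(4))
E = [(u, v) for u, v in combinations(V, 2)]  # K4, 3-regular
vp, ep = build_gprime(V, E)
deg = {v: 0 for v in vp}
for u, v in ep:
    deg[u] += 1
    deg[v] += 1
print(max(deg.values()))  # prints: 4
```

An original vertex keeps one edge per incident $v_e$ plus one edge into its pendant path, giving degree $r+1$: maximum degree 4 from a 3-regular graph, matching the theorem statement (and 5 from a 4-regular one).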

Theorem 18. It is NP-hard to approximate DOUBLE UPPER EC to within $\frac{883}{884}$ in graphs of maximum degree 6.

Proof. To obtain the inapproximability result, we first reduce Min VC to UPPER EC as in Theorem 17, and then reduce the latter to DOUBLE UPPER EC using the reduction of Theorem 14.

Theorem 19. It is NP-hard to approximate DOUBLE UPPER EC to within $\frac{571}{572}$ in graphs of maximum degree 9.

Proof. Again, we reduce from Vertex Cover. Starting from a 4-regular graph $G=(V, E)$, we construct a new graph by adding a set of new vertices $V_{E}$ containing one vertex $v_{e}$ for each edge $e \in E$, and then adding a new edge $v_{e} u$ whenever the edge $e$ is incident to $u$ in the original graph $G$. Let $G^{\prime}=\left(V^{\prime}, E^{\prime}\right)$ be the new graph. It is easy to see that $\left|V^{\prime}\right|=|V|+|E|$ and $\Delta\left(G^{\prime}\right)=2 \Delta(G)=8$. Furthermore, we can show that from any $VC$ of $G$ we can construct a tec of $G^{\prime}$ of size $|TEC|=|E|+|V|-|VC|$ and, conversely, from any tec of $G^{\prime}$ we can construct a $VC$ of $G$ of size $|VC|=|E|+|V|-|TEC|$. For the first direction, start from a $VC$ of $G$ and let $S$ be the set of all edges $v_{e} u$ with $u \in VC$. $S$ is a partial tec of $G^{\prime}$ because it covers only the vertices in $VC \cup V_{E}$, every vertex of $V_{E}$ has degree one in $G^{\prime}_{S}$, and the vertices of $VC$ are independent in $G^{\prime}_{S}$. It is easy to extend $S$ to a tec of $G^{\prime}$ by adding, for every vertex $v \in V \setminus VC$, one edge joining $v$ to a vertex of $VC$ (such a neighbor exists because $v \in V$ and $VC$ is a vertex cover of $G$). The extended $S$ is a tec because the only vertices that may have degree greater than one all lie in $VC$, which is independent in $G^{\prime}_{S}$, and all vertices are covered. Now observe that this tec contains exactly one edge for each vertex in $V_{E}$ and one for each vertex in $V \setminus VC$, so its size is exactly
$$|TEC|=|E|+|V|-|VC|.$$
Conversely, we start from a tec of $G^{\prime}$ and construct a vertex cover of $G$ of the desired size. First we show that for any tec $S$ of $G^{\prime}$, if there exists $v_{e} \in V_{E}$ such that $d_{G^{\prime}_{S}}\left(v_{e}\right)=2$ (it cannot be greater, because $d_{G^{\prime}}\left(v_{e}\right)=2$), then there exists another tec $S^{\prime}$ of $G^{\prime}$ of the same size in which every vertex $v_{e} \in V_{E}$ has degree $d_{G^{\prime}_{S^{\prime}}}\left(v_{e}\right)=1$. This is easy to prove by repeating the following step:
If there exists $e=u v \in E$ such that $d_{G^{\prime}_{S^{\prime}}}\left(v_{e}\right)=2$, then set $S^{\prime}:=S^{\prime}+v_{e} u-v_{e} v$.
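The counting in this proof can be illustrated on the 4-regular graph $K_5$. The sketch below (ours, not the paper's code) builds $G'$, takes a minimum vertex cover of $K_5$, and constructs the edge set $S$ of the proof; picking one covered endpoint per edge is our reading of the definition of $S$:

```python
from itertools import combinations

V = list(range(5))
E = [(u, v) for u, v in combinations(V, 2)]  # K5, 4-regular, 10 edges

# G': original vertices and edges, plus one v_e per edge joined to
# both endpoints of e.
vertices = set(V) | {("e", e) for e in E}
edges = list(E) + [(("e", e), u) for e in E for u in e]

deg = {v: 0 for v in vertices}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

VC = {0, 1, 2, 3}  # a minimum vertex cover of K5
S = []
for e in E:
    u, v = e
    w = u if u in VC else v          # one covered endpoint per edge
    S.append((("e", e), w))
for v in V:
    if v not in VC:                  # cover the remaining vertices
        u = next(x for x in VC if (min(x, v), max(x, v)) in E)
        S.append((v, u))

print(len(vertices), max(deg.values()), len(S))  # prints: 15 8 11
```

The checks match the proof: $|V'| = |V| + |E| = 15$, $\Delta(G') = 2\Delta(G) = 8$, and $|S| = |E| + |V| - |VC| = 10 + 5 - 4 = 11$.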

## Using Apollonius Circle

Typically, supervised learning methods are useful when there is enough labeled data, but in many real-world tasks unlabeled data is also available. Furthermore, in practice, labeling is an expensive and time-consuming task, because it requires human expertise and effort [14]. Therefore, finding an approach that can employ both labeled and unlabeled data to construct a proper model is crucial. Such a learning approach is called semi-supervised learning. In semi-supervised learning algorithms, we use labeled data as well as unlabeled data. The main goal of semi-supervised learning is to employ unlabeled instances and combine the information in the unlabeled data with the explicit classification information of the labeled data to improve classification performance. The main challenge

in semi-supervised learning is how to extract knowledge from the unlabeled data $[4,21,30]$. Several different algorithms for semi-supervised learning have been introduced, such as the Expectation Maximization (EM) based algorithms $[11,13,15]$, self-training $[7,9,23]$, co-training $[19,27]$, Transduction Support Vector Machine (TSVM) $[1,12,26]$, Semi-Supervised SVM (S3VM) [3], graph-based methods $[2,28]$, and boosting based semi-supervised learning methods $[5,16,24]$.
Most semi-supervised algorithms follow one of two main approaches: extending a specific base learner to learn from labeled and unlabeled examples, or using a framework that learns from both labeled and unlabeled data regardless of the base learner used. Examples of the first approach include S3VM, TSVM, and LapSVM. Wrapper-based and boosting-based methods follow the second approach, such as Co-training, SemiBoost [16], MSAB [24], and MSSBoost [22]. In this article, we focus on both approaches and propose a novel semi-supervised approach based on the SVM base learner.

The support vector machine was proposed by Cortes and Vapnik [6] and is one of the most promising base learners in many practical domains, such as object detection and document and web-page categorization. It is a supervised learning method based on the margin as well as on statistical learning theory [8]. The purpose of the support vector machine algorithm is to find an optimal hyperplane in an $N$-dimensional space that classifies the data points. As shown in Fig. 1, there are many possible hyperplanes that could be selected. The optimal classification hyperplane of SVM must not only separate the data points correctly but also maximize the margin [8]. Maximizing the margin leads to a robust classification model. The standard form of SVM is used only for supervised learning tasks; this base learner cannot directly handle semi-supervised classification. There are several extensions of SVM, such as S3VM and TSVM, which use the unlabeled data to regularize the decision boundary. These methods mainly extend the SVM base classifier to semi-supervised learning and are computationally expensive [16]; therefore, they are suitable only for small datasets. More recently, semi-supervised self-training has been used to handle semi-supervised classification tasks [7,23,29]. Self-training is a wrapper-based algorithm that repeatedly applies a supervised learning method. It starts by training on the labeled data only. At each step, a set of unlabeled points is labeled according to the current decision function; the supervised method is then retrained using its own predictions as additional labeled points [23]. However, the performance of this method depends on correctly predicting the labels of the unlabeled data: selecting incorrect predictions propagates errors into subsequent classifications. In general, there are two main challenges.
The first is to select an appropriate candidate set of unlabeled examples to label at each iteration of the training procedure. The second is to predict the labels of those unlabeled examples correctly. To handle these issues, recent studies tend to find a set of high-confidence predictions at each iteration $[7,23,29]$. These selected examples are typically far away from the decision boundary; hence, this type of algorithm cannot effectively exploit information from the unlabeled data, and the final decision boundary will be very close to that of the initial classifier [22].
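The self-training loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: we substitute a nearest-centroid classifier for SVM as the base learner, and the toy data and confidence threshold are our assumptions:

```python
import math

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fit(labeled):
    # One centroid per class: a stand-in for the supervised base learner.
    return {c: centroid([x for x, y in labeled if y == c])
            for c in {y for _, y in labeled}}

def predict_with_confidence(model, x):
    d = sorted((dist(x, c), label) for label, c in model.items())
    # Confidence = gap between the two nearest class centroids.
    return d[0][1], d[1][0] - d[0][0]

def self_train(labeled, unlabeled, threshold=1.0, max_iter=10):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(max_iter):
        model = fit(labeled)
        confident = []
        for x in unlabeled:
            y, conf = predict_with_confidence(model, x)
            if conf >= threshold:          # keep only high-confidence labels
                confident.append((x, y))
        if not confident:
            break
        labeled.extend(confident)          # retrain on own predictions
        taken = {p for p, _ in confident}
        unlabeled = [x for x in unlabeled if x not in taken]
    return fit(labeled)

labeled = [((0.0, 0.0), "a"), ((4.0, 4.0), "b")]
unlabeled = [(0.5, 0.5), (3.5, 3.5), (1.0, 0.0), (4.0, 3.0)]
model = self_train(labeled, unlabeled)
print(predict_with_confidence(model, (0.2, 0.1))[0])  # prints: a
```

The loop also exhibits the weakness noted above: only points far from the decision boundary clear the confidence threshold, so the final centroids stay close to those of the initial classifier.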
