## Australia Assignment Help｜PHYC30018｜Quantum Physics, University of Melbourne

statistics-lab™ provides assignment help, exam support, and tutoring for Quantum Physics (PHYC30018) at the University of Melbourne (UniMelb)!

Quantum mechanics plays a central role in our understanding of fundamental phenomena, primarily in the microscopic domain. It lays the foundation for an understanding of atomic, molecular, condensed matter, nuclear and particle physics.

## Quantum Physics Problem Set

(a) Suppose that the resistivity matrix is given by the classical result
$$\rho=\left(\begin{array}{cc} \rho_0 & -\rho_H \\ \rho_H & \rho_0 \end{array}\right)$$
where $\rho_H=B/(nec)$ is the Hall resistivity and $\rho_0$ is the usual Ohmic resistivity. Find the conductivity matrix, $\sigma=\rho^{-1}$. Write it in the form:
$$\sigma=\left(\begin{array}{cc} \sigma_0 & \sigma_H \\ -\sigma_H & \sigma_0 \end{array}\right) .$$
What are $\sigma_0$ and $\sigma_H$ ?
(b) Suppose $B=0$, so the Hall resistivity is zero. Notice that the Ohmic conductivity, $\sigma_0$, is just $1 / \rho_0$. In particular, note that $\sigma_0 \rightarrow \infty$ as $\rho_0 \rightarrow 0$. Now suppose $\rho_H \neq 0$. Show that $\sigma_0 \rightarrow 0$ as $\rho_0 \rightarrow 0$, so it is possible to have both $\sigma_0$ and $\rho_0$ equal to zero.
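A quick numeric cross-check of part (a) is possible by inverting the resistivity matrix directly; the closed-form entries used below, $\sigma_0=\rho_0/(\rho_0^2+\rho_H^2)$ and $\sigma_H=\rho_H/(\rho_0^2+\rho_H^2)$, are what the inversion should reproduce (a sketch, not the graded solution):

```python
import numpy as np

# Hedged numeric check (illustration only): invert the classical resistivity
# matrix and compare against the closed-form entries
#   sigma_0 = rho_0 / (rho_0^2 + rho_H^2),  sigma_H = rho_H / (rho_0^2 + rho_H^2).
rho_0, rho_H = 0.7, 1.3   # arbitrary illustrative values
rho = np.array([[rho_0, -rho_H],
                [rho_H,  rho_0]])
sigma = np.linalg.inv(rho)

denom = rho_0**2 + rho_H**2
sigma_0 = rho_0 / denom
sigma_H = rho_H / denom
expected = np.array([[ sigma_0, sigma_H],
                     [-sigma_H, sigma_0]])
assert np.allclose(sigma, expected)
# For part (b): with rho_H fixed, sigma_0 ~ rho_0 / rho_H**2 -> 0 as rho_0 -> 0.
```

Note how the same limit $\rho_0 \rightarrow 0$ sends $\sigma_0$ to zero precisely because $\rho_H$ stays in the denominator.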

This problem asks you to give a complete presentation of a calculation that is almost the same as one you saw in lecture.

Consider a constant electric field, $\vec{E}=\left(0, E_0, 0\right)$ and a constant magnetic field, $\vec{B}=\left(0,0, B_0\right)$.
(a) Choose an electrostatic potential $\phi$ and a vector potential $\vec{A}$ which describe the $\vec{E}$ and $\vec{B}$ fields, and write the Hamiltonian for a charged particle of mass $m$ and charge $q$ in these fields. Assume that the particle is restricted to move in the $x y$-plane.
(b) What are the allowed energies as a function of $B_0$ and $E_0$ ? Draw a figure to show how the Landau levels (energy levels when $E_0=0$ ) change as $E_0$ increases.

You will see the “standard presentation” of the Aharonov-Bohm effect in lecture, on the day that this problem set is due. The standard presentation has its advantages, and in particular is more general than the presentation you will work through in this problem. However, students often come away from the standard presentation of the Aharonov-Bohm effect thinking that the only way to detect this effect is to do an interference experiment. This is not true, and the purpose of this problem is to disabuse you of this misimpression before you form it.

As Griffiths explains on pages 385-387 (344-345 in 1st Ed.), the Aharonov-Bohm effect modifies the energy eigenvalues of suitably chosen quantum mechanical systems. In this problem, I ask you to work through the same physical example that Griffiths uses, but in a different fashion which makes more use of gauge invariance.

Imagine a particle constrained to move on a circle of radius $b$ (a bead on a wire ring, if you like). Along the axis of the circle runs a solenoid of radius $a<b$, carrying a magnetic field $\vec{B}=\left(0,0, B_0\right)$. The field inside the solenoid is uniform and the field outside the solenoid is zero. The setup is depicted in Griffiths’ Fig. 10.10 (10.12 in 1st Ed.).
(a) Construct a vector potential $\vec{A}$ which describes the magnetic field (both inside and outside the solenoid) and which has the form $A_r=A_z=0$ and $A_\phi=\alpha(r)$ for some function $\alpha(r)$. I am using cylindrical coordinates $z, r$, $\phi$.
(b) Since $\vec{\nabla} \times \vec{A}=0$ for $r>a$, it must be possible to write $\vec{A}=\vec{\nabla} f$ in any simply connected region in $r>a$. [This is a theorem in vector calculus.] Show that if we find such an $f$ in the region
$$r>a \text { and }-\pi+\epsilon<\phi<\pi-\epsilon,$$
then
$$f(r, \pi-\epsilon)-f(r,-\pi+\epsilon) \rightarrow \Phi \text { as } \epsilon \rightarrow 0 .$$
Here, the total magnetic flux is $\Phi=\pi a^2 B_0$. Now find an explicit form for $f$, which is a function only of $\phi$.
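As a sketch (assuming the part (a) result that $A_\phi=\Phi/2\pi r$ outside the solenoid, so that $\oint \vec{A}\cdot d\vec{l}=\Phi$), one candidate $f$ and the check that $\vec{\nabla}f=\vec{A}$ in cylindrical coordinates is:

$$A_\phi(r)=\frac{\Phi}{2\pi r} \quad (r>a), \qquad f(\phi)=\frac{\Phi\,\phi}{2\pi} \;\Longrightarrow\; (\vec{\nabla} f)_\phi=\frac{1}{r}\,\frac{\partial f}{\partial \phi}=\frac{\Phi}{2\pi r}=A_\phi ,$$

and indeed $f(r, \pi-\epsilon)-f(r,-\pi+\epsilon)=\Phi\left(1-\epsilon/\pi\right) \rightarrow \Phi$ as $\epsilon \rightarrow 0$.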
(c) Now consider the motion of a “bead on a ring”: write the Schrödinger equation for the particle constrained to move on the circle $r=b$, using the $\vec{A}$ you found in (a). Hint: the answer is given in Griffiths.
(d) Use the $f(\phi)$ found in (b) to gauge transform the Schrödinger equation for $\psi(\phi)$ within the angular region $-\pi+\epsilon<\phi<\pi-\epsilon$ to a Schrödinger equation for a free particle within this angular region. Call the original wave function $\psi(\phi)$ and the gauge-transformed wave function $\psi^{\prime}(\phi)$.
(e) The original wave function $\psi$ must be single-valued for all $\phi$, in particular at $\phi=\pi$. That is, $\psi(\pi-\epsilon)-\psi(-\pi+\epsilon) \rightarrow 0$ and $\frac{\partial \psi}{\partial \phi}(\pi-\epsilon)-\frac{\partial \psi}{\partial \phi}(-\pi+\epsilon) \rightarrow 0$ as $\epsilon \rightarrow 0$. What does this say about the gauge-transformed wave function? I.e., how must $\psi^{\prime}(\pi-\epsilon)$ and $\psi^{\prime}(-\pi+\epsilon)$ be related as $\epsilon \rightarrow 0$ ?
[Hint: because the $f(\phi)$ is not single valued at $\phi=\pi$, the gauge transformed wave function $\psi^{\prime}(\phi)$ is not single valued there either.]
(f) Solve the Schrödinger equation for $\psi^{\prime}$, and find energy eigenstates which satisfy the boundary conditions you derived in (e). What are the allowed energy eigenvalues?
(g) Undo the gauge transformation, and find the energy eigenstates $\psi(\phi)$ in the original gauge. Do the energy eigenvalues in the two gauges differ?
(h) Plot the energy eigenvalues as a function of the enclosed flux, $\Phi$. Show that the energy eigenvalues are periodic functions of $\Phi$ with period $\Phi_0$, where you must determine $\Phi_0$. For what values of $\Phi$ does the enclosed magnetic field have no effect on the spectrum of a particle on a ring? Show that the Aharonov-Bohm effect can only be used to determine the fractional part of $\Phi / \Phi_0$.
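The periodicity claim can be checked numerically in dimensionless form; the quadratic spectrum $E_n \propto (n-\Phi/\Phi_0)^2$ assumed below is the standard bead-on-a-ring result (compare Griffiths), used here purely for illustration:

```python
import numpy as np

# Dimensionless sketch: the allowed energies take the form E_n ∝ (n - Φ/Φ0)^2
# with integer n (compare Griffiths' result); the *set* of eigenvalues then
# depends only on the fractional part of Φ/Φ0.
def spectrum(flux_ratio, n_max=20):
    """Sorted low-lying eigenvalues (n - Φ/Φ0)^2 for n = -n_max..n_max."""
    n = np.arange(-n_max, n_max + 1)
    return np.sort((n - flux_ratio) ** 2)

# Shifting the flux by one flux quantum leaves the low-lying spectrum unchanged.
assert np.allclose(spectrum(0.3)[:10], spectrum(1.3)[:10])
# Integer Φ/Φ0 reproduces the zero-flux spectrum: the field has no effect there.
assert np.allclose(spectrum(0.0)[:10], spectrum(2.0)[:10])
```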
[Note: you have shown that even though the bead on a ring is everywhere in a region in which $\vec{B}=0$, the presence of a nonzero $\vec{A}$ affects the energy eigenvalue spectrum. However, the effect on the energy eigenvalues is determined by $\Phi$, and is therefore gauge invariant. To confirm the gauge invariance of your result, you can compare your answer for the energy eigenvalues to Griffiths’ result, obtained using a different gauge.]

## Finite Element Method Assignment Help

statistics-lab, as a professional service provider for international students, has for many years offered academic services to students in popular study destinations such as the United States, the United Kingdom, Canada, and Australia, including but not limited to essay writing, assignments, dissertations, reports, group projects, proposals, papers, presentations, programming assignments, proofreading and polishing, online course management, and exam support. Our coverage spans every stage of overseas study, from high school through undergraduate to graduate level, and reaches 99% of subjects worldwide, including finance, economics, accounting, auditing, and management. The writing team includes both professional native English writers and master's and doctoral students from top universities abroad; every writer has solid language skills, a professional subject background, and academic writing experience. We promise 100% originality, 100% professionalism, 100% punctuality, and 100% satisfaction.

## MATLAB Assignment Help

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: mathematical and computational algorithm development; modelling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including the construction of graphical user interfaces. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This lets you solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar, non-interactive language such as C or Fortran.

The name MATLAB stands for Matrix Laboratory. MATLAB was originally written to provide easy access to the matrix software developed by the LINPACK and EISPACK projects, which together represented the state of the art in matrix computation software. MATLAB has evolved over many years with input from many users. In university settings it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for highly productive research, development, and analysis.

MATLAB features a family of application-specific solutions called toolboxes. Very important to most MATLAB users, toolboxes let you learn and apply specialized techniques. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and more.

## Australia Assignment Help｜COMP30027｜Machine Learning, University of Melbourne

statistics-lab™ provides assignment help, exam support, and tutoring for Machine Learning (COMP30027) at the University of Melbourne (UniMelb)!

Machine Learning, a core discipline in data science, is prevalent across Science, Technology, the Social Sciences, and Medicine; it drives many of the products we use daily, such as banner ad selection, email spam filtering, and social media newsfeeds. Machine Learning is concerned with making accurate, computationally efficient, interpretable, and robust inferences from data. Originally born out of Artificial Intelligence, Machine Learning was historically the first to explore more complex prediction models and to emphasise computation, while over the past two decades it has grown closer to Statistics, gaining a firm theoretical footing.

## Machine Learning Problem Set

The data and scripts for this problem are available in hw2/prob1. You can load the data using the MATLAB script load_al_data. This script should load the matrices y_noisy, y_true, and X_in. The $y$ vectors are $n \times 1$, while X_in is an $n \times 3$ matrix with each row corresponding to a point in $\mathcal{R}^3$. The $y_{\text{true}}$ vector contains the ideal $y$ values, generated directly from the “true” model (whatever it may be) without any noise. In contrast, the $y_{\text{noisy}}$ vector contains the actual, noisy observations, generated by adding Gaussian noise to $y_{\text{true}}$. You should use $y_{\text{noisy}}$ for any estimation; $y_{\text{true}}$ is provided only to make it easier to evaluate the error in your predictions (simulating an infinite test set). You would not have $y_{\text{true}}$ in any real task.
(a) Write MATLAB functions theta = linear_regress(y, X) and y_hat = linear_pred(theta, X_test). Note that we are not explicitly including the offset parameter but instead rely on the feature vectors to provide a constant component. See part (b).
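A hedged Python analogue of what the two requested MATLAB functions compute (ordinary least squares; the names mirror the problem statement, and the constant component is assumed to come from the feature mapping, as part (b) describes):

```python
import numpy as np

# Hedged sketch of the requested functions, in Python rather than MATLAB.
def linear_regress(y, X):
    # Least-squares estimate: theta minimises ||y - X @ theta||^2.
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def linear_pred(theta, X_test):
    return X_test @ theta

# Tiny synthetic check with a noise-free linear model.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta
theta = linear_regress(y, X)
assert np.allclose(theta, true_theta)
assert np.allclose(linear_pred(theta, X), y)
```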
(b) The feature mapping can substantially affect the regression results. We will consider two possible feature mappings:
$$\begin{aligned} & \phi_1\left(x_1, x_2, x_3\right)=\left[1, x_1, x_2, x_3\right]^T \\ & \phi_2\left(x_1, x_2, x_3\right)=\left[1, \log x_1^2, \log x_2^2, \log x_3^2\right]^T \end{aligned}$$
Use the provided MATLAB function feature_mapping to transform the input data matrix into a matrix
$$X=\left[\begin{array}{c} \phi\left(\mathbf{x}_1\right)^T \\ \phi\left(\mathbf{x}_2\right)^T \\ \vdots \\ \phi\left(\mathbf{x}_n\right)^T \end{array}\right]$$
For example, X = feature_mapping(X_in, 1) would get you the first feature representation. Using your completed linear regression functions, compute the mean squared prediction error for each feature mapping (2 numbers).

(c) The selection of points to query in an active learning framework might depend on the feature representation. We will use the same selection criterion as in the lectures: the expected squared error in the parameters, proportional to $\operatorname{Tr}\left[\left(X^T X\right)^{-1}\right]$. Write a MATLAB function idx = active_learn(X, k1, k2). Your function should assume that the top $k_1$ rows in $X$ have already been queried, and your goal is to sequentially find the indices of the next $k_2$ points to query. The final set of $k_1+k_2$ indices should be returned in idx; the latter may contain repeated entries. For each feature mapping, with $k_1=5$ and $k_2=10$, compute the set of points that should be queried (i.e., X(idx, :), since each point is a row). For each set of points, use the feature mapping $\phi_2$ to perform regression and compute the resulting mean squared prediction errors (MSE) over the entire data set (again, using $\phi_2$).
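A hedged Python sketch of the greedy criterion described above, repeatedly adding the candidate row that minimises $\operatorname{Tr}[(X^TX)^{-1}]$ of the selected design; the small ridge term `eps` is an extra assumption, added only to keep $X^TX$ invertible in early iterations:

```python
import numpy as np

# Greedy sequential selection sketch: start from the first k1 rows, then add
# (repetition allowed) the row that minimises Tr[(X_sel^T X_sel)^{-1}].
def active_learn(X, k1, k2, eps=1e-8):
    idx = list(range(k1))
    d = X.shape[1]
    for _ in range(k2):
        best_j, best_score = None, np.inf
        for j in range(X.shape[0]):
            rows = X[idx + [j]]
            score = np.trace(np.linalg.inv(rows.T @ rows + eps * np.eye(d)))
            if score < best_score:
                best_j, best_score = j, score
        idx.append(best_j)
    return idx

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
idx = active_learn(X, k1=5, k2=10)
assert len(idx) == 15 and all(0 <= j < 30 for j in idx)
```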

(d) Let us repeat the steps of part (c) with randomly selected additional points to query. We have provided a MATLAB function idx = randomly_select(X, k1, k2), which is essentially the same as active_learn except that it selects the $k_2$ points uniformly at random from $X$. Repeat the regression steps of the previous part and compute the resulting mean squared prediction error again. To get a reasonable comparison you should repeat this process 50 times and use the median MSE. Compare the resulting errors with the active learning strategies. What conclusions can you draw?

(e) Let us now compare the two sets of points chosen by active learning under the different feature representations. We have provided a function plot_points(X, idx_r, idx_b) which will plot each row of $X$ as a point in $\mathbf{R}^3$. The points indexed by idx_r will be circled in red and those indexed by idx_b will be circled (larger) in blue (some of the points indexed by idx_r and idx_b might be common). Plot the original data points using the indices of the actively selected points based on the two feature representations. Also plot the same indices using the $X$ from the second feature representation with its first constant column removed. In class, we saw an example where the active learning strategy chose points at the extrema of the available space. Can you see evidence of this in the two plots?


## Australia Assignment Help｜MAST30021｜Complex Analysis, University of Melbourne

statistics-lab™ provides assignment help, exam support, and tutoring for Complex Analysis (MAST30021) at the University of Melbourne (UniMelb)!

Complex analysis is a core subject in pure and applied mathematics, as well as the physical and engineering sciences. While it is true that physical phenomena are given in terms of real numbers and real variables, it is often too difficult, and sometimes not possible, to solve the algebraic and differential equations used to model these phenomena without introducing complex numbers and complex variables and applying the powerful techniques of complex analysis.

Topics include: the topology of the complex plane; convergence of complex sequences and series; holomorphic functions, the Cauchy-Riemann equations, harmonic functions and applications; contour integrals and the Cauchy Integral Theorem; singularities, Laurent series, the Residue Theorem, evaluation of integrals using contour integration, conformal mapping; and aspects of the gamma function.

## Complex Analysis Problem Set

By writing $z$ in the form $z=a+b \mathrm{i}$, find all solutions $z$ of the following equations:
(i) $z^2=-5+12 \mathrm{i}$
(ii) $z^2=2+\mathrm{i}$
(iii) $(7+24 \mathrm{i}) z=375$
(iv) $z^2-(3+\mathrm{i}) z+(2+2 \mathrm{i})=0$
(v) $z^2-3 z+1+\mathrm{i}=0$
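Candidate answers for (i) and (iii) can be checked by direct substitution with Python's complex type (the values below are worked out by hand first; the exercise still expects the $z=a+b\mathrm{i}$ derivation):

```python
# Direct-substitution check of two candidate answers (illustration only).
z1 = 2 + 3j                      # part (i): z = ±(2 + 3i)
assert abs(z1 ** 2 - (-5 + 12j)) < 1e-12

z3 = 375 / (7 + 24j)             # part (iii): multiply through by the conjugate
assert abs(z3 - (4.2 - 14.4j)) < 1e-9
```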

If $\lambda$ is a positive real number, show that
$$\{z \in \mathbb{C}:|z|=\lambda|z-1|\}$$
is a circle, unless $\lambda$ takes one particular value (which one?).
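Substituting $z=x+\mathrm{i}y$ and squaring both sides sketches the answer:

$$|z|^2=\lambda^2|z-1|^2 \;\Longrightarrow\; x^2+y^2=\lambda^2\left[(x-1)^2+y^2\right] \;\Longrightarrow\; (1-\lambda^2)\left(x^2+y^2\right)+2\lambda^2 x-\lambda^2=0 .$$

For $\lambda \neq 1$ one may divide by $1-\lambda^2$ and complete the square, giving a circle; for the particular value $\lambda=1$ the quadratic terms cancel and the locus degenerates to the vertical line $x=\tfrac{1}{2}$.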

Draw the set of points
$$\{z \in \mathbb{C}: \operatorname{re}(z+1)=|z-1|\}$$
by substituting $z=x+\mathrm{i} y$ and computing the real equation relating $x$ and $y$.
Now note that $\operatorname{re}(z+1)$ is the distance from $z$ to the vertical line $x=-1$, and $|z-1|$ is the distance between $z$ and 1. Compare with the classical ‘focus-directrix’ definition of a parabola: the locus of a point equidistant from a fixed line (here $x=-1$) and a fixed point (here $(x, y)=(1,0)$).

Let $r, s, \theta, \phi$ be real. Let
$$\begin{aligned} z & =r(\cos \theta+\mathrm{i} \sin \theta) \\ w & =s(\cos \phi+\mathrm{i} \sin \phi) \end{aligned}$$
Form the product $z w$ and use the standard formulas for $\cos (\theta+\phi), \sin (\theta+\phi)$ to show that $\arg (z w)=\arg (z)+\arg (w)$ (for any values of $\arg$ on the right, and some value of arg on the left).
By induction on $n$, derive De Moivre’s Theorem
$$(\cos \theta+\mathrm{i} \sin \theta)^n=\cos n \theta+\mathrm{i} \sin n \theta$$
for all natural numbers $n$.
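A quick numeric sanity check of De Moivre's theorem for several $n$ and $\theta$ (illustrative only; the problem asks for a proof by induction):

```python
import math

# Compare (cos θ + i sin θ)^n against cos nθ + i sin nθ for a few cases.
for n in range(1, 8):
    for theta in (0.3, 1.1, 2.5):
        lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
        rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
        assert abs(lhs - rhs) < 1e-12
```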


## Australia Assignment Help｜MAST90125｜Bayesian Statistical Learning, University of Melbourne

statistics-lab™ provides assignment help, exam support, and tutoring for Bayesian Statistical Learning (MAST90125) at the University of Melbourne (UniMelb)!

Bayesian inference treats all unknowns as random variables, and the core task is to update the probability distribution for each unknown as new data is observed. After introducing Bayes’ Theorem to transform prior probabilities into posterior probabilities, the first part of this subject introduces theory and methodological aspects underlying Bayesian statistical learning including credible regions, comparisons of means and proportions, multi-model inference and model selection. The second part of the subject focuses on advanced supervised and unsupervised Bayesian machine learning methods in the context of Gaussian processes and Dirichlet processes. The subject will also cover practical implementations of Bayesian methods through Markov Chain Monte Carlo computing and real data applications.

## Bayesian Statistical Learning Problem Set

Example 3: Under the condition $C$ that a die is symmetrical, the probability of throwing a two or a three is $2 / 6=1 / 3$ according to the classical definition (2.24) of probability.
$\Delta$
Example 4: A card is taken from a deck of 52 cards under the condition $C$ that no card is marked. What is the probability that it will be an ace or a diamond? If $A$ denotes the statement of drawing a diamond and $B$ the one of drawing an ace, $P(A \mid C)=13 / 52$ and $P(B \mid C)=4 / 52$ follow from (2.24). The probability of drawing the ace of diamonds is $P(A B \mid C)=1 / 52$. Using (2.17) the probability of an ace or diamond is then $P(A+B \mid C)=$ $13 / 52+4 / 52-1 / 52=4 / 13$.
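Example 4 can be replayed in exact arithmetic (a cross-check only; (2.17) is the inclusion-exclusion rule used above):

```python
from fractions import Fraction

# P(A + B | C) = P(A|C) + P(B|C) - P(AB|C), evaluated exactly.
p_diamond = Fraction(13, 52)
p_ace = Fraction(4, 52)
p_ace_of_diamonds = Fraction(1, 52)
p_union = p_diamond + p_ace - p_ace_of_diamonds
assert p_union == Fraction(4, 13)
```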

Example 5: Let the condition $C$ be true that an urn contains 15 red and 5 black balls of equal size and weight. Two balls are drawn without being replaced. What is the probability that the first ball is red and the second one black? Let $A$ be the statement to draw a red ball and $B$ the statement to draw a black one. With $(2.24)$ we obtain $P(A \mid C)=15 / 20=3 / 4$. The probability $P(B \mid A C)$ of drawing a black ball under the condition that a red one has been drawn is $P(B \mid A C)=5 / 19$ according to (2.24). The probability of drawing without replacement a red ball and then a black one is therefore $P(A B \mid C)=(3 / 4)(5 / 19)=15 / 76$ according to the product rule (2.12).
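Example 5 can likewise be checked exactly via the product rule, with a small Monte Carlo cross-check of drawing without replacement added purely as an illustration:

```python
import random
from fractions import Fraction

# Exact value from the product rule (2.12): P(AB|C) = P(A|C) P(B|AC).
p_exact = Fraction(15, 20) * Fraction(5, 19)
assert p_exact == Fraction(15, 76)

# Monte Carlo cross-check: shuffle the urn, look at the first two draws.
random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    urn = ["r"] * 15 + ["b"] * 5
    random.shuffle(urn)
    if urn[0] == "r" and urn[1] == "b":
        hits += 1
assert abs(hits / trials - 15 / 76) < 0.01
```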

Example 6: The grey value $g$ of a picture element, also called pixel, of a digital image takes on the values $0 \leq g \leq 255$. If 100 pixels of a digital image with $512 \times 512$ pixels have the gray value $g=0$, then the relative frequency of this value equals $100 / 512^2$ according to (2.24). The distribution of the relative frequencies of the gray values $g=0, g=1, \ldots, g=255$ is called a histogram.
$\Delta$

Axioms of Probability
Probabilities of random events are introduced by axioms in the probability theory of traditional statistics; see for instance Koch (1999, p. 78). Starting from the set $S$ of elementary events of a random experiment, a special system $Z$ of subsets of $S$, known as a $\sigma$-algebra, is introduced to define the random events. $Z$ contains as elements subsets of $S$ and, in addition, the empty set and the set $S$ itself. $Z$ is closed under complements and countable unions. Let $A$ with $A \in Z$ be a random event; then the following axioms are presupposed.

Axiom 1: A real number $P(A) \geq 0$ is assigned to every event $A$ of $Z$. $P(A)$ is called the probability of $A$.
Axiom 2: The probability of the sure event is equal to one, $P(S)=1$.
Axiom 3: If $A_1, A_2, \ldots$ is a sequence of a finite or infinite but countable number of events of $Z$ which are mutually exclusive, that is $A_i \cap A_j=\emptyset$ for $i \neq j$, then
$$P\left(A_1 \cup A_2 \cup \ldots\right)=P\left(A_1\right)+P\left(A_2\right)+\ldots .$$
The axioms introduce the probability as a measure for the sets which are the elements of the system $Z$ of random events. Since $Z$ is a $\sigma$-algebra, it may contain a finite or infinite number of elements, whereas the rules given in Chapter 2.1.4 and 2.1.5 are valid only for a finite number of statements.
If the system $Z$ of random events contains a finite number of elements, the $\sigma$-algebra becomes a set algebra and therefore a Boolean algebra, as already mentioned at the end of Chapter 2.1.2. Axiom 1 is then equivalent to requirement 1 of Chapter 2.1.4, which was formulated with respect to the plausibility. Axiom 2 is identical with (2.13) and Axiom 3 with (2.21), if the condition $C$ in (2.13) and (2.21) is not considered. We may proceed to an infinite number of statements if a well-defined limiting process exists. This is a limitation of the generality, but it is compensated by the fact that the probabilities (2.12) to (2.14) have been derived as rules by consistent and logical reasoning. This is of particular interest for the product rule (2.12). It is equivalent in the form
$$P(A \mid B C)=\frac{P(A B \mid C)}{P(B \mid C)} \text { with } \quad P(B \mid C)>0,$$
if the condition $C$ is not considered, to the definition of the conditional probability of traditional statistics. This definition is often interpreted in terms of relative frequencies, which, in contrast to a derivation, is less transparent.


## Australia Assignment Help｜Stochastic Processes 2023

statistics-lab™ has long been committed to online course services for international students, covering online subjects such as Finance, Economics, Mathematics, Accounting, Literature, and the Arts. Besides full online-course management, statistics-lab™ also takes on individual course tasks. Whatever difficulty you encounter in your online course, we can solve it for you!

statistics-lab™ safeguards your study-abroad journey. We have established a strong reputation for stochastic process assignments and exams, guaranteeing reliable, high-quality, and original Statistics and Math writing services. Our experts' work on stochastic process assignments speaks for itself.

## Stochastic Process Assignment Help

#### Fourier Analysis Assignment Help

• Mathematical models
• Linear algebra
• Probability

## Definition of a Stochastic Process

A stochastic or random process can be defined as a collection of random variables indexed by some mathematical set, meaning that each random variable in the process is uniquely associated with an element of that set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set a temporal interpretation. The state space can be, for example, the integers, the real line, or $n$-dimensional Euclidean space. An increment is the amount the process changes between two index values, usually interpreted as two points in time. Owing to its randomness, a stochastic process can have many outcomes, and a single outcome of a stochastic process is called a sample function or realization.

Many fields use observations as functions of time (or, more rarely, spatial variables). In the simplest case, these observations yield a well-defined curve. In fact, from the earth sciences to the humanities, observations are more or less unstable. Therefore, there is a certain uncertainty in the interpretation of these observations, which may be reflected in the use of probabilities to express them.

Stochastic processes generalize the concept of random variables used in probability. A stochastic process is defined as a family of random variables $X(t)$ indexed by the values $t \in T$ (usually time).

From a statistical perspective, we treat all available observations $x(t)$ as realizations of the process, which creates certain difficulties. The first problem is that the process is usually defined over an infinite duration, whereas a realization covers only a finite one, so reality can never be reproduced perfectly. A second, more serious difficulty is that, unlike in random-variable problems, the available information about the process is often reduced to a single realization.

## Key Points of Stochastic Processes

Consider a sequence of independent Bernoulli trials $X_i$ with success probability $p$ and $q=1-p$:
$$\begin{aligned} & P\left(X_i=1\right)=p ; \\ & P\left(X_i=0\right)=q=1-p . \end{aligned}$$

$$P\left(S_n=k\right)=\binom{n}{k} p^k q^{n-k}$$

$$P(N=n)=P\left(S_{n-1}=0\right) \cdot P\left(X_n=1\right)=q^{n-1} p .$$

$$P\left(N_k=n\right)=P\left(S_{n-1}=k-1\right) \cdot P\left(X_n=1\right)=\binom{n-1}{k-1} p^k q^{n-k}$$

$$P\left(P_k=r\right)=P\left(N_k=r+k\right)=\binom{r+k-1}{k-1} p^k q^r=(-1)^k\binom{-r}{k} p^k q^r$$
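As a sanity check on the waiting-time formula for $N_k$ above, the probabilities should sum to one over $n=k, k+1, \ldots$; a sketch with arbitrary illustrative values of $p$ and $k$:

```python
from math import comb

# Normalisation check: sum of P(N_k = n) = C(n-1, k-1) p^k q^(n-k) over n >= k.
# The truncation at n = 400 is a numerical convenience; the tail is negligible.
p, k = 0.4, 3
q = 1 - p
total = sum(comb(n - 1, k - 1) * p**k * q**(n - k) for n in range(k, 400))
assert abs(total - 1.0) < 1e-9
```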
Applications

$$P\{X=N\}=\frac{(2 N) !}{N !(2 N-N) !} p^N(1-p)^N=\binom{2 N}{N}\left(p-p^2\right)^N .$$

$$P\{X=N\}=\binom{2 N}{N}\left(\frac{1}{2}\right)^{2 N} \approx \frac{1}{\sqrt{N \pi}},$$
where for sufficiently large $N$ we apply Stirling's approximation,

$$N ! \sim \sqrt{2 \pi N}\left(\frac{N}{e}\right)^N .$$
Now recall that the expected value of a random variable is given by

$$E[X]=\sum_{n=0}^{\infty} n P(n)$$

## Sample Exercise on Stochastic Processes

Show that when a fair die is tossed successively and indefinitely, the probability of never obtaining a 6 is 0.

Solution: For $n \geq 1$, let $E_n$ be the event of at least one 6 in the first $n$ tosses of the die. Clearly,
$$E_1 \subseteq E_2 \subseteq \cdots \subseteq E_n \subseteq E_{n+1} \subseteq \cdots .$$
Therefore, the $E_n$ form an increasing sequence of events. Note that $\lim_{n \rightarrow \infty} E_n=\bigcup_{i=1}^{\infty} E_i$ is the event that, in successive tosses of the die indefinitely, a 6 eventually occurs. By the Continuity of the Probability Function (Theorem 1.8), we have
$$P\left(\lim_{n \rightarrow \infty} E_n\right)=\lim_{n \rightarrow \infty} P\left(E_n\right)=\lim_{n \rightarrow \infty}\left[1-\left(\frac{5}{6}\right)^n\right]=1-\lim_{n \rightarrow \infty}\left(\frac{5}{6}\right)^n=1-0=1 .$$
This shows that, with probability 1 , eventually a 6 will occur. Therefore, the probability of no 6 ever is 0 .

## Australia Assignment Help｜Multivariate Statistical Analysis 2023


statistics-lab™ safeguards your study-abroad journey. We have established a strong reputation for Multivariate Statistical Analysis assignments and exams, guaranteeing reliable, high-quality, and original Statistics and Math writing services. Our experts' work on Multivariate Statistical Analysis assignments speaks for itself.

## Multivariate Statistical Analysis Assignment Help

#### Probability Theory Assignment Help

$$\begin{aligned} & \beta_{0 j}=\gamma_{00}+\gamma_{01} W_j+u_{0 j} \\ & \beta_{1 j}=\gamma_{10}+u_{1 j} \end{aligned}$$
• $\gamma_{00}$ – the overall intercept (the grand mean of the dependent-variable scores across all groups when all predictors equal 0).
• $W_j$ – the level-2 predictor.
• $\gamma_{01}$ – the overall regression coefficient (slope) between the dependent variable and the level-2 predictor $W_j$.
• $u_{0 j}$ – the random error component for the deviation of the group intercept from the overall intercept.
• $\gamma_{10}$ – the overall regression coefficient (slope) between the dependent variable and the level-1 predictor $X_{i j}$.
• $u_{1 j}$ – the error component of the slope (the deviation of the group slope from the overall slope).

## Sample Exercise on Multilevel Models

Problem 1. This box summarizes the terminology for the various algebraic terms used in the models:

• $y_{i j}$ is the dependent variable: the outcome for individual $i$ living in neighbourhood $j$. Individuals are numbered $i=1, \ldots, N$ and each lives in one neighbourhood $j=1, \ldots, J$. There are $n_j$ individuals in neighbourhood $j$, so $N=\sum_{j=1}^J n_j$.
• $x_{p i j}$ are the independent variables, measured on individual $i$ in neighbourhood $j$. The subscript $p$ is used to distinguish between the variables.
• $x_{p j}$ are independent variables measured at the neighbourhood level; such a variable takes the same value for all individuals living in neighbourhood $j$.
• $\beta_0$ is used to denote the intercept.
• $\beta_p$ is the regression coefficient associated with $x_{p i j}$ or $x_{p j}$.
• $u_{0 j}$ is the estimated effect or residual for area $j$: the difference in the outcome for an individual in neighbourhood $j$ compared with an individual in the average neighbourhood, after taking into account those characteristics included in the model. The 0 in the subscript denotes that this is a random-intercept residual, a departure from the overall intercept $\beta_0$ applying equally to everyone in neighbourhood $j$ regardless of individual characteristics.
• $u_{p j}$ is the slope residual for neighbourhood $j$ associated with the independent variable $x_{p i j}$ or $x_{p j}$. Just as $u_{0 j}$ denotes a departure from the overall intercept $\beta_0$, $u_{p j}$ indicates the extent of a departure from the overall slope in a random-slope model.
• $e_{0 i j}$ is the individual-level residual or error term for individual $i$ in neighbourhood $j$.
• $\sigma_{u 0}^2$ is the variance of the neighbourhood-level intercept residuals $u_{0 j}$.
• $\sigma_{u p}^2$ is the variance of the neighbourhood-level slope residuals $u_{p j}$.
• $\sigma_{u 0 p}$ is the covariance between the neighbourhood-level intercept residuals $u_{0 j}$ and the slope residuals $u_{p j}$.
• $\sigma_{e 0}^2$ is the variance of the individual-level errors $e_{0 i j}$.
• $\rho_{\mathrm{I}}$ is the intraclass correlation coefficient: the proportion of the total variation in the outcome that is attributable to differences between areas.

## Final Summary

With this overview of multilevel models you should now have a first impression of the subject. If you are still unsure or find the material difficult, you can rely on our writing and tutoring services. We have experienced experts in every field who will make sure your essay or assignment fully meets the requirements, is 100% original and plagiarism-free, and earns a high mark. Contact our customer service at any time for academic help.

## Graph Theory Assignment Help 2023

If you are also running into problems in Graph Theory, feel free to add our customer-service WeChat account; we will provide you with professional help.

statistics-lab™ safeguards your study-abroad journey. We have established a strong reputation for Graph Theory assignments and exams, guaranteeing reliable, high-quality, and original Statistics and Math writing services.

Graphs can be used to display relationships between different things.

[Figure: an example graph consisting of six nodes and seven edges]

For example, when looking at a railway or bus route map, the question is how the stations (nodes) are connected by lines (edges); the exact curvature of a railway line is usually not an important matter. Route maps therefore often depict the distances between stations, the fine layout, and the shapes of the lines differently from geographic reality. In other words, for the user of a route map, how the stations are connected is the essential information. Graph theory studies the various properties of such graphs. When the question is not only how things are connected but also from where to where, arrows are attached to the edges; such a graph is called a directed graph or digraph, and a graph without arrows is called an undirected graph.

Graph theory encompasses several different topics, including the following:

#### Mathematical Structures

In mathematics, a structure on a set consists of additional mathematical objects that relate to the set in some way, making the set easier to visualize and study, usable as a computational tool, and giving the set and its elements a specific meaning. Possible structures include metrics, algebraic structures (groups, fields, and so on), topologies, measures, orderings, equivalences, and differential structures. Sometimes a set is endowed with several structures at once, which lets mathematicians study the rich interplay between them; for example, an order induces a topology. As another example, if a set is both a group and carries a topology, and the two structures are related in a suitable way, it becomes a topological group.

In many areas of mathematics, maps between sets that preserve certain structures (sending structure on the domain to the equivalent structure on the codomain) are important; they are called morphisms. Examples include homomorphisms, which preserve algebraic structure; homeomorphisms, which preserve topological structure; and diffeomorphisms, which preserve differential structure.

#### Discrete Mathematics

Discrete mathematics is, in principle, the mathematics of discrete (that is, non-continuous) objects. It is sometimes also called finite mathematics. It is often used to describe, comprehensively and abstractly, applied fields involving graph theory, combinatorics, optimization problems, computational geometry, programming, and the theory of algorithms. Discrete mathematics also includes number theory; beyond elementary number theory, however, the subject connects with analysis and other areas (analytic number theory) and so goes beyond the scope of discrete mathematics.

Other related topics:

• Multigraph
• Algebraic graph theory

## History of Graph Theory

The first text to consider graphs as mathematical entities is Euler's publication on the 'Seven Bridges of Königsberg'. This text also represents the first time that a problem in topological geometry, one that does not depend on any measurement, was addressed: the Königsberg bridges problem. In the 19th century the four-colour problem was posed and discussed; it proved very challenging and was only solved in the second half of the 20th century. The problem of Hamiltonian paths was also introduced. Until the middle of the 20th century little else was discovered. In the second half of the 20th century, studies and results developed extensively, in step with the strong growth of combinatorics and automatic computation. On the one hand, the introduction of the computer allowed experimental investigation of graphs (notably in the proof of the four-colour theorem); on the other hand, it required graph theory to investigate algorithms and models with strong practical impact. Within fifty years, graph theory has become a highly developed chapter of mathematics, rich in profound results and with strong application influences.

For statistics ghostwriting, trust statistics-lab™.
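The running example above — a graph with six nodes and seven edges — can be written down directly as an adjacency list. The particular edge set below is an assumption for illustration, since the original figure is not reproduced here.

```python
# Undirected graph with 6 nodes and 7 edges (edge set assumed for illustration).
edges = [(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)]

adj = {v: set() for v in range(1, 7)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)   # undirected: store both directions

# The neighbourhood of a vertex is just its adjacency set;
# vertex 6 has degree 1, i.e. it is a leaf (pendant) vertex.
degree = {v: len(nbrs) for v, nbrs in adj.items()}
print(degree[6])                  # 1
print(sum(degree.values()) // 2)  # 7 edges (handshake lemma)
```

For a directed graph, each ordered pair (arc) would be stored only once, in the direction given, instead of in both adjacency sets.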
statistics-lab™ safeguards your study-abroad journey.

## Key Concepts in Graph Theory

What is a vertex (graph theory)?

In discrete mathematics, and more specifically in graph theory, a vertex (plural: vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a labelled circle, and an edge by a line or arrow extending from one vertex to another.

From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may carry additional structure depending on the application; for example, a semantic network is a graph whose vertices represent concepts or classes of objects.

[Figure: a graph with 6 vertices and 7 edges, in which the leftmost vertex, number 6, is a leaf or pendant vertex]

The two vertices forming an edge are called the endpoints of that edge, and the edge is said to be incident to the vertices. A vertex $w$ is adjacent to another vertex $v$ if the graph contains an edge $(v, w)$. The neighbourhood of a vertex $v$ is the induced subgraph of the graph formed by all vertices adjacent to $v$.

What is a directed graph?

Formally, a directed graph is an ordered pair $G=(V, A)$ where

• $V$ is a set whose elements are called vertices, nodes, or points;
• $A$ is a set of ordered pairs of vertices, called arcs, directed edges (sometimes simply edges, with the corresponding set named $E$ instead of $A$), arrows, or directed lines.

It differs from an ordinary or undirected graph, which is defined in terms of unordered pairs of vertices, usually called edges, links, or lines.

The definition above does not allow a directed graph to have multiple arcs with the same source and target nodes, but some authors consider a broader definition that does allow such multiple arcs (i.e., the arc set is allowed to be a multiset). These objects are sometimes called directed multigraphs (or multidigraphs).

On the other hand, the definition above allows a directed graph to have loops (arcs that directly connect a node to itself), while some authors consider a narrower definition that forbids them. Directed graphs without loops may be called simple directed graphs, while those with loops may be called loop digraphs (see the section "Types of directed graph").

What is graph coloring?

Graph coloring assigns colors to certain elements of a graph so that given constraints are satisfied. In its simplest form, the problem is to color all vertices so that adjacent vertices never share the same color: this is vertex coloring. Similarly, edge coloring is the problem of coloring all edges so that adjacent edges never share the same color, and face coloring is the problem of coloring each region (face) bounded by edges in a planar graph so that adjacent faces never share the same color.

## Sample Homework for Graph Theory

Below is a homework problem on Graph Theory.

Problem 1. Let $T$ be a normal tree in $G$.
(i) Any two vertices $x, y \in T$ are separated in $G$ by the set $\lceil x\rceil \cap\lceil y\rceil$.
(ii) If $S \subseteq V(T)=V(G)$ and $S$ is down-closed, then the components of $G-S$ are spanned by the sets $\lfloor x\rfloor$ with $x$ minimal in $T-S$.

Proof. (i) Let $P$ be any $x$–$y$ path in $G$. Since $T$ is normal, the vertices of $P$ in $T$ form a sequence $x=t_1, \ldots, t_n=y$ for which $t_i$ and $t_{i+1}$ are always comparable in the tree order of $T$. Consider a minimal such sequence of vertices in $P \cap T$. In this sequence we cannot have $t_{i-1}<t_i>t_{i+1}$ for any $i$, since $t_{i-1}$ and $t_{i+1}$ would then be comparable and deleting $t_i$ would yield a smaller such sequence. So
$$
x=t_1>\ldots>t_k<\ldots<t_n=y
$$
for some $k \in\{1, \ldots, n\}$. As $t_k \in\lceil x\rceil \cap\lceil y\rceil \cap V(P)$, the result follows.

(ii) Since $S$ is down-closed, the upper neighbours in $T$ of any vertex of $G-S$ are again in $G-S$ (and clearly in the same component), so the components $C$ of $G-S$ are up-closed. As $S$ is down-closed, minimal vertices of $C$ are also minimal in $G-S$. By (i), this means that $C$ has only one minimal vertex $x$ and equals its up-closure $\lfloor x\rfloor$.

Normal spanning trees are also called depth-first search trees, because of the way they arise in computer searches on graphs. This fact is often used to prove their existence. The following inductive proof, however, is simpler and illuminates nicely how normal trees capture the structure of their host graphs.

## Final Summary

This overview of Graph Theory should have given you an initial feel for the course. If you are still unsure, or find the material difficult, you can rely on our ghostwriting and tutoring services. We have experienced experts in every field, and they will make sure your essays and assignments fully meet the requirements, are 100% original and plagiarism-free, and earn high marks. Whenever you need academic help, contact our customer service.

## Information Theory Assignment & Exam Help 2023

If you run into difficulties in Information Theory, feel free to add our WeChat account and contact our customer service; we will provide professional help. statistics-lab™ has long been committed to online course services for international students, covering courses across disciplines: Finance, Economics, Mathematics, Accounting, Literature, Arts, and more. Besides full online-course management, statistics-lab™ also accepts individual course tasks. Whatever difficulty you meet in an online course, we can solve it for you! statistics-lab™ safeguards your study-abroad journey and has built a solid reputation for reliable, high-quality, original Statistics and Math ghostwriting; our experts' work on Information Theory assignments goes without saying.

## Information Theory Assignment & Exam Help

Information theory is the mathematical study of information and communication. It is a branch of applied mathematics concerned mainly with quantifying data, with the goal of storing as much data as possible on a medium or sending it over a communication channel. A measure of data known as information entropy is expressed as the average number of bits needed to store or communicate the data. For example, if the daily weather can be described with an entropy of 3 bits, then over enough days, "on average" about 3 bits per day (each bit being 0 or 1) are needed to describe the weather.

Fundamental applications of information theory include the ZIP format (lossless compression), MP3 (lossy compression), and DSL (transmission line coding). The field is also interdisciplinary, intersecting mathematics, statistics, computer science, physics, neuroscience, and electronics. Its influence can be seen in events as varied as the success of the Voyager deep-space missions, the invention of the CD, the realization of mobile telephony, the development of the Internet, the study of linguistics and human perception, and the understanding of black holes.

Information theory contains several distinct topics, listed below:

#### Probability theory

Probability theory (French: théorie des probabilités; German: Wahrscheinlichkeitstheorie) is the branch of mathematics that provides and analyses mathematical models of chance phenomena. It originated in the study of gambling (such as dice throwing) and is still used as a foundational theory in fields such as insurance and investment. Although the term "probability theory" is sometimes also used to refer to the field of probability calculation, that usage is not covered here.

#### Computer science

Computer science (CS) is the field that studies the theoretical foundations of information and computation, together with their implementation and application on computers. Computer science has many subfields. Some are application-oriented, such as computer graphics, while others are more mathematical in nature, such as the area known as theoretical computer science. Computational science is the field that responds to the "computational needs" of science and engineering, and the means of meeting those needs is high-performance computing. Another seemingly simple classification is "hardware" (e.g., computer engineering) versus "software" (e.g., programming), but some areas, such as reconfigurable computing, can be described as both "hardware" and "software", so this is not a clean classification.

Other related courses we cover:

• Statistical inference
• Statistical mechanics
• Quantum computing

## History of Information Theory

In July and October 1948, Claude Shannon published "A Mathematical Theory of Communication" in the Bell System Technical Journal; this was the decisive event that determined the birth of information theory and immediately brought it to the attention of the world. Prior to this article, Bell Labs had developed few information-theoretic concepts, and the assumption of dealing with equivalent events had been implicit.
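Nyquist's relation $W=K \log m$ and Hartley's measure $H=\log S^n=n \log S$, discussed in this history, are easy to check numerically. The symbol counts below are illustrative assumptions.

```python
import math

def hartley_information(S: int, n: int, base: float = 10) -> float:
    """Hartley's measure H = log(S^n) = n*log(S).

    Base 10 gives the answer in decimal digits (hartleys),
    base 2 gives it in bits.
    """
    return n * math.log(S, base)

# A message of n = 3 symbols drawn from an alphabet of S = 10 symbols
# carries 3 decimal digits of information:
print(hartley_information(10, 3))          # 3.0 hartleys
# The same message measured in bits:
print(hartley_information(10, 3, base=2))  # ~9.97 bits
```

Because $\log S^n = n \log S$, the measure is additive in the number of transmitted symbols, which is exactly the property Hartley wanted from a quantitative notion of information.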
Harry Nyquist's 1924 article "Certain Factors Affecting Telegraph Speed" contained theoretical sections that quantified "intelligence" and the "speed" at which it can be transmitted through a communication system, giving the relation $W=K \log m$, where $W$ is the speed of transmission of intelligence, $m$ is the number of different voltage levels that can be selected at each step, and $K$ is a constant. Ralph Hartley's 1928 article "Transmission of Information" used the term information to denote a measurable quantity reflecting the ability of a receiver to distinguish one sequence of symbols from another; information is defined in the article as $H=\log S^n=n \log S$, where $S$ is the number of possible symbols and $n$ is the number of symbols transmitted. Thus, the natural unit of measurement for information is the decimal digit, later renamed the hartley in his honour. Alan Turing used a similar idea in 1940 in his statistical analysis of the breaking of the Enigma code used by the Germans in World War II.

## Key Concepts in Information Theory

What is a probability mass function?

A probability mass function (PMF) is a function in probability theory and statistics that maps each value of a discrete random variable to the probability of the variable taking that value (it is sometimes simply called the probability function). Given a discrete random variable $X: S \rightarrow \mathbb{R}$, the probability function is
$$
p_X(x)=P(X=x)=P(\{s \in S: X(s)=x\})
$$
This function associates with each value $x$ taken by the random variable $X$ the probability that $X$ takes that value. Moreover, the following identity must hold: $\sum_{i=1}^{\infty} p_X\left(x_i\right)=1$. To extend this definition to the whole real line, we set the value to 0 for every value $x$ that $X$ cannot take (i.e., that is not contained in the support of $X$), that is,
$$
p_X\colon \mathbb{R} \longrightarrow[0,1], \qquad p_X(x)= \begin{cases}P(X=x), & x \in S, \\ 0, & x \in \mathbb{R} \setminus S .\end{cases}
$$
Since $S$, the support of $X$, is a countable set, $p_X(x)$ is zero almost everywhere. In the case of a discrete multivariate variable $X=\left(X_1, X_2, \ldots, X_n\right)$ (i.e., one whose support is a discrete subset of $\mathbb{R}^n$), the joint probability function is defined as
$$
p_X\left(x_1, x_2, \ldots, x_n\right)=P\left(\left(X_1=x_1\right) \cap \left(X_2=x_2\right) \cap \ldots \cap\left(X_n=x_n\right)\right)
$$
For notational convenience, the right-hand side is usually written more simply as $P\left(X_1=x_1, X_2=x_2, \ldots, X_n=x_n\right)$.

What is directed information?

Directed information is an information-theoretic measure that quantifies the flow of information from a random string $X^n=\left(X_1, X_2, \ldots, X_n\right)$ to a random string $Y^n=\left(Y_1, Y_2, \ldots, Y_n\right)$. The term directed information was coined by James Massey, who defined it as
$$
I\left(X^n \rightarrow Y^n\right) \triangleq \sum_{i=1}^n I\left(X^i ; Y_i \mid Y^{i-1}\right)
$$
where $I\left(X^i ; Y_i \mid Y^{i-1}\right)$ is the conditional mutual information $I\left(X_1, X_2, \ldots, X_i ; Y_i \mid Y_1, Y_2, \ldots, Y_{i-1}\right)$.

Directed information applies to problems where causality plays an important role, such as the channel capacity of discrete memoryless networks with feedback, the capacity of networks with in-block memory, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics.

What is a probability distribution?

More formally, given a probability space $(\Omega, \mathcal{F}, \nu)$ (where $\Omega$ is a set called the sample space or set of events, $\mathcal{F}$ is a sigma-algebra on $\Omega$, and $\nu$ is a probability measure) and a measurable space $(E, \mathcal{E})$, an $(E, \mathcal{E})$-valued random variable is a measurable function $X: \Omega \rightarrow E$ from the sample space to $E$.

In this definition, the function $X$ is understood to be measurable if for every $A \in \mathcal{E}$ we have $X^{-1}(A) \in \mathcal{F}$. This definition of measurability generalizes the one given by Lindgren (1976): a function $X$ defined on a sample space $\Omega$ is said to be measurable with respect to a Borel field $\mathcal{B}$ if and only if the event $\{\omega \in \Omega: X(\omega) \leq \lambda\}$ belongs to $\mathcal{B}$ for every $\lambda$.

If $E$ is a topological space and $\mathcal{E}$ is the Borel sigma-algebra, then $X$ is also called an $E$-valued random variable. Moreover, if $E=\mathbb{R}^n$, then $X$ is simply called a random variable.

In other words, a random variable $X$ is a way for a probability measure defined on the set of events $\Omega$ to induce a probability measure on the target measurable space $E$.

• A one-dimensional random variable (i.e., with values in $\mathbb{R}$) is called simple or univariate.
• A multidimensional random variable is called multivariate (bivariate, trivariate, $k$-tuple).

A random variable that depends on a parameter $t$ (where $t$ usually represents time) is regarded as a stochastic process.

## Sample Homework for Information Theory

Below is a homework problem on Information Theory.

Problem 1. For nonnegative numbers $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$,
$$
\sum_{i=1}^n a_i \log \frac{a_i}{b_i} \geq\left(\sum_{i=1}^n a_i\right) \log \frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i}
$$
with equality if and only if $\frac{a_i}{b_i}=$ const.

Proof: Assume without loss of generality that $a_i>0$ and $b_i>0$. The function $f(t)=t \log t$ is strictly convex, since $f^{\prime \prime}(t)=\frac{1}{t} \log e>0$ for all positive $t$. Hence by Jensen's inequality, we have
$$
\sum \alpha_i f\left(t_i\right) \geq f\left(\sum \alpha_i t_i\right)
$$
for $\alpha_i \geq 0$, $\sum_i \alpha_i=1$. Setting $\alpha_i=\frac{b_i}{\sum_{j=1}^n b_j}$ and $t_i=\frac{a_i}{b_i}$, we obtain
$$
\sum \frac{a_i}{\sum b_j} \log \frac{a_i}{b_i} \geq\left(\sum \frac{a_i}{\sum b_j}\right) \log \left(\sum \frac{a_i}{\sum b_j}\right),
$$
which is the log sum inequality.

We now use the log sum inequality to prove various convexity results. We begin by reproving Theorem 2.6.3, which states that $D(p \| q) \geq 0$ with equality if and only if $p(x)=q(x)$. By the log sum inequality,
$$
\begin{aligned}
D(p \| q) & =\sum p(x) \log \frac{p(x)}{q(x)} \\
& \geq\left(\sum p(x)\right) \log \frac{\sum p(x)}{\sum q(x)} \\
& =1 \log \frac{1}{1}=0
\end{aligned}
$$
with equality if and only if $\frac{p(x)}{q(x)}=c$. Since both $p$ and $q$ are probability mass functions, $c=1$, and hence we have $D(p \| q)=0$ if and only if $p(x)=q(x)$ for all $x$.
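The result just proved can be checked numerically: a direct computation of $D(p \| q)$ is nonnegative, and zero exactly when $p=q$. The example distributions below are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p||q) = sum over x of p(x) * log2(p(x)/q(x)).

    Terms with p(x) = 0 are taken as 0 by the convention 0*log 0 = 0;
    we assume q(x) > 0 wherever p(x) > 0.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]
print(kl_divergence(p, q) >= 0)  # True: D(p||q) >= 0
print(kl_divergence(p, p))       # 0.0: equality iff p(x) = q(x) for all x
```

Here the relative entropy is measured in bits (base-2 logarithm), matching the convention used elsewhere in this section.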