## EEL6029 Statistical Inference Course Overview

Key topics that might be covered in such a course include:

1. Probability theory: This would include understanding the axioms of probability and basic rules of probability, such as conditional probability and Bayes’ theorem.
2. Statistical inference: This would cover topics such as estimation, hypothesis testing, and confidence intervals. Students would learn about point estimation, maximum likelihood estimation, and Bayesian estimation. They would also learn about hypothesis testing, including null and alternative hypotheses, p-values, and type I and type II errors.

## PREREQUISITES

1. Linear regression: This would involve understanding the basics of simple and multiple linear regression, including the assumptions and limitations of the models. Students would learn about least squares estimation, hypothesis testing, and confidence intervals.
2. Bayesian statistics: This would cover the principles of Bayesian inference, including Bayes’ theorem, prior and posterior distributions, and Bayesian model selection. Students would learn about Markov Chain Monte Carlo (MCMC) methods for sampling from complex posterior distributions.
3. Machine learning: This would include an introduction to basic machine learning techniques such as decision trees, random forests, and support vector machines. Students would learn about the trade-offs between different types of models, as well as methods for model selection and evaluation.
4. Time series analysis: This would involve understanding the basics of time series models, including autoregressive (AR), moving average (MA), and autoregressive integrated moving average (ARIMA) models. Students would learn about forecasting and time series model selection.
5. Cyber systems modeling: This would cover modeling of complex cyber systems using mathematical tools, such as graph theory and network analysis. Students would learn about the basics of cyber security and how to model and analyze cyber attacks using statistical methods.

Overall, this course would provide a strong foundation in mathematical statistics and its applications to complex cyber systems, data analytics, and Bayesian intelligence.

## EEL6029 Statistical inference HELP (EXAM HELP, ONLINE TUTOR)

An urn contains 11 balls numbered $0,1, \ldots, 10$. A ball is selected at random. Suppose the number on the selected ball is $k$. A second urn is filled with $k$ red balls and $10-k$ blue balls. Five balls are selected at random with replacement from the second urn.
(a) Find the probability that the sample from the second urn consists of three red and two blue balls.
(b) Given that the sample from the second urn consists of three red and two blue balls, find the conditional probability that the ball selected from the first urn had the number $k=6$.

(a) Given that ball $k$ is chosen from the first urn, the probability of choosing three red and two blue balls from the second when sampling with replacement is the binomial probability
$$P(\text{three red} \mid \text{ball } k)=\binom{5}{3}\left(\frac{k}{10}\right)^3\left(1-\frac{k}{10}\right)^2 .$$
The probability of choosing ball $k$ from the first urn and three red balls from the second is therefore
$$\begin{aligned} P(\text{three red and ball } k) &= P(\text{three red} \mid \text{ball } k)\, P(\text{ball } k) \\ &= \binom{5}{3}\left(\frac{k}{10}\right)^3\left(1-\frac{k}{10}\right)^2 \times \frac{1}{11}, \end{aligned}$$
and the unconditional probability of choosing three red balls from the second urn is
$$\begin{aligned} P(\text{three red}) &= \sum_{k=0}^{10} P(\text{three red and ball } k) \\ &= \sum_{k=0}^{10} \binom{5}{3}\left(\frac{k}{10}\right)^3\left(1-\frac{k}{10}\right)^2 \times \frac{1}{11} \\ &\approx 0.1515 . \end{aligned}$$
This can be computed in R as

```r
> sum(dbinom(3, 5, (0:10)/10) / 11)
[1] 0.1515
```
(b) The conditional probability that the chosen ball from the first urn was numbered $k=6$, given that three red balls were chosen from the second, is
$$\begin{aligned} P(\text{ball } k=6 \mid \text{three red}) &= \frac{P(\text{three red and ball } k=6)}{P(\text{three red})} \\ &= \frac{\binom{5}{3}\left(\frac{6}{10}\right)^3\left(1-\frac{6}{10}\right)^2 \times \frac{1}{11}}{P(\text{three red})} \\ &\approx \frac{0.03142}{0.1515} \approx 0.2074 . \end{aligned}$$
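The conditional probability in (b) can be checked numerically in R, reusing the same `dbinom` computation as in part (a):

```r
# Joint probability of {three red} and {ball k = 6}:
num <- dbinom(3, 5, 0.6) / 11
# Unconditional probability of {three red}, as in part (a):
den <- sum(dbinom(3, 5, (0:10) / 10) / 11)
num / den  # approximately 0.2074
```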

1. A coin has probability $p$ of coming up heads and $1-p$ of tails, with $0<p<1$. An experiment is conducted with the following steps:
   1. Flip the coin.
   2. Flip the coin a second time.
   3. If both flips show the same face, return to step 1.
   4. Otherwise let the result of the experiment be the result of the last flip at step 2.

Assume flips are independent.
(a) The R function `sim1(p)` simulates this experiment, with 1 representing heads and 0 tails. Use this function to estimate the probability of heads for $p=0.2, 0.4, 0.6, 0.8$.
(b) Find the probability that the result of the experiment is a head mathematically as a function of $p$.
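The definition of `sim1` is not reproduced in this copy of the assignment. A minimal sketch consistent with the description of the experiment (the exact form of the original function is an assumption) is:

```r
# Hypothetical reconstruction of sim1: make pairs of independent
# flips until the two flips differ, then return the second flip
# (1 = heads, 0 = tails).
sim1 <- function(p) {
  repeat {
    flips <- rbinom(2, 1, p)  # two independent Bernoulli(p) flips
    if (flips[1] != flips[2]) return(flips[2])
  }
}

mean(replicate(10000, sim1(0.2)))  # should be close to 0.5
```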

1. (a) One possible approach:

```r
> sapply(seq(0.2, 0.9, by = 0.2),
+        function(p) mean(replicate(10000, sim1(p))))
[1] 0.4913 0.4965 0.5034 0.4991
```
This suggests that the probability of heads may be 0.5 for any $p$.
(b) Let $A$ be the event that the process returns a head, and let $B$ be the event that the process ends after the first two flips. Then
$$P(A)=P(A \cap B)+P\left(A \mid B^c\right) P\left(B^c\right)$$
Now $A \cap B$ is the event that the first toss is a tail and the second toss is a head, so $P(A \cap B)=(1-p)p$. $B$ is the event that either the first toss is a head and the second a tail, or the first is a tail and the second is a head; so $P(B)=2p(1-p)$ and $P\left(B^c\right)=1-2p(1-p)$. If the process does not end with the first two tosses then it starts over again independently, so $P\left(A \mid B^c\right)=P(A)$. Therefore $P(A)$ satisfies
$$P(A)=p(1-p)+P(A)(1-2 p(1-p))$$
and thus
$$P(A)=\frac{p(1-p)}{2 p(1-p)}=\frac{1}{2},$$
as the simulation in part (a) suggests. The requirement that $p>0$ and $p<1$ ensures that the denominator is positive and that the process is guaranteed to end.

## Textbooks

• An Introduction to Stochastic Modeling, Fourth Edition by Pinsky and Karlin (freely
available through the university library here)
• Essentials of Stochastic Processes, Third Edition by Durrett (freely available through
the university library here)
To reiterate, the textbooks are freely available through the university library. Note that
you must be connected to the university Wi-Fi or VPN to access the ebooks from the library
links. Furthermore, the library links take some time to populate, so do not be alarmed if
the webpage looks bare for a few seconds.

Statistics-lab™ can provide homework help, exam help, and tutoring services for the usf.edu EEL6029 Statistical inference course! Look for Statistics-lab™. Statistics-lab™ is here to support your studies abroad.

## EEL6029 Statistical inference HELP (EXAM HELP, ONLINE TUTOR)

1. For each of the following experiments, describe a reasonable sample space:
(a) Toss a coin four times.
(b) Count the number of insect-damaged leaves on a plant.
(c) Measure the lifetime (in hours) of a particular brand of light bulb.
(d) Three people arrive at an airport checkpoint. Two of the three are randomly chosen to complete a survey.
2. The set-theoretic difference $A \backslash B=A \cap B^c$ is the set of all elements in $A$ that are not in $B$. The symmetric difference $A \Delta B=(A \backslash B) \cup(B \backslash A)$ is the set of all elements in either $A$ or $B$ but not both. Verify the following identities:
(a) $A \backslash B=A \backslash(A \cap B)$
(b) $A \Delta B=A^c \Delta B^c$
(c) $A \cup B=A \cup(B \backslash A)$
(d) $B=(B \cap A) \cup\left(B \cap A^c\right)$
3. Problem 1.4 in the textbook
4. Problem 1.5 in the textbook
5. Problem 1.13 in the textbook

1. (a) Toss coin 4 times:
$$\{(H, H, H, H), \ldots\}=\left\{\left(x_1, x_2, x_3, x_4\right): x_i \in\{H, T\}\right\}$$
or
$$\{0,1,2,3,4\}$$
(b) Count number of insect-damaged leaves:
$$\{0,1, \ldots, N\}, \quad N = \text{number of leaves (or an upper bound)},$$
or
$$\{0,1,2, \ldots\} \quad \text{if no upper bound is available.}$$
(c) Measure lifetime in hours:
$$\begin{aligned} &\{0,1,2,\ldots\} &&\text{if rounded (an upper limit can be imposed)} \\ &[0, \infty) &&\text{if fractional hours are allowed} \end{aligned}$$
(d) Two out of three people chosen to complete a survey: Suppose the people are labeled $A$, $B$, and $C$. One possible sample space is the collection of all subsets of size 2 that can be chosen from the set $\{A, B, C\}$:
$$\{\{A, B\},\{A, C\},\{B, C\}\}.$$
Another possibility is the collection of all ordered pairs that can be formed:
$$\{(A, B),(B, A),(A, C),(C, A),(B, C),(C, B)\}.$$
2. (a) $A \backslash B$ is defined as $A \cap B^c$. To see that $A \backslash B=A \backslash(A \cap B)$:
$$\begin{aligned} A \backslash(A \cap B) &= A \cap(A \cap B)^c && \\ &= A \cap\left(A^c \cup B^c\right) && \text{De Morgan's law} \\ &= \left(A \cap A^c\right) \cup\left(A \cap B^c\right) && \text{distributive law} \\ &= \emptyset \cup\left(A \cap B^c\right) && \\ &= A \backslash B && \end{aligned}$$
(b) $A \Delta B=A^c \Delta B^c$ : For any two sets $A$ and $B$
$$\begin{aligned} A \backslash B &= A \cap B^c \\ &= B^c \cap A \\ &= B^c \cap\left(A^c\right)^c \\ &= B^c \backslash A^c \end{aligned}$$
So
$$\begin{aligned} A \Delta B &= (A \backslash B) \cup (B \backslash A) \\ &= \left(A^c \backslash B^c\right) \cup \left(B^c \backslash A^c\right) \\ &= A^c \Delta B^c \end{aligned}$$
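All four identities can also be spot-checked numerically in R on small example sets (the universe `U` and the sets `A` and `B` below are arbitrary choices for illustration):

```r
U <- 1:10                  # universe, used to form complements
A <- c(1, 2, 3, 4)
B <- c(3, 4, 5, 6)
Ac <- setdiff(U, A)        # complement of A within U
Bc <- setdiff(U, B)
# symmetric difference: elements in exactly one of X, Y
symdiff <- function(X, Y) union(setdiff(X, Y), setdiff(Y, X))

setequal(setdiff(A, B), setdiff(A, intersect(A, B)))    # (a) TRUE
setequal(symdiff(A, B), symdiff(Ac, Bc))                # (b) TRUE
setequal(union(A, B), union(A, setdiff(B, A)))          # (c) TRUE
setequal(B, union(intersect(B, A), intersect(B, Ac)))   # (d) TRUE
```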

1. Suppose $n$ balls are placed at random in $n$ cells; cells can contain more than one ball.
(a) Show that the probability that exactly one cell remains empty is
$$\frac{n(n-1)\binom{n}{2}(n-2)!}{n^n}=\frac{\binom{n}{2}\, n!}{n^n}$$
(b) The R function defined as

```r
sim1 <- function(n)
  length(unique(sample(1:n, n, replace = TRUE)))
```

is one way to simulate an assignment of balls to cells and count the number of occupied cells. Use this function to compute the probability of exactly one empty cell for $n=4,6,8,10$ by simulation, and compare the simulated results to values computed with the formula of part (a). The function `replicate` may be useful.

1. (a) $n$ balls are assigned at random into $n$ cells. $S$ has $n^n$ elements since multiple balls per cell are allowed.

Assume equally likely outcomes. Equivalently, assume balls are assigned independently.
Exactly one cell empty means:

• one empty cell
• one cell with two balls
• $n-2$ cells with one ball each
Choices:
• $n$ for the one empty cell
• $n-1$ for the cell with two balls
• $\binom{n}{2}$ for the balls to use for the two-ball cell.
• $(n-2)$ ! arrangements for the other balls in their cells.
So the number of ways to get one empty is
$$n(n-1)\binom{n}{2}(n-2)!=\binom{n}{2}\, n!$$
The probability of this arrangement is
$$\frac{\binom{n}{2}\, n!}{n^n} .$$

(b) We can define a function `sim1empty` to compute the probability of one empty cell by simulation using `N` simulation replicates:

```r
sim1empty <- function(n, N)
  mean(replicate(N, sim1(n) == n - 1))
```

Probability estimates and standard errors for $n=4,6,8,10$ can then be computed and displayed with

```r
N <- 10000
n <- seq(4, 10, by = 2)
phat <- sapply(n, sim1empty, N)
se <- sqrt(phat * (1 - phat) / N)
plot(n, phat)
segments(n, phat - 2 * se, n, phat + 2 * se)
```

The formula derived in part (a) can be computed as

```r
p1empty <- function(n) choose(n, 2) * factorial(n) / n^n
```

and added to the plot with

```r
points(n, p1empty(n), col = "red")
```
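As a quick sanity check on the simulation, the exact formula from part (a) can be evaluated directly for the four values of $n$:

```r
# Exact probability of exactly one empty cell: choose(n, 2) * n! / n^n
p_exact <- sapply(c(4, 6, 8, 10), function(n) choose(n, 2) * factorial(n) / n^n)
round(p_exact, 4)  # 0.5625 0.2315 0.0673 0.0163
```

The simulated estimates should fall within roughly two standard errors of these values.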
