
BayesianFROC (version 1.0.0)

chi_square_at_replicated_data_and_MCMC_samples_MRMC: chi square at replicated data drawn (only once) from the model for each MCMC sample.

Description

The return value is intended to be passed to the calculator of the posterior predictive p value.

Usage

chi_square_at_replicated_data_and_MCMC_samples_MRMC(
  StanS4class,
  summary = TRUE,
  seed = NA,
  serial.number = NA
)

Arguments

StanS4class

An S4 object of class stanfitExtended, which inherits from the S4 class stanfit. This R object is a fitted model object returned by the function fit_Bayesian_FROC().

To be passed to DrawCurves() ... etc

summary

Logical: TRUE or FALSE. Whether to print a verbose summary. If TRUE, a verbose summary is printed in the R console; if FALSE, the output is minimal. (Arguably, this parameter should have been named verbose.)

seed

Used only during package development. If a seed is passed, it is printed in the procedure indicator.

serial.number

A positive integer or character string, used for package development. The author uses it to print the serial number of a validation run; it is used in the validation function.

Value

A list.

From given posterior MCMC samples \(\theta_1,\theta_2,...,\theta_i,...,\theta_n\) (provided by a stanfitExtended object), this function returns a vector of the form \(\chi(y_i|\theta_i), i=1,2,...\), where each dataset \(y_i\) is drawn from the corresponding likelihood \(likelihood(.|\theta_i), i=1,2,...\), namely,

$$y_i \sim likelihood(.| \theta_i).$$

The return value also retains these \(y_i, i=1,2,..\).
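A minimal sketch of inspecting this return value follows, assuming a fitted object fit. The element List_of_dataList appears in the Examples section; any other structure of the list is an assumption, not guaranteed by this documentation.

```r
# Sketch: inspect the return value (element names other than
# List_of_dataList are not guaranteed by this documentation).
a <- chi_square_at_replicated_data_and_MCMC_samples_MRMC(fit)
str(a, max.level = 1)        # the chi-square vector plus the replicated y_i
length(a$List_of_dataList)   # one replicated dataset per MCMC draw
```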

Revised 2019 Dec. 2

Details

For a given dataset \(D_0\), let \(\pi(.|D_0)\) denote the posterior distribution given the data \(D_0\).

Then, we draw posterior samples:

$$\theta_1 \sim \pi(.| D_0),$$ $$\theta_2 \sim \pi(.| D_0),$$ $$\theta_3 \sim \pi(.| D_0),$$ $$....,$$ $$\theta_n \sim \pi(.| D_0).$$
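In code, these posterior draws can be pulled out of the fitted object. A minimal sketch, assuming a fitted stanfitExtended object fit; since stanfitExtended inherits from stanfit, rstan's extractor applies:

```r
# Sketch: theta_1, ..., theta_n are the rows of the extracted draws.
# Each element of `draws` has one row per post-warmup MCMC iteration.
draws <- rstan::extract(fit)
n <- length(draws$lp__)  # lp__ is always present; n = number of draws
```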

We let \(L(.|\theta)\) be a likelihood function, or probability law of the data, which is also denoted by \(L(y|\theta)\) for given data \(y\). Since specifying the data \(y\) is somewhat cumbersome, we write \(L(.|\theta)\) for the function sending each \(y\) to \(L(y|\theta)\).

Now, we synthesize data samples \(y_i, i=1,2,...,n\), drawing only once from each of the likelihoods \(L(.|\theta_1), L(.|\theta_2),..., L(.|\theta_n)\):

$$y_1 \sim L(.| \theta_1),$$ $$y_2 \sim L(.| \theta_2),$$ $$y_3 \sim L(.| \theta_3),$$ $$....,$$ $$y_n \sim L(.| \theta_n).$$

Altogether, using these pairs of samples \((y_i, \theta_i), i=1,2,...,n\), we calculate the chi squares, which form the return value of this function. That is,

$$\chi(y_1|\theta_1),$$ $$\chi(y_2|\theta_2),$$ $$\chi(y_3|\theta_3),$$ $$....,$$ $$\chi(y_n|\theta_n).$$

These are contained as a vector in the return value,

so the return value is a vector whose length equals the number of MCMC iterations, excluding the burn-in period.

Note that in MRMC cases, \(\chi(y|\theta)\) is defined as follows:

$$\chi^2(y|\theta) := \sum_{r=1}^R \sum_{m=1}^M \sum_{c=1}^C \biggl( \frac{[ H_{c,m,r}-N_L\times p_{c,m,r}(\theta)]^2}{N_L\times p_{c,m,r}(\theta)}+\frac{[F_{c,m,r}-(\lambda _{c}(\theta) -\lambda _{c+1}(\theta) )\times N_{L}]^2}{(\lambda_{c}(\theta) -\lambda_{c+1}(\theta) )\times N_{L} }\biggr),$$ where a dataset \(y\) consists of the pairs of the numbers of false positives and true positives \( (F_{c,m,r}, H_{c,m,r}) \), together with the number of lesions \(N_L\) and the number of images \(N_I\), and \(\theta\) denotes the model parameter.
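The formula above translates directly into R. A sketch under the assumption that H and F are C x M x R arrays of hits and false positives, p is the corresponding array of hit probabilities for one MCMC draw \(\theta\), and lambda is a vector of C+1 decreasing thresholds; all names here are illustrative, not the package's internals:

```r
# Illustrative implementation of chi^2(y | theta) for MRMC data,
# summing over readers r, modalities m, and confidence levels c.
chi_square_MRMC <- function(H, F, p, lambda, NL) {
  C <- dim(H)[1]; M <- dim(H)[2]; R <- dim(H)[3]
  total <- 0
  for (r in 1:R) for (m in 1:M) for (c in 1:C) {
    expected_hits <- NL * p[c, m, r]                   # N_L * p_{c,m,r}(theta)
    expected_FPs  <- (lambda[c] - lambda[c + 1]) * NL  # (lambda_c - lambda_{c+1}) * N_L
    total <- total +
      (H[c, m, r] - expected_hits)^2 / expected_hits +
      (F[c, m, r] - expected_FPs)^2  / expected_FPs
  }
  total
}
```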

This return value is applied to calculate the so-called posterior predictive p value.

As will be demonstrated in the other function, by changing the seed, we can obtain

$$y_{1,1},y_{1,2},y_{1,3},...,y_{1,j},....,y_{1,J} \sim L ( . |\theta_1), $$ $$y_{2,1},y_{2,2},y_{2,3},...,y_{2,j},....,y_{2,J} \sim L ( . |\theta_2), $$ $$y_{3,1},y_{3,2},y_{3,3},...,y_{3,j},....,y_{3,J} \sim L ( . |\theta_3), $$ $$...,$$ $$y_{i,1},y_{i,2},y_{i,3},...,y_{i,j},....,y_{i,J} \sim L ( . |\theta_i), $$ $$...,$$ $$y_{I,1},y_{I,2},y_{I,3},...,y_{I,j},....,y_{I,J} \sim L ( . |\theta_I). $$

where \(L ( . |\theta_i)\) is the likelihood function for a model parameter \(\theta_i\). From these, we calculate the chi square statistics:

$$ \chi(y_{1,1}|\theta_1), \chi(y_{1,2}|\theta_1), \chi(y_{1,3}|\theta_1),..., \chi(y_{1,j}|\theta_1),...., \chi(y_{1,J}|\theta_1),$$ $$ \chi(y_{2,1}|\theta_2), \chi(y_{2,2}|\theta_2), \chi(y_{2,3}|\theta_2),..., \chi(y_{2,j}|\theta_2),...., \chi(y_{2,J}|\theta_2),$$ $$ \chi(y_{3,1}|\theta_3), \chi(y_{3,2}|\theta_3), \chi(y_{3,3}|\theta_3),..., \chi(y_{3,j}|\theta_3),...., \chi(y_{3,J}|\theta_3),$$ $$...,$$ $$ \chi(y_{i,1}|\theta_i), \chi(y_{i,2}|\theta_i), \chi(y_{i,3}|\theta_i),..., \chi(y_{i,j}|\theta_i),...., \chi(y_{i,J}|\theta_i),$$ $$...,$$ $$ \chi(y_{I,1}|\theta_I), \chi(y_{I,2}|\theta_I), \chi(y_{I,3}|\theta_I),..., \chi(y_{I,j}|\theta_I),...., \chi(y_{I,J}|\theta_I).$$

which are used to calculate the so-called posterior predictive p value, testing the null hypothesis that our model fits the data well.
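These chi squares then yield the posterior predictive p value in the usual way. A generic sketch; the variable names are assumptions, not this package's API:

```r
# chi_rep[i] : chi(y_i | theta_i) at replicated data (this function's output)
# chi_obs[i] : chi(y_obs | theta_i) at the observed data
# The p value is the fraction of draws in which the replicated data
# fit worse than the observed data; values near 0 or 1 signal misfit.
ppp <- mean(chi_rep >= chi_obs)
```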

Revised 2019 Sept. 8

Revised 2019 Dec. 2

Revised 2020 March

Revised 2020 Jul

Examples

# NOT RUN {
fit <- fit_Bayesian_FROC(ite = 1111, dataList = ddd)
a <- chi_square_at_replicated_data_and_MCMC_samples_MRMC(fit)

b <- a$List_of_dataList
lapply(b, plot_FPF_and_TPF_from_a_dataset)
# }
