The bmscstan package provides functions to fit Bayesian Multilevel Single Case (BMSC) models, using Stan (Carpenter et al. 2017) as the backend.
This approach builds on the seminal Crawford tests (Crawford and Howell 1998; Crawford and Garthwaite 2005; Crawford et al. 2010), which use a small control sample of individuals to assess whether the performance of a single case deviates from it. Unfortunately, Crawford's tests are limited to a small set of specific experimental designs, and do not allow researchers to use more complex ones.
The BMSC approach was developed mainly to address this limitation: its purpose is to allow fitting models to single-case and control data with the same flexibility as a multilevel model.
The core function of the bmscstan package is BMSC, whose theoretical assumptions and validation are reported in Scandola and Romano (2020).
The syntax used by the BMSC function is very similar to that of the lme4 package. The specification of random effects is more limited, but it covers the great majority of cases (for further details, see ?bmscstan::randomeffects).
To show an example of the use of the bmscstan package, the datasets shipped with the package will be used.
These datasets come from a Body Sidedness Effect paradigm (Ottoboni et al. 2005; Tessari et al. 2012), a Simon-like paradigm used to measure body representation.
In this paradigm, participants respond to a circle shown in the centre of the computer screen, superimposed on an irrelevant image of a left or right hand, or of a left or right foot.
The circle can be one of two colours (e.g. red or blue), and participants press a button with the left hand when the circle is one colour, and with the right hand when it is the other colour.
When the irrelevant background image (foot or hand) is incongruent with the hand used to respond, reaction times and error rates are higher.
The two irrelevant backgrounds are administered in different experimental blocks.
This congruency effect is considered an index of body representation.
The package contains two datasets: one composed of 16 healthy control participants, and the other of an individual affected by a right unilateral brachial plexus lesion (who could nonetheless press the keyboard buttons independently).
The datasets are called data.pt for the single case and data.ctrl for the control group, and they can be loaded using data(BSE).
These datasets contain the reaction times RT, a Body.District factor with levels FOOT and HAND, a Congruency factor (levels: Congruent, Incongruent), and a Side factor (levels: Left, Right). The data.ctrl dataset also contains an ID factor identifying the 16 control participants.
library(bmscstan)
data(BSE)
str(data.pt)
#> 'data.frame': 467 obs. of 4 variables:
#> $ RT : int 562 424 411 491 439 593 504 483 467 413 ...
#> $ Body.District: Factor w/ 2 levels "FOOT","HAND": 1 1 1 1 1 1 1 1 1 1 ...
#> $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 1 2 2 1 1 2 2 1 1 2 ...
#> $ Side : Factor w/ 2 levels "Left","Right": 1 2 1 2 1 1 2 1 2 2 ...
str(data.ctrl)
#> 'data.frame': 4049 obs. of 5 variables:
#> $ RT : int 785 641 938 841 486 425 408 394 611 387 ...
#> $ Body.District: Factor w/ 2 levels "FOOT","HAND": 1 1 1 1 1 1 1 1 1 1 ...
#> $ Congruency : Factor w/ 2 levels "Congruent","Incongruent": 2 2 2 2 2 2 1 1 1 1 ...
#> $ Side : Factor w/ 2 levels "Left","Right": 1 1 1 1 2 1 1 1 2 2 ...
#> $ ID : Factor w/ 16 levels "HN_017","HN_019",..: 1 1 1 1 1 1 1 1 1 1 ...
library(ggplot2)

ggplot(data.pt, aes(y = RT, x = Body.District:Side, fill = Congruency)) +
  geom_boxplot()

ggplot(data.ctrl, aes(y = RT, x = Body.District:Side, fill = Congruency)) +
  geom_boxplot() +
  facet_wrap( ~ ID, ncol = 4)
These data seem to have some outliers. Let us see whether they are normally distributed.
qqnorm(data.ctrl$RT, main = "Controls")
qqline(data.ctrl$RT)
qqnorm(data.pt$RT, main = "Single Case")
qqline(data.pt$RT)
They are not normally distributed. Outliers will be removed using the boxplot.stats function.
out <- boxplot.stats( data.ctrl$RT )$out
data.ctrl <- droplevels( data.ctrl[ !data.ctrl$RT %in% out , ] )
out <- boxplot.stats( data.pt$RT )$out
data.pt <- droplevels( data.pt[ !data.pt$RT %in% out , ] )
qqnorm(data.ctrl$RT, main = "Controls")
qqline(data.ctrl$RT)
qqnorm(data.pt$RT, main = "Single Case")
qqline(data.pt$RT)
They are not perfect, but definitely better.
First of all, we need to think about our hypotheses and set the contrast matrices accordingly.
In all cases, our factors have only two levels. Therefore, we set the factors with treatment contrasts, with Left as the baseline level for Side, Congruent for Congruency, and FOOT for Body.District.
In this way, each coefficient will represent the difference between the two levels.
contrasts( data.ctrl$Side ) <- contr.treatment( n = 2 )
contrasts( data.ctrl$Congruency ) <- contr.treatment( n = 2 )
contrasts( data.ctrl$Body.District ) <- contr.treatment( n = 2 )
contrasts( data.pt$Side ) <- contr.treatment( n = 2 )
contrasts( data.pt$Congruency ) <- contr.treatment( n = 2 )
contrasts( data.pt$Body.District ) <- contr.treatment( n = 2 )
The use of the BMSC function should be straightforward for those who are used to lme4 or brms syntax.
In this case, we want to fit the following model:
RT ~ Body.District * Congruency * Side + (Congruency * Side | ID / Body.District)
Unfortunately, BMSC does not directly allow the syntax ID / Body.District in the specification of the random effects.
However, since (1 | ID / Body.District) is shorthand for (1 | ID : Body.District) + (1 | ID), it is possible to work around this limitation.
It is necessary to create a new variable for the ID : Body.District interaction, after which the model becomes:
RT ~ Body.District * Congruency * Side + (Congruency * Side | BD_ID) + (Congruency * Side | ID)
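As a minimal sketch, the grouping variable can be built with base R's interaction() helper (the name BD_ID matches the formula above; we assume here that the grouping factor is only needed in the control data, where the random effects are estimated):

```r
# Build the ID-by-Body.District grouping factor in the control data,
# dropping unused level combinations.
data.ctrl$BD_ID <- interaction(data.ctrl$ID, data.ctrl$Body.District,
                               drop = TRUE)
```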
At this point, fitting the model is easy, and it can be done with a single function call.
mdl <- BMSC(formula = RT ~ Body.District * Congruency * Side +
(Congruency * Side | ID) + (Congruency * Side | BD_ID),
data_ctrl = data.ctrl,
data_sc = data.pt,
cores = 1,
seed = 2020)
#>
#> SAMPLING FOR MODEL '39eb131cad8af045d5282c45702e06e7' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.002221 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 22.21 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 4000 [ 0%] (Warmup)
#> Chain 1: Iteration: 400 / 4000 [ 10%] (Warmup)
#> Chain 1: Iteration: 800 / 4000 [ 20%] (Warmup)
#> Chain 1: Iteration: 1200 / 4000 [ 30%] (Warmup)
#> Chain 1: Iteration: 1600 / 4000 [ 40%] (Warmup)
#> Chain 1: Iteration: 2000 / 4000 [ 50%] (Warmup)
#> Chain 1: Iteration: 2001 / 4000 [ 50%] (Sampling)
#> Chain 1: Iteration: 2400 / 4000 [ 60%] (Sampling)
#> Chain 1: Iteration: 2800 / 4000 [ 70%] (Sampling)
#> Chain 1: Iteration: 3200 / 4000 [ 80%] (Sampling)
#> Chain 1: Iteration: 3600 / 4000 [ 90%] (Sampling)
#> Chain 1: Iteration: 4000 / 4000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 641.02 seconds (Warm-up)
#> Chain 1: 806.121 seconds (Sampling)
#> Chain 1: 1447.14 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL '39eb131cad8af045d5282c45702e06e7' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 0.001685 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 16.85 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration: 1 / 4000 [ 0%] (Warmup)
#> Chain 2: Iteration: 400 / 4000 [ 10%] (Warmup)
#> Chain 2: Iteration: 800 / 4000 [ 20%] (Warmup)
#> Chain 2: Iteration: 1200 / 4000 [ 30%] (Warmup)
#> Chain 2: Iteration: 1600 / 4000 [ 40%] (Warmup)
#> Chain 2: Iteration: 2000 / 4000 [ 50%] (Warmup)
#> Chain 2: Iteration: 2001 / 4000 [ 50%] (Sampling)
#> Chain 2: Iteration: 2400 / 4000 [ 60%] (Sampling)
#> Chain 2: Iteration: 2800 / 4000 [ 70%] (Sampling)
#> Chain 2: Iteration: 3200 / 4000 [ 80%] (Sampling)
#> Chain 2: Iteration: 3600 / 4000 [ 90%] (Sampling)
#> Chain 2: Iteration: 4000 / 4000 [100%] (Sampling)
#> Chain 2:
#> Chain 2: Elapsed Time: 614.535 seconds (Warm-up)
#> Chain 2: 799.023 seconds (Sampling)
#> Chain 2: 1413.56 seconds (Total)
#> Chain 2:
#>
#> SAMPLING FOR MODEL '39eb131cad8af045d5282c45702e06e7' NOW (CHAIN 3).
#> Chain 3:
#> Chain 3: Gradient evaluation took 0.001697 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 16.97 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3:
#> Chain 3:
#> Chain 3: Iteration: 1 / 4000 [ 0%] (Warmup)
#> Chain 3: Iteration: 400 / 4000 [ 10%] (Warmup)
#> Chain 3: Iteration: 800 / 4000 [ 20%] (Warmup)
#> Chain 3: Iteration: 1200 / 4000 [ 30%] (Warmup)
#> Chain 3: Iteration: 1600 / 4000 [ 40%] (Warmup)
#> Chain 3: Iteration: 2000 / 4000 [ 50%] (Warmup)
#> Chain 3: Iteration: 2001 / 4000 [ 50%] (Sampling)
#> Chain 3: Iteration: 2400 / 4000 [ 60%] (Sampling)
#> Chain 3: Iteration: 2800 / 4000 [ 70%] (Sampling)
#> Chain 3: Iteration: 3200 / 4000 [ 80%] (Sampling)
#> Chain 3: Iteration: 3600 / 4000 [ 90%] (Sampling)
#> Chain 3: Iteration: 4000 / 4000 [100%] (Sampling)
#> Chain 3:
#> Chain 3: Elapsed Time: 612.437 seconds (Warm-up)
#> Chain 3: 803.458 seconds (Sampling)
#> Chain 3: 1415.9 seconds (Total)
#> Chain 3:
#>
#> SAMPLING FOR MODEL '39eb131cad8af045d5282c45702e06e7' NOW (CHAIN 4).
#> Chain 4:
#> Chain 4: Gradient evaluation took 0.001682 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 16.82 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4:
#> Chain 4:
#> Chain 4: Iteration: 1 / 4000 [ 0%] (Warmup)
#> Chain 4: Iteration: 400 / 4000 [ 10%] (Warmup)
#> Chain 4: Iteration: 800 / 4000 [ 20%] (Warmup)
#> Chain 4: Iteration: 1200 / 4000 [ 30%] (Warmup)
#> Chain 4: Iteration: 1600 / 4000 [ 40%] (Warmup)
#> Chain 4: Iteration: 2000 / 4000 [ 50%] (Warmup)
#> Chain 4: Iteration: 2001 / 4000 [ 50%] (Sampling)
#> Chain 4: Iteration: 2400 / 4000 [ 60%] (Sampling)
#> Chain 4: Iteration: 2800 / 4000 [ 70%] (Sampling)
#> Chain 4: Iteration: 3200 / 4000 [ 80%] (Sampling)
#> Chain 4: Iteration: 3600 / 4000 [ 90%] (Sampling)
#> Chain 4: Iteration: 4000 / 4000 [100%] (Sampling)
#> Chain 4:
#> Chain 4: Elapsed Time: 673.196 seconds (Warm-up)
#> Chain 4: 800.126 seconds (Sampling)
#> Chain 4: 1473.32 seconds (Total)
#> Chain 4:
After fitting the model, we should check its quality by means of Posterior Predictive P-values (Gelman 2013), using the bmscstan::pp_check function.
This graphical check shows whether the observed data and the data sampled from the posterior distributions of our BMSC model are similar.
If we observe strong deviations, the BMSC model is not adequately describing the data. In that case, you might want to change the prior distributions (see the help page), change the random-effects structure, or transform the dependent variable (using the logarithm or another transformation).
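The check requires only a single call on the fitted model (the resulting plots are omitted here):

```r
# Posterior predictive check for both controls and the single case.
pp_check( mdl )
```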
#> TableGrob (2 x 1) "arrange": 2 grobs
#> z cells name grob
#> 1 1 (1-1,1-1) arrange gtable[layout]
#> 2 2 (2-2,1-1) arrange gtable[layout]
In both the controls and the single-case data, the posterior predictive check shows that the model adequately reproduces the observed data.
A further control on our model is given by checking the Effective Sample Size (ESS) for each coefficient and the \(\hat{R}\) diagnostic index (Gelman and Rubin 1992).
The ESS is the “effective number of simulation draws” for a coefficient, namely the approximate number of independent draws, taking into account that the draws in a Markov chain Monte Carlo (MCMC) simulation are not independent of each other. For further details, see an introductory book on Bayesian statistics. A good ESS should satisfy \(ESS > 100\) or \(ESS > 10\%\) of the total draws (remembering that warm-up iterations should be excluded from the count).
The \(\hat{R}\) is an index of the convergence of the MCMC chains (in BMSC, the default number of chains is 4). Usually, chains are considered convergent when \(\hat{R} < 1.1\) (the Stan default threshold).
To check these values, the summary.BMSC function is needed (see the next section).
summary.BMSC output

The output of the bmscstan::summary.BMSC function is divided into four main parts:
print( summary( mdl ) , digits = 3 )
#>
#> Bayesian Multilevel Single Case model
#>
#> RT ~ Body.District * Congruency * Side + (Congruency * Side |
#> ID) + (Congruency * Side | BD_ID)
#>
#> [1] "Priors for the regression coefficients: normal distribution; Dispersion parameter (scale or sigma): 10"
#>
#>
#> Fixed Effects for the Control Group
#>
#> mean se_mean sd 2.5% 25% 50%
#> (Intercept) 199.88 0.0670 7.79 184.55 194.52 199.90
#> Body.District2 17.20 0.0618 5.80 6.22 13.14 17.11
#> Congruency2 16.10 0.0844 6.75 4.08 11.33 15.72
#> Side2 19.11 0.0912 7.50 5.01 13.88 18.95
#> Body.District2:Congruency2 -10.03 0.0422 4.99 -19.84 -13.38 -9.97
#> Body.District2:Side2 -4.87 0.0448 4.85 -14.37 -8.15 -4.92
#> Congruency2:Side2 -7.22 0.0805 7.76 -22.65 -12.40 -7.13
#> Body.District2:Congruency2:Side2 6.42 0.0614 6.25 -5.99 2.26 6.53
#> 75% 97.5% n_eff Rhat BF10
#> (Intercept) 205.20 215.072 13519 1 1.58e+49
#> Body.District2 21.15 28.931 8823 1 54.8
#> Congruency2 20.53 30.272 6382 1 21.2
#> Side2 24.30 33.919 6763 1 31.9
#> Body.District2:Congruency2 -6.65 -0.375 13994 1 3.81
#> Body.District2:Side2 -1.54 4.553 11759 1 0.796
#> Congruency2:Side2 -1.90 7.653 9310 1 1.17
#> Body.District2:Congruency2:Side2 10.69 18.550 10362 1 1.13
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> sigmaC 75.7 0.00662 0.882 74 75.1 75.7 76.3 77.5 17715 1
#>
#>
#> Fixed Effects for the Patient
#>
#> mean se_mean sd 2.5% 25% 50% 75%
#> (Intercept) 394.8 0.0715 6.39 382.1 390.5 394.9 399.27
#> Body.District2 40.5 0.0762 6.82 27.2 35.8 40.5 45.21
#> Congruency2 47.1 0.0804 7.19 33.5 42.2 47.0 52.01
#> Side2 50.7 0.0790 7.07 36.9 45.8 50.7 55.54
#> Body.District2:Congruency2 -26.9 0.0928 8.30 -43.1 -32.6 -26.9 -21.17
#> Body.District2:Side2 -21.5 0.0908 8.12 -37.4 -27.0 -21.5 -16.06
#> Congruency2:Side2 -14.2 0.0989 8.84 -31.7 -20.2 -14.3 -8.28
#> Body.District2:Congruency2:Side2 -10.4 0.1096 9.80 -29.8 -16.9 -10.4 -3.81
#> 97.5% BF10
#> (Intercept) 407.19 1.65e+70
#> Body.District2 53.83 455825
#> Congruency2 61.42 2859122
#> Side2 64.68 2068424
#> Body.District2:Congruency2 -10.42 145
#> Body.District2:Side2 -5.65 25.4
#> Congruency2:Side2 2.91 3.3
#> Body.District2:Congruency2:Side2 8.79 1.71
#>
#>
#> Fixed Effects for the difference between the Patient and the Control Group
#>
#> mean se_mean sd 2.5% 25% 50% 75%
#> (Intercept) 194.97 0.0630 7.83 179.54 189.6 195.07 200.37
#> Body.District2 23.29 0.0628 6.95 9.66 18.6 23.39 28.03
#> Congruency2 31.04 0.0698 7.41 16.30 26.2 31.01 35.99
#> Side2 31.60 0.0754 7.75 16.11 26.4 31.67 37.07
#> Body.District2:Congruency2 -16.82 0.0625 7.69 -31.99 -22.0 -16.84 -11.69
#> Body.District2:Side2 -16.61 0.0598 7.59 -31.49 -21.8 -16.68 -11.37
#> Congruency2:Side2 -7.02 0.0698 7.99 -22.67 -12.4 -6.99 -1.68
#> Body.District2:Congruency2:Side2 -16.85 0.0624 8.51 -33.74 -22.4 -16.91 -11.10
#> 97.5% n_eff Rhat BF10
#> (Intercept) 210.071 15448 1 Inf
#> Body.District2 36.945 12250 1 134
#> Congruency2 45.651 11283 1 1275
#> Side2 46.469 10561 1 800
#> Body.District2:Congruency2 -1.750 15118 1 8.47
#> Body.District2:Side2 -1.474 16132 1 7.31
#> Congruency2:Side2 8.566 13132 1 1.15
#> Body.District2:Congruency2:Side2 -0.122 18626 1 6.37
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> sigmaP 64.3 0.0203 2.65 59.3 62.5 64.2 66 69.7 17052 1
In the second and fourth parts of the output, we can observe a descriptive summary reporting the mean, the standard error, the standard deviation, and the \(2.5\%\), \(25\%\), \(50\%\), \(75\%\) and \(97.5\%\) percentiles of the posterior distribution of each coefficient. If we want the \(95\%\) Credible Interval, we can consider just the \(2.5\%\) and \(97.5\%\) extremes. Two diagnostic indexes are then reported: the n_eff parameter, which is the ESS, and the Rhat (\(\hat{R}\)). Finally, the Savage-Dickey Bayes Factor (BF10) is reported.
In the third part the diagnostic indexes are not reported, because these coefficients are derived from the posterior distributions summarized in the second and fourth parts.
summary.BMSC output

The first step should be to check the diagnostic indexes.
In this model, all n_eff values are greater than \(10\%\) of the total post-warmup draws (with the defaults of 4000 iterations per chain, 2000 of them warm-up, and 4 chains, this threshold is \((4000 - 2000) \times 4 \times 10\% = 800\)). Also, all \(\hat{R} < 1.1\). Finally, we already saw from the Posterior Predictive P-values that the model is representative of the data.
Then, before looking at the differences with the single case, it is important to observe what the fixed effects of the control group show.
In this analysis, there are 5 fixed effects whose \(BF_{10}\) is greater than 3 (Raftery 1995).
We can get a general overview of the coefficients of the model with the plot.BMSC function.
The interaction between Body District and Congruency needs further analysis to better understand the phenomenon. The pairwise.BMSC function comes in useful here.
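A minimal call dispatches to plot.BMSC through the generic plot (the figure is omitted here; a themed version appears later in the vignette):

```r
# Overview plot of all model coefficients.
plot( mdl )
```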
pp <- pairwise.BMSC(mdl , contrast = "Body.District2:Congruency2" , who = "control")
print( pp , digits = 3 )
#>
#> Pairwise Bayesian Multilevel Single Case contrasts of coefficients divided by Body.District2:Congruency2
#>
#>
#>
#> Marginal distributions
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% BF10 (not zero)
#> FOOT Incongruent 216 0.109 9.77 197 209 216 222 236 2.87e+32
#> FOOT Congruent 200 0.087 7.79 185 195 200 205 215 1.58e+49
#> HAND Incongruent 223 0.126 11.29 202 215 223 231 246 1.47e+25
#> HAND Congruent 217 0.104 9.31 199 211 217 224 235 5.05e+30
#>
#>
#>
#> Table of contrasts
#>
#> mean se_mean sd 2.5% 25% 50%
#> FOOT Incongruent - FOOT Congruent 16.10 0.0754 6.75 4.08 11.33 15.72
#> FOOT Incongruent - HAND Incongruent -7.16 0.0738 6.60 -20.06 -11.67 -7.12
#> FOOT Incongruent - HAND Congruent -1.10 0.0972 8.70 -17.63 -7.14 -1.14
#> FOOT Congruent - HAND Incongruent -23.27 0.1014 9.07 -41.72 -29.23 -22.89
#> FOOT Congruent - HAND Congruent -17.20 0.0649 5.80 -28.93 -21.15 -17.11
#> HAND Incongruent - HAND Congruent 6.07 0.0867 7.76 -7.95 0.85 5.60
#> 75% 97.5% BF10
#> FOOT Incongruent - FOOT Congruent 20.53 30.27 21.239
#> FOOT Incongruent - HAND Incongruent -2.62 5.64 1.171
#> FOOT Incongruent - HAND Congruent 4.54 16.44 0.889
#> FOOT Congruent - HAND Incongruent -16.89 -6.35 39.246
#> FOOT Congruent - HAND Congruent -13.14 -6.22 61.521
#> HAND Incongruent - HAND Congruent 11.01 22.25 0.963
The output of this function is divided into two parts:
It is also possible to plot the results of this function using plot.pairwise.BMSC.
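Since pp is a pairwise.BMSC object, the generic plot call dispatches to plot.pairwise.BMSC and returns a list of plots (the figures are omitted here):

```r
# Plot marginal effects and contrasts for the pairwise comparison object.
plot( pp )
```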
#>
#> [[2]]
Finally, it is possible to plot the marginal posterior distributions for each effect with \(BF_{10} > 3\).
p1 <- pairwise.BMSC(mdl , contrast = "Body.District2" , who = "control" )
plot( p1 )[[1]] +
ggtitle("Body District" , subtitle = "Marginal effects")
plot( p1 )[[2]] +
ggtitle("Body District" , subtitle = "Contrasts")
p2 <- pairwise.BMSC(mdl , contrast = "Congruency2" , who = "control" )
plot( p2 )[[1]] +
ggtitle("Congruency" , subtitle = "Marginal effects")
plot( p2 )[[2]] +
ggtitle("Congruency" , subtitle = "Contrasts")
p3 <- pairwise.BMSC(mdl , contrast = "Side2" , who = "control" )
plot( p3 )[[1]] +
ggtitle("Side" , subtitle = "Marginal effects")
plot( p3 )[[2]] +
ggtitle("Side" , subtitle = "Contrasts")
Finally, the difference between the Control Group and the Single Case is of interest.
A general plot, showing both the Control Group and the Single Case, can be obtained in the following way:
plot( mdl ) +
theme_bw( base_size = 18 )+
theme( legend.position = "bottom",
legend.direction = "horizontal")
or by plotting only the difference.
The relevant coefficients are:
The Intercept coefficient shows that the single case is generally slower than the Control Group (generally speaking, when you analyse healthy controls against a single case with a specific disease, the single case is slower).
All the main effects can be further analysed simply by looking at their estimates (knowing the contrast matrix and the direction of the estimate, you can tell which level is greater than the other), or by means of the pairwise.BMSC function, if you also want marginal effects and automatic plots.
The interactions require the use of the pairwise.BMSC function.
p4 <- pairwise.BMSC(mdl , contrast = "Body.District2:Congruency2" , who = "delta")
print( p4 , digits = 3 )
#>
#> Pairwise Bayesian Multilevel Single Case contrasts of coefficients divided by Body.District2:Congruency2
#>
#>
#>
#> Marginal distributions
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% BF10 (not zero)
#> FOOT Congruent 195 0.0875 7.83 180 190 195 200 210 Inf
#> FOOT Incongruent 226 0.1137 10.17 206 219 226 233 246 2.65e+23
#> HAND Incongruent 216 0.1556 13.92 188 206 216 225 242 Inf
#> HAND Congruent 218 0.1085 9.70 199 212 219 225 237 1.02e+27
#>
#>
#>
#> Table of contrasts
#>
#> mean se_mean sd 2.5% 25% 50%
#> FOOT Congruent - FOOT Incongruent -31.04 0.0829 7.41 -45.7 -35.990 -31.01
#> FOOT Congruent - HAND Incongruent -20.66 0.1398 12.51 -45.1 -29.193 -20.67
#> FOOT Congruent - HAND Congruent -23.29 0.0777 6.95 -36.9 -28.034 -23.39
#> FOOT Incongruent - HAND Incongruent 10.38 0.1244 11.12 -11.5 3.016 10.43
#> FOOT Incongruent - HAND Congruent 7.75 0.1113 9.95 -11.6 0.939 7.69
#> HAND Incongruent - HAND Congruent -2.63 0.1296 11.59 -25.5 -10.518 -2.57
#> 75% 97.5% BF10
#> FOOT Congruent - FOOT Incongruent -26.20 -16.30 621.74
#> FOOT Congruent - HAND Incongruent -12.23 3.76 4.94
#> FOOT Congruent - HAND Congruent -18.56 -9.66 174.55
#> FOOT Incongruent - HAND Incongruent 18.02 31.76 1.70
#> FOOT Incongruent - HAND Congruent 14.35 27.79 1.30
#> HAND Incongruent - HAND Congruent 5.17 20.46 1.17
The pairwise.BMSC function shows that in all cases the marginal effects of the RTs were greater than zero, but differences were present only in the comparisons between FOOT Congruent and the other conditions.
plot( p4 , type = "interval")
#> [[1]]
#>
#> [[2]]
plot( p4 , type = "area")
#> [[1]]
#>
#> [[2]]
plot( p4 , type = "hist")
#> [[1]]
#>
#> [[2]]
In this case we can observe that the single case was more facilitated by the FOOT Congruent condition than the Control Group was.
When the interpretation of the results is difficult, it can be useful to look at what happens in the Single Case marginal effects.
p6 <- pairwise.BMSC(mdl , contrast = "Body.District2:Side2" , who = "delta")
print( p6 , digits = 3 )
#>
#> Pairwise Bayesian Multilevel Single Case contrasts of coefficients divided by Body.District2:Side2
#>
#>
#>
#> Marginal distributions
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% BF10 (not zero)
#> FOOT Left 195 0.0875 7.83 180 190 195 200 210 Inf
#> FOOT Right 227 0.1166 10.43 206 219 226 234 247 8.94e+28
#> HAND Left 218 0.1085 9.70 199 212 219 225 237 1.02e+27
#> HAND Right 233 0.1428 12.77 208 225 233 242 258 3.25e+23
#>
#>
#>
#> Table of contrasts
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5%
#> FOOT Left - FOOT Right -31.60 0.0867 7.75 -46.5 -37.07 -31.67 -26.426 -16.11
#> FOOT Left - HAND Left -23.29 0.0777 6.95 -36.9 -28.03 -23.39 -18.563 -9.66
#> FOOT Left - HAND Right -38.29 0.1268 11.34 -60.6 -46.01 -38.35 -30.568 -16.24
#> FOOT Right - HAND Left 8.31 0.1122 10.03 -11.4 1.55 8.23 15.047 28.04
#> FOOT Right - HAND Right -6.69 0.1013 9.06 -24.4 -12.83 -6.80 -0.482 11.05
#> HAND Left - HAND Right -14.99 0.1098 9.82 -34.1 -21.60 -15.02 -8.399 4.50
#> BF10
#> FOOT Left - FOOT Right 737.96
#> FOOT Left - HAND Left 174.55
#> FOOT Left - HAND Right 260.44
#> FOOT Right - HAND Left 1.42
#> FOOT Right - HAND Right 1.21
#> HAND Left - HAND Right 3.13
plot( p6 , type = "hist")[[1]] +
theme_bw( base_size = 18)+
theme( strip.text.y = element_text( angle = 0 ) )
In this case, we can see that the left-right difference in the single case is always present, with faster RTs for the left foot than in the other conditions.
p7 <- pairwise.BMSC(mdl ,
contrast = "Body.District2:Congruency2:Side2" ,
who = "delta")
print( p7 , digits = 3 )
#>
#> Pairwise Bayesian Multilevel Single Case contrasts of coefficients divided by Body.District2:Congruency2:Side2
#>
#>
#>
#> Marginal distributions
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5%
#> FOOT Congruent Left 195 0.0875 7.83 180 190 195 200 210
#> FOOT Incongruent Right 251 0.1516 13.56 224 241 251 260 276
#> FOOT Incongruent Left 226 0.1137 10.17 206 219 226 233 246
#> FOOT Congruent Right 227 0.1166 10.43 206 219 226 234 247
#> HAND Incongruent Left 232 0.1365 12.21 208 224 233 241 256
#> HAND Congruent Right 233 0.1428 12.77 208 225 233 242 258
#> HAND Congruent Left 218 0.1085 9.70 199 212 219 225 237
#> HAND Incongruent Right 224 0.1778 15.90 192 213 224 234 255
#> BF10 (not zero)
#> FOOT Congruent Left Inf
#> FOOT Incongruent Right 7.69e+34
#> FOOT Incongruent Left 2.65e+23
#> FOOT Congruent Right 8.94e+28
#> HAND Incongruent Left 5.83e+18
#> HAND Congruent Right 3.25e+23
#> HAND Congruent Left 1.02e+27
#> HAND Incongruent Right 5.39e+12
#>
#>
#>
#> Table of contrasts
#>
#> mean se_mean sd 2.5%
#> FOOT Congruent Left - FOOT Incongruent Right -55.621 0.1353 12.10 -79.08
#> FOOT Congruent Left - FOOT Incongruent Left -31.038 0.0829 7.41 -45.65
#> FOOT Congruent Left - FOOT Congruent Right -31.599 0.0867 7.75 -46.47
#> FOOT Congruent Left - HAND Incongruent Left -37.507 0.1206 10.79 -58.08
#> FOOT Congruent Left - HAND Congruent Right -38.285 0.1268 11.34 -60.64
#> FOOT Congruent Left - HAND Congruent Left -23.292 0.0777 6.95 -36.95
#> FOOT Congruent Left - HAND Incongruent Right -28.635 0.1678 15.01 -57.56
#> FOOT Incongruent Right - FOOT Incongruent Left 24.583 0.1158 10.35 3.98
#> FOOT Incongruent Right - FOOT Congruent Right 24.021 0.1111 9.93 4.58
#> FOOT Incongruent Right - HAND Incongruent Left 18.114 0.1465 13.10 -8.22
#> FOOT Incongruent Right - HAND Congruent Right 17.335 0.1436 12.84 -8.30
#> FOOT Incongruent Right - HAND Congruent Left 32.329 0.1491 13.34 6.01
#> FOOT Incongruent Right - HAND Incongruent Right 26.986 0.1230 11.00 5.79
#> FOOT Incongruent Left - FOOT Congruent Right -0.562 0.1176 10.52 -20.61
#> FOOT Incongruent Left - HAND Incongruent Left -6.469 0.1015 9.08 -24.50
#> FOOT Incongruent Left - HAND Congruent Right -7.247 0.1451 12.98 -32.89
#> FOOT Incongruent Left - HAND Congruent Left 7.746 0.1113 9.95 -11.56
#> FOOT Incongruent Left - HAND Incongruent Right 2.403 0.1588 14.21 -25.33
#> FOOT Congruent Right - HAND Incongruent Left -5.907 0.1407 12.58 -30.85
#> FOOT Congruent Right - HAND Congruent Right -6.686 0.1013 9.06 -24.37
#> FOOT Congruent Right - HAND Congruent Left 8.308 0.1122 10.03 -11.39
#> FOOT Congruent Right - HAND Incongruent Right 2.965 0.1518 13.58 -23.43
#> HAND Incongruent Left - HAND Congruent Right -0.779 0.1473 13.17 -26.66
#> HAND Incongruent Left - HAND Congruent Left 14.215 0.1054 9.42 -4.34
#> HAND Incongruent Left - HAND Incongruent Right 8.872 0.1421 12.71 -15.74
#> HAND Congruent Right - HAND Congruent Left 14.993 0.1098 9.82 -4.50
#> HAND Congruent Right - HAND Incongruent Right 9.650 0.1386 12.40 -15.24
#> HAND Congruent Left - HAND Incongruent Right -5.343 0.1638 14.65 -34.17
#> 25% 50% 75% 97.5%
#> FOOT Congruent Left - FOOT Incongruent Right -63.9241 -55.787 -47.360 -31.58
#> FOOT Congruent Left - FOOT Incongruent Left -35.9899 -31.014 -26.204 -16.30
#> FOOT Congruent Left - FOOT Congruent Right -37.0723 -31.665 -26.426 -16.11
#> FOOT Congruent Left - HAND Incongruent Left -44.8580 -37.615 -30.211 -16.39
#> FOOT Congruent Left - HAND Congruent Right -46.0101 -38.349 -30.568 -16.24
#> FOOT Congruent Left - HAND Congruent Left -28.0339 -23.394 -18.563 -9.66
#> FOOT Congruent Left - HAND Incongruent Right -38.6911 -28.664 -18.711 1.14
#> FOOT Incongruent Right - FOOT Incongruent Left 17.5790 24.645 31.519 44.59
#> FOOT Incongruent Right - FOOT Congruent Right 17.2559 24.042 30.818 43.61
#> FOOT Incongruent Right - HAND Incongruent Left 9.4486 18.157 26.940 43.78
#> FOOT Incongruent Right - HAND Congruent Right 8.8399 17.290 25.989 42.18
#> FOOT Incongruent Right - HAND Congruent Left 23.5293 32.449 41.404 58.39
#> FOOT Incongruent Right - HAND Incongruent Right 19.6395 26.985 34.367 48.31
#> FOOT Incongruent Left - FOOT Congruent Right -7.6506 -0.667 6.632 19.82
#> FOOT Incongruent Left - HAND Incongruent Left -12.6041 -6.503 -0.392 11.46
#> FOOT Incongruent Left - HAND Congruent Right -16.0208 -7.292 1.567 17.93
#> FOOT Incongruent Left - HAND Congruent Left 0.9388 7.690 14.353 27.79
#> FOOT Incongruent Left - HAND Incongruent Right -7.4789 2.417 12.019 30.17
#> FOOT Congruent Right - HAND Incongruent Left -14.3229 -5.890 2.430 19.09
#> FOOT Congruent Right - HAND Congruent Right -12.8272 -6.801 -0.482 11.05
#> FOOT Congruent Right - HAND Congruent Left 1.5541 8.235 15.047 28.04
#> FOOT Congruent Right - HAND Incongruent Right -6.3018 2.907 11.954 29.57
#> HAND Incongruent Left - HAND Congruent Right -9.5464 -0.849 7.930 25.77
#> HAND Incongruent Left - HAND Congruent Left 7.8342 14.279 20.514 32.83
#> HAND Incongruent Left - HAND Incongruent Right 0.0165 8.862 17.361 34.01
#> HAND Congruent Right - HAND Congruent Left 8.3990 15.024 21.597 34.15
#> HAND Congruent Right - HAND Incongruent Right 1.1700 9.807 17.990 33.81
#> HAND Congruent Left - HAND Incongruent Right -15.3840 -5.391 4.586 23.69
#> BF10
#> FOOT Congruent Left - FOOT Incongruent Right 3018.60
#> FOOT Congruent Left - FOOT Incongruent Left 621.74
#> FOOT Congruent Left - FOOT Congruent Right 737.96
#> FOOT Congruent Left - HAND Incongruent Left 298.90
#> FOOT Congruent Left - HAND Congruent Right 260.44
#> FOOT Congruent Left - HAND Congruent Left 174.55
#> FOOT Congruent Left - HAND Incongruent Right 9.35
#> FOOT Incongruent Right - FOOT Incongruent Left 17.16
#> FOOT Incongruent Right - FOOT Congruent Right 18.45
#> FOOT Incongruent Right - HAND Incongruent Left 3.66
#> FOOT Incongruent Right - HAND Congruent Right 3.25
#> FOOT Incongruent Right - HAND Congruent Left 25.24
#> FOOT Incongruent Right - HAND Incongruent Right 26.00
#> FOOT Incongruent Left - FOOT Congruent Right 1.07
#> FOOT Incongruent Left - HAND Incongruent Left 1.17
#> FOOT Incongruent Left - HAND Congruent Right 1.51
#> FOOT Incongruent Left - HAND Congruent Left 1.30
#> FOOT Incongruent Left - HAND Incongruent Right 1.50
#> FOOT Congruent Right - HAND Incongruent Left 1.41
#> FOOT Congruent Right - HAND Congruent Right 1.21
#> FOOT Congruent Right - HAND Congruent Left 1.42
#> FOOT Congruent Right - HAND Incongruent Right 1.39
#> HAND Incongruent Left - HAND Congruent Right 1.32
#> HAND Incongruent Left - HAND Congruent Left 2.95
#> HAND Incongruent Left - HAND Incongruent Right 1.61
#> HAND Congruent Right - HAND Congruent Left 3.24
#> HAND Congruent Right - HAND Incongruent Right 1.70
#> HAND Congruent Left - HAND Incongruent Right 1.56
plot( p7 , type = "hist")[[1]] +
theme_bw( base_size = 18)+
theme( strip.text.y = element_text( angle = 0 ) )
Here we can see that the effect was driven by the facilitation that the single case showed in the FOOT Congruent Left condition compared to the Control Group.
In this vignette we have seen how to use the bmscstan package and its functions to analyse and make sense of single-case data.
The output of the main functions is rich in information, and Bayesian inference can be performed taking into account the Savage-Dickey \(BF_{10}\) or the \(95\%\) Credible Interval (see Kruschke 2014 for further details).
This vignette contains almost no discussion of how to test the Single Case fixed effects (the third part of the main output); they were used only to better understand the differences between the single case and the control group.
However, if your hypotheses focus on the behaviour of the patient, and not only on the differences between the single case and the control group, it will be important to analyse that part in detail as well.
Carpenter, Bob, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. “Stan: A Probabilistic Programming Language.” Journal of Statistical Software 76 (1). https://doi.org/10.18637/jss.v076.i01.
Crawford, John R., and Paul H. Garthwaite. 2005. “Evaluation of criteria for classical dissociations in single-case studies by Monte Carlo simulation.” Neuropsychology 19 (5): 664–78. https://doi.org/10.1037/0894-4105.19.5.664.
Crawford, John R., Paul H. Garthwaite, and Sara Porter. 2010. “Point and Interval Estimates of Effect Sizes for the Case-Controls Design in Neuropsychology: Rationale, Methods, Implementations, and Proposed Reporting Standards.” Cognitive Neuropsychology 27 (3): 245–60. https://doi.org/10.1080/02643294.2010.513967.
Crawford, John R., and David C. Howell. 1998. “Comparing an Individual’s Test Score Against Norms Derived from Small Samples.” The Clinical Neuropsychologist 12 (4): 482–86. https://doi.org/10.1076/clin.12.4.482.7241.
Gelman, Andrew. 2013. “Understanding posterior p-values.” Electronic Journal of Statistics, 1–8. https://doi.org/10.1214/13-EJS854.
Gelman, Andrew, and Donald B Rubin. 1992. “Inference from Iterative Simulation Using Multiple Sequences.” Statistical Science 7 (4): 457–72.
Kruschke, John K. 2014. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. 2nd ed. Elsevier Inc. https://doi.org/10.1016/B978-0-12-405888-0.09999-2.
Ottoboni, Giovanni, Alessia Tessari, Roberto Cubelli, and Carlo Umiltà. 2005. “Is handedness recognition automatic? A study using a Simon-like paradigm.” Journal of Experimental Psychology: Human Perception and Performance 31 (4): 778–89. https://doi.org/10.1037/0096-1523.31.4.778.
Raftery, Adrian E. 1995. “Bayesian Model Selection in Social Research.” In Sociological Methodology, edited by P. V. Marsden, 25:111–63. Cambridge: Blackwell.
Scandola, Michele, and Daniele Romano. 2020. “Bayesian Multilevel Single Case Models Using ’Stan’. A New Tool to Study Single Cases in Neuropsychology.” PsyArXiv. https://doi.org/10.31234/osf.io/sajdq.
Tessari, Alessia, Giovanni Ottoboni, Giulia Baroni, Ed Symes, and Roberto Nicoletti. 2012. “Is access to the body structural description sensitive to a body part’s significance for action and cognition? A study of the sidedness effect using feet.” Experimental Brain Research 218 (4): 515–25. https://doi.org/10.1007/s00221-012-3045-4.