I’m in Tampa, Florida, for the annual Vision Sciences Society (VSS) meeting. I brought one of my pet projects with me. Performance discontinuities have been used, in an informal eyeballing-the-graphs way, to estimate working memory capacity. Some of this work is cited by Cowan (2000) as evidence for a “magical” capacity of four chunks in working memory.

I try to formalise the estimate of working memory capacity from performance discontinuities using a Bayesian analysis. In brief, I think that most of the current scores on serial recall tasks are either hard to interpret theoretically or very complex. Existing packages to estimate discontinuities (switch point analysis, change point, regression discontinuity, etc.) either estimate changes in offsets (not slopes) or do not provide a posterior distribution of the point of discontinuity.

I’m currently writing up a more detailed paper on this, accompanied by an R notebook and a small R package to do the analysis.

Here is our poster for a project in which we use Virtual Reality, head-tracking, and eye-tracking to assess hemispatial neglect. On paper and from our piloting, I think that this approach makes a whole lot of sense. It opens up promising prospects, including tele-rehabilitation. How well it works in the long run is an empirical question!

We currently use the HTC Vive with the Pupil Labs eye-tracker. I should mention that the author list here is an initial start-up group and that more collaborators will join as the project moves forward.

Update (Aug 7th, 2018): after reading this preprint by Liddell & Kruschke (2017), I am convinced that it would be even better to analyze Likert scales using ordered-probit models. This is still a parametric model, just with non-metric intervals between response category thresholds. What I write below still holds for the non-parametric vs. parametric discussion.

Whether to use parametric or non-parametric analyses for questionnaires is a very common question from students. It is also an excellent question, since there seem to be strong opinions on both sides, and that should make you search for deeper answers. The choice is between modeling your data using parametric statistics (means and linear relationships, e.g., ANOVA, t-test, Pearson correlation, regression) or non-parametric statistics (medians and ranks, e.g., Friedman, Mann-Whitney, Wilcoxon, Spearman).

Consider this 5-item response. What do you think better represents this respondent’s underlying attitude: the parametric mean (SD) or the non-parametric median?

Here, we will leave armchair dogma and textbook arguments aside and look to the extensive empirical literature for answers. I dug into a large number of papers to compose an answer for my students:

Be aware that the debate between the ordinalists (saying that you should use non-parametric statistics) and the intervalists (arguing for parametric statistics) is still ongoing, so any answer will be somewhat controversial. That said, I judge that, for common analyses, the intervalist position is much better justified. The literature is large, but most of the conclusions are well presented by Harpe (2015). In brief, I recommend the following:

You would often draw similar conclusions from parametric and non-parametric analyses, at least in the context of Likert scales. For presenting data and effect sizes, always take a descriptive look at your data and see what best represents it. As it turns out, (parametric) means are usually fine for Likert scales, i.e., the mean of multiple Likert items. But (non-parametric) counts are often the correct level of analysis for Likert items, though this can be further reduced to the median if you have enough effective response options (i.e., 7 or more points which your respondents actually use). Due to measurement inaccuracy, interpreting single Likert items is often unacceptably fragile, and no statistical tricks can undo that. So you should operationalize your hypotheses using scales rather than items, as indeed all standardized questionnaires do. This, in turn, means that your important statistical tests can be parametric. Because parametric inferences are much easier to interpret and allow for a wider range of analyses, using parametric statistics for Likert scales is not only an option but really a recommendation.

I would personally add that you should not dismiss the ordinalist-intervalist debate, since it involves exactly the kind of thinking we ought to do when we choose a statistical model, namely asking to what extent the numbers represent the mental phenomena we are investigating. Others (e.g., an external examiner or reviewer) may be ordinalists, so make sure (as always) to justify your choice using the empirical literature. This makes your conclusions accessible to the widest audience possible. I provide a short reading guide below to help you make those justifications.

Reading guide

I recommend that students and newcomers read the papers in the stated order for a gentle introduction. Readers more familiar with the topic can jump straight to Harpe (2015). I would say that Sullivan & Artino (2013) and Carifio & Perla (2008) get you 75% of the way and Harpe (2015) gets you 95% of the way. Norman (2010) is included for its impact on the debate and because it presents the arguments slightly more statistically, but content-wise it adds little over and above Harpe (2015).

Note that this is an extensive literature, including some papers leaning ordinalist. However, I have failed to find an ordinalist-leaning paper that does not commit one of two errors: (1) conflating Likert items and Likert scales without empirical justification for doing so, or (2) extrapolating from analyses of single items to analyses of scales, again without empirical justification that this is reasonable. If I learn about a paper that empirically demonstrates that parametric analyses of Likert scales are unforgivably inaccurate, I will not hesitate to include it. However, I feel that all major arguments are represented and addressed in this list.

(15 minutes) Sullivan, G. M., & Artino, A. R. (2013). Analyzing and Interpreting Data From Likert-Type Scales. Journal of Graduate Medical Education, 5(4), 541–542. https://doi.org/10.4300/JGME-5-4-18 A light read for novices which could serve as an introduction to Likert-scales understood statistically and the idea of using parametric analyses on Likert data. However, it is too superficial to constitute a justification for doing so.

(15 minutes) Carifio, J., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150–1152. https://doi.org/10.1111/j.1365-2923.2008.03172.x
A very concise list of arguments on the statistical side of the intervalist-ordinalist debate, heavily favoring the intervalist side for most situations. As a side note, this is a continuation and summary of Carifio & Perla (2007), but while the fundamental arguments of that paper are strong, it is so poorly written that I do not include it in this reading guide. Maybe this is why they needed this 2008 paper.

(60 minutes) Harpe, S. E. (2015). How to analyze Likert and other rating scale data. Currents in Pharmacy Teaching and Learning, 7(6), 836–850. https://doi.org/10.1016/j.cptl.2015.08.001
This paper covers the history, rating scale methodology, and an empirically based review of inferring metric parameters (like means) from ordinal data (like Likert items). Here too, the conclusion is that parametric analyses are appropriate for most situations. Most importantly, Harpe presents practical recommendations and a nuanced discussion of when it is appropriate to deviate from them. It also has one of the most extensive reference lists, pointing the reader to relevant sources of evidence. If you are short on time, you may skip straight to the heading “Statistical analysis issues” on page 839, studying Figure 1 on your way. Even though the paper reads very fluently, do attend to the details, because the phrasing is quite precise.

(40 minutes) Norman, G. (2010). Likert scales, levels of measurement and the “laws” of statistics. Advances in Health Sciences Education, 15(5), 625–632. https://doi.org/10.1007/s10459-010-9222-y This is the most cited paper on the topic, so I feel I need to comment on it here since you are likely to encounter it. Recommendation-wise, it adds little that Harpe (2015) did not cover. An advantage of the paper is that it takes you to the nuts and bolts of the consequences of going parametric instead of non-parametric, e.g., by presenting some simulations and actual analyses. The paper is fun to read because Norman is clearly angry, but unfortunately, it also reads largely as a one-sided argument, so retain a bit of skepticism. For example, Norman simulates correlations on approximately linearly related variables and concludes that Spearman and Pearson correlations yield similar results. While this approximates many real-world phenomena well, the two coefficients can differ by around 0.1 when the variables are related non-linearly (where Pearson is inaccurate) but still monotonically (where Spearman is accurate). That difference can change the label from “small” to “medium” according to Cohen’s (1992) criteria, which are (too) conventionally used.

Additional comments

Many “non-parametric” analyses are actually parametric. If a paper uses the mean Likert rating of multiple items, the analysis is largely parametric, no matter whether non-parametric tests are applied to that mean. This is because taking the mean embodies the parametric assumption that the response options are equidistant, e.g., that the mean of “strongly disagree” and “neutral” is “disagree.” Similarly, if a paper uses Cronbach’s alpha to assess reliability or unidimensionality, it is parametric, since alpha is a generalized Pearson correlation, i.e., it models a continuous linear relationship between Likert items. The vast majority of the academic literature does this, including every single standardized questionnaire. A practical consensus is not a convincing defense of going parametric on Likert data, but it does indicate that little is required to reach the level of current publication practice.
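To make the equidistance assumption concrete, here is a toy sketch (my own illustration, not from any of the cited papers) of what coding a 5-point item as 1-5 and averaging actually commits you to:

```python
# Toy illustration: averaging numeric Likert codes assumes the response
# options are equidistant.
options = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
codes = {label: i + 1 for i, label in enumerate(options)}

# The mean of "strongly disagree" (1) and "neutral" (3) is 2, i.e. "disagree":
# an interval-scale assumption that the ordinal ranks alone do not justify.
mean_code = (codes["strongly disagree"] + codes["neutral"]) / 2
print(options[int(mean_code) - 1])  # disagree
```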

Predictions of single responses must be ordinal. Predicted responses should only take values among the actual response options, e.g., not 2.5 or 6 on a 5-point Likert scale. For scales, or for predictions across subjects (i.e., the mean of items), the parametric estimate will often be good enough. I have not found literature that tries to predict responses to individual items by individual subjects, but if you were to do so, you would have to transform the inferred parametric estimates back into predicted discrete ordinal responses (e.g., via a probit transformation).
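As a crude stand-in for the probit back-transformation mentioned above, one could simply round and clip a continuous prediction to the nearest valid option; the function below is a hypothetical sketch of my own, not a method from the literature:

```python
def to_response_option(prediction, n_options=5):
    """Map a continuous model prediction to the nearest valid option (1..n_options)."""
    return min(max(round(prediction), 1), n_options)

print(to_response_option(6.2))   # 5: clipped into the response range
print(to_response_option(-1.0))  # 1
print(to_response_option(3.4))   # 3
```

An ordered-probit model does this more principledly by estimating thresholds between the response categories.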

Multilevel models are superior. Always beware when “manually” computing differences or means, analyzing subsets of data, etc., since you usually throw away valuable information. The same applies when you collapse Likert items into a mean. It is self-evident that the mean of 100 items approximates the true underlying attitude of your respondent much better than the mean of 4 items, yet a Mann-Whitney U or similar analysis would not “know” this difference in certainty. Multilevel models represent the data much better, treating the responses to particular items as samples from a more general attitude of the respondent (with a mean and a standard deviation) rather than as pure measures. However, I have not presented or discussed multilevel solutions above, since the learning curve can be steep and the classical scales-as-means approach is accurate enough for most purposes.

Hallelujah and Eureka!! I think that the terms “Population-level” and “Varying” may help solve (some of) the long-standing confusion about the difference between “fixed” and “random” effects.

TL;DR: To shrink or not to shrink – that is the question

The mathematical distinction is that Varying (“random”) parameters have an associated variance while Population-level (“fixed”) parameters do not. Population-level effects model a single mean in the population. Varying effects model a mean and a variance term in the population, i.e., two parameters rather than one. The major practical implication is that Varying parameters undergo shrinkage, in a regression-towards-the-mean-like way, whereas Population-level parameters do not.

The figure below shows this shrinkage in action for the following three models:

fit_mean = lm(WMI ~ session, D)  # No subject-intercept (black lines)
fit_fixed = lm(WMI ~ session + id, D)  # Population-level subject-intercept (blue lines)
fit_varying = lmer(WMI ~ session + (1|id), D)  # Varying subject-intercept (red lines)

The figure shows five illustrative participants (panels), each tested four times. A model with a varying subject-intercept (red lines) shrinks subjects closer to the group mean (black line) than the model with a population-level subject-intercept (blue lines) does. The green lines are the actual scores. Furthermore, the shrinkage is stronger the further the data are from the overall mean. See the accompanying R notebook for all details and all participants.

So shrinkage is the only practical difference. This is true for both frequentist and Bayesian inference. Understanding when to model real-world phenomena using shrinkage, however, is not self-evident. So let me try to unpack why I think that the terms “Population-Level” and “Varying” convey this understanding pretty well.
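The mechanics can be sketched with the precision-weighting formula that underlies partial pooling; the numbers below are made up for illustration and are not from the notebook:

```python
def shrunk_estimate(subject_mean, group_mean, var_between, var_within, n_obs):
    """Precision-weighted compromise between a subject's own mean and the group mean."""
    weight = var_between / (var_between + var_within / n_obs)  # trust in the subject's data
    return group_mean + weight * (subject_mean - group_mean)

# An extreme subject (mean 30 vs. group mean 10) is shrunk strongly when
# observations are few and noisy, and barely at all when data are plentiful.
few = shrunk_estimate(30, 10, var_between=4, var_within=100, n_obs=4)
many = shrunk_estimate(30, 10, var_between=4, var_within=100, n_obs=400)
print(round(few, 1), round(many, 1))  # 12.8 28.8
```

Note how the shrinkage weight depends on the ratio of between-subject to within-subject variance: exactly the variance term that Population-level parameters lack.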

Population-level parameters

General definition:

Values of Population-level parameters are modeled as identical for all units.

Example:

Everybody in the population of individuals who could ever be treated with X would get an underlying improvement of exactly 4.2 points more than had they been in the control group. Mark my words: Every. Single. Individual! The fact that observed changes deviate from this true underlying effect is due to other sources of noise not accounted for by the model.

The example above could be a 2×2 RM-ANOVA model of RCT data (outcome ~ treatment * time + (1|id)) with the treatment-specific improvement (the treatment:time interaction) as the population-level parameter of interest. Populations could be all people in the history of the universe, all stocks in the history of the German stock market, etc. Again, the estimated parameters are modeled as if they were exactly the same for everyone. As seen from the model, everybody is identical, and the residual is simply an error term (not part of the model itself) indicating how far this view of the world deviates from the observations.

I think that modeling anything as a Population-level parameter is an incredibly bold generalization of the sort that we hope to discover using the scientific method: the simplest model with the greatest predictive accuracy.

Now, it’s easy to see why this would be called “fixed” once you have a good understanding of what it is, but as a newcomer, the term “fixed” may lead you astray into thinking that (1) it is not estimated, (2) it is fixed to something, or (3) it is semantically self-contradictory to call a variable fixed! Andrew Gelman calls them non-varying parameters, and I think this term suffers a bit from the same possible confusions. Population-level goes a long way here. The only ambiguity left is whether parameters that apply to the population also apply to individuals, but I can’t think of a better term. “Universal”, “Global”, or “Omnipresent” are close competitors, but they seem to generalize beyond a specific population, so let’s stick with Population-level.

Varying parameters

General definition:

Values of Varying parameters are modeled as drawn from a distribution.

Example for (1|id):

Patient-specific baseline scores vary with SD = 14.7.

Example for (0 + treatment:time|id):

The patient-specific responses to the treatment effect vary with SD = 3.2 points.

This requires a bit of background, so bear with me: Most statistics assume that the residuals are independent. Independence is a fancy way of saying that knowing any one residual would not let you guess above chance about any other residual. Thus, the independence assumption is violated if you have multiple measurements from the same unit, e.g., multiple outcomes from each participant, since knowing one residual from an extraordinarily well-performing participant would lead you to predict, above chance, that the other residuals from that participant are also positive.
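A rough simulation (my own toy example, with made-up numbers) makes the violation concrete:

```python
import random
random.seed(1)

# 50 subjects measured twice each; subjects differ much more (SD = 5)
# than repeated measurements within a subject (SD = 1).
subject_offsets = [random.gauss(0, 5) for _ in range(50)]
data = [offset + random.gauss(0, 1) for offset in subject_offsets for _ in range(2)]

# Residuals from a model containing only a grand mean:
grand_mean = sum(data) / len(data)
residuals = [y - grand_mean for y in data]

# Within-subject residual pairs co-vary: their product is positive on average,
# so knowing one residual predicts the other, violating independence.
pairs = [(residuals[2 * i], residuals[2 * i + 1]) for i in range(50)]
mean_product = sum(a * b for a, b in pairs) / len(pairs)
print(mean_product > 0)  # True
```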

You could attempt to solve this by modeling a Population-level intercept for each participant (outcome ~ treatment * time + id), effectively subtracting that source of dependence from the model’s overall residual. However, which of these participant-specific means would you apply to an out-of-sample participant? Answer: none of them; you are stuck (or fixed?). Varying parameters to the rescue! Dropping the ambition that all units (people) exhibit the same effect, you instead estimate a recipe for generating those participant-specific intercepts, which rids the residuals of their dependence (or more precisely: models it as a covariance matrix). This is a generative model in the form of the parameter(s) of a distribution; in a GLM this would be the standard deviation of a normal distribution with mean zero.

One way to represent this clustering of variation within units is a hierarchical model, where outcomes are sampled from individuals, which are themselves sampled from the nested Population-level parameter structure.

For this reason, I think that we could also call Varying parameters sampled parameters. This is true whether those sampled parameters are intercepts, slopes, or interaction terms. Crossed Varying effects are just parameters sampled from the joint distribution of two non-nested populations (e.g., subject and questionnaire item). Simple as that!
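In code, the generative reading is a simple two-stage sampler; the numbers below are hypothetical, and this mimics only the spirit of a varying intercept such as (1|id), not any specific package:

```python
import random
random.seed(42)

population_mean = 100  # population-level intercept: the same for everyone
between_sd = 15        # the varying part: one SD describes all subjects
within_sd = 5          # residual noise

def simulate_subject(n_trials):
    """Draw a new subject's intercept from the population, then its outcomes."""
    intercept = random.gauss(population_mean, between_sd)  # sampled, not fixed
    return [intercept + random.gauss(0, within_sd) for _ in range(n_trials)]

# Because intercepts are draws from a distribution, we can generate
# out-of-sample subjects, which per-subject population-level intercepts cannot do.
new_subject = simulate_subject(4)
print(len(new_subject))  # 4
```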

Again, it’s easy to see why one would call this a “random” effect. However, as with “fixed” effects, it is easy to confuse this with (1) the random residual term of the whole model, or (2) the totally unrelated difference between frequentist and Bayesian inference as to whether data or parameters are fixed or random. Varying seems to capture what it’s all about: units can vary in a way that we can model. With variation comes regression towards the mean, so shrinkage follows naturally.

Two derived properties of Varying

Firstly, it models regression towards the mean for the varying parameters: extreme units are shrunk towards the mean of the varying parameter, since such extreme values are unlikely to reflect the true underlying value. For example, if you observe an exceptionally large treatment effect for a particular participant, he or she is likely to have experienced a smaller underlying improvement which unaccounted-for factors exaggerated by chance. Similarly, an exceptionally small observed treatment effect is likely to reflect a larger underlying effect masked by chance noise.

Secondly, it requires enough levels (“samples”) of the Varying factor to estimate its variance. You just can’t make a meaningful estimate of variance using two or three levels (e.g., ethnicity). Similarly, sex would make no sense as Varying since there are basically just two levels. Participant, institution, or vendor are good candidates in analyses where there are many of them. For frequentist models like lme4::lmer, a rule of thumb is more than 5-6 levels. For Bayesian models, you could in principle have even one level (because Bayes rocks!), but the influence of the prior and the width of the posterior would be (unforgivably?) large.

Some potential misunderstandings

I hope I have conveyed the impression that the distinction between Population-level and Varying modeling is actually quite simple. However, the fixed/random confusion has caused people to exaggerate their differences for illustrative purposes, giving the impression that they have distinct “magical properties”. I think they are more similar:

Both random and fixed effects can de-correlate residuals: It is sometimes said that you model effects as “fixed” when they are of theoretical interest and as “random” to account for correlated residuals, thus respecting the independence assumption (e.g., repeated measures). However, both “fixed” and “random” effects de-correlate residuals with respect to that effect. No magic!

Both random and fixed effects can account for nuisance effects: It is often said that random effects are for nuisance effects while fixed effects model effects of theoretical importance. However, as with the point above about de-correlating residuals, both can do this. Say you want to model some time effect of repeated testing (e.g., due to practice or fatigue) to remove this potential systematic disturbance of your theory-relevant parameters. You could model time as a fixed effect and simply ignore its estimate, or you could model it as random. The decision should not be theory vs. nuisance, but whether the effect of time is modeled as identical for everyone or as Varying between units. No magic!

Both Varying and Population-level parameters model the population. The term “population-level” effect may sound as if it is the only parameter that says something about the population, or that it is easier to generalize. In fact, I highlighted above that for “sampled” parameters, it is easier to see how Varying effects generalize. Is this self-contradictory? No. Population-level effects postulate a single mean in the population. Varying effects postulate a mean and a variance term in the population.

Helpful sources

A few sources that helped me arrive at this understanding were:

Writing mixed models in BUGS helped me demystify most of linear models, including fixed/random effects and what interactions are. I started in JAGS. Here’s a nice example.

This answer on Cross Validated, which made me realize that shrinkage is the only practical difference between modeling parameters as “fixed” or “random.”

The explanation in the FAQ of the R mailing list on GLMMs, primarily written by the developers of the lme4 package.

Hodges & Clayton (2011) make a distinction between “old-style” random effects (draws from a population), which I have discussed here, and “new-style” random effects, where the variance term is used more for mathematical convenience, e.g., when there is no population, when the observed units exhaust the population, or when no new draws are possible. My impression is that “new-style” random effects are seldom used in psychology and human clinical trials.

TO DO

IMPORTANT: Rename terms? Varying is just two population-level parameters instead of one. How about: “

Mention that population-level has superior fit to data.

Fix plot annotations

Varying as uncertainty around fixed and residual as errors from model!

My first goal is to present solutions to things I found difficult in the respective packages and which are relatively undocumented. A second goal is to show a side-by-side comparison of whether the packages converge on the same Bayes factor estimates.

I hope to keep the document updated. In particular, I’m keeping an eye on the development of BAS, and I need to learn how to specify a full JZS prior in brms.

Personally, this involved a lot of firsts for me and took way too much time: my first R notebook (including Markdown), my first tutorial in many years, and my first use of brms and BAS.

I’m currently writing a paper on a new Bayesian method for scoring Complex Span tasks. I needed software to represent it using the plate notation of a directed acyclic graph (DAG). Many people have pointed to the TikZ/PGF drawing library for LaTeX, but I did not want to install LaTeX for this simple task. Here I briefly review yEd, Daft, and LibreOffice Draw.

yEd

I ended up using yEd and produced this graph, which contains plates, estimated nodes, deterministic nodes (double-lined), observed nodes (shaded), and categorical nodes (rectangles). yEd is a purely graphical editor, which is fast to work with and great for tweaking small details. A very handy yEd feature is its intelligent snapping to alignments and equal distances when positioning objects. Actually, I don’t understand why yEd almost never makes it onto the “top 10 diagram/flowchart programs” lists.

A few things I learned: To make subscripts, you have to use HTML code. For example, the \(R_{trial_{i,j}}\) node is

<html>R<sub>triali,j</sub></html>

However, it is not possible to do double subscripts. Also, the double-edged node is made completely manually by placing an empty ellipse on top of another. I did not manage to align the \(WMC_i\) label a bit lower in the node. A final limitation is that arrowhead sizes cannot be changed. You can, however, zoom. Therefore, your very first decision has to be the arrowhead size: zoom so that it is appropriate, and then make your graphical model. I didn’t think about this, so the arrows are too large for my taste in the graph above.

I’m pretty pleased with the result. For the final paper I may try and redo this in Libreoffice Draw to see if I can fix the final details.

Libreoffice Draw

In retrospect, I think that Draw could have done better than yEd. First, you can scale arrowheads to your liking! Furthermore, you can write math in LibreOffice Math, so double-subscripting is no problem. However, you have to group a math object with an ellipse rather than entering it as “content”, which is a bit convoluted. Speaking of math, LibreOffice Math was great for entering the model specifications for the graphical model:

One small annoyance is that you have to choose between left-aligning everything, including the denominators in fractions (which of course should be centered), or center-aligning everything. I would have liked center-aligned denominators while left-aligning everything else.

The above was created using the following code. The matrix was used to align the terms.

I have to say that there’s something about the lack of snap-to-alignment and the general interface in Draw that makes it feel less pleasant than yEd, even though it is probably more versatile for the present purpose. I may update this blog post with a Draw model when I get around to doing it.

Daft

Daft is a Python module for rendering graphical models. The syntax is quite nice, but I quickly learned that you have to choose between shaded or double-edged nodes as indicators of observed variables; you cannot have both. You can draw a smaller empty node on top to make it double-edged, but using the scale argument makes it misaligned with the outer line. I raised this as a GitHub issue, but Daft has not been maintained for years, so I don’t expect it to be fixed. Also, you have to install an old fork to draw rectangular nodes. This is as far as I got:

Here’s the code to do it, very much inspired (/ripped off) by this example.

At a family dinner, my brother told me that he had stumbled upon a curious number. Divide five by forty-nine and take a look at the digits:

$$5 / 49 = 0.10204081632653061…$$

Do you see a pattern in the digits? Yes! It’s powers of two: (0.)1, 02, 04, 08, 16, 32, 65, … Huh, 65? Yes! The next number, 128, is a three-digit number and so the first digit “overlaps” the last digit of 64, making it 65. And this continues for the first digits of 256 (28 + 2 = 30), 512, the first two digits of 1024, etc. Then floating point errors kicked in, and we stopped there.
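The overlapping carries can be made exact: the decimal expansion is a geometric series with first term 1/10 and ratio 2/100, and such a series sums to 5/49. A quick check with exact fractions (my own verification, standard library only):

```python
from fractions import Fraction

# Each power of two contributes 2**i, shifted two decimal places further
# right than the previous one: sum over i of (1/10) * (2/100)**i.
series = Fraction(1, 10) / (1 - Fraction(2, 100))
print(series)  # 5/49
```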

I immediately googled it, but no one seems to have noticed this pattern in the digits before. It began to dawn on us that we may just be looking at a hitherto undiscovered “interesting number” such as pi, e, the golden ratio, etc. But the social norms of family meetings do not allow for simulations and mathematical derivations, so we put it aside.

Simulations

The number kept lingering in the back of my head. A few weeks later, I had several pressing deadlines, and as usual, procrastination kicked in full force. I needed to see how far this sequence continued. I fired up Python and wrote a small script which generates the power sequence first, then calculates the number 5/49, and finally identifies at what decimal place the two diverge:

# Power of two to evaluate up to
max_power = 20

# Generate the perfect 102040... sequence in integers instead of decimals
# to avoid floating point errors
power_decimals = 0
for i in range(max_power + 1):
    trailing_zeroes = 10**(max_power*2 - i*2)
    power_decimals += trailing_zeroes * 2**i
    print('0.%41d' % (trailing_zeroes * 2**i), '2^%i' % i)
power_number = '0.%i' % power_decimals  # Add "0."
print(power_number, 'Power')

# Print 5/49 without floating point errors
from decimal import Decimal, getcontext
getcontext().prec = max_power*2 + 1
fraction = str(Decimal(5) / Decimal(49))
print(fraction, 'Fraction')

# Compare them digit for digit: when do they diverge?
for i in range(len(fraction)):
    if fraction[i] != power_number[i]:
        print('Identical until decimal %i' % (i - 2))  # Don't count "0." as decimals
        break

The print call in the first for-loop visualizes the addition of the power sequence:

0.10000000000000000000000000000000000000000 2^0
0.  200000000000000000000000000000000000000 2^1
0.    4000000000000000000000000000000000000 2^2
0.      80000000000000000000000000000000000 2^3
0.       1600000000000000000000000000000000 2^4
0.         32000000000000000000000000000000 2^5
0.           640000000000000000000000000000 2^6
0.            12800000000000000000000000000 2^7
0.              256000000000000000000000000 2^8
0.                5120000000000000000000000 2^9
0.                 102400000000000000000000 2^10
0.                   2048000000000000000000 2^11
0.                     40960000000000000000 2^12
0.                       819200000000000000 2^13
0.                        16384000000000000 2^14
0.                          327680000000000 2^15
0.                            6553600000000 2^16
0.                             131072000000 2^17
0.                               2621440000 2^18
0.                                 52428800 2^19
0.                                  1048576 2^20

and the other print calls compare the “ideal” number to the fraction:

Ok, wow, so it goes on for quite a while. Setting max_power = 10000 and…

Identical until decimal 16992

Now I’m convinced that this is an infinite pattern. But it gets even better: the sequence starts over every 42 digits. I repeat: Every. 42!!! Digits. Here are the first 168 decimals:


5/49 = 0.

102040816326530612244897959183673469387755

102040816326530612244897959183673469387755

102040816326530612244897959183673469387755

102040816326530612244897959183673469387755...

Is the key to the universe hidden in there somewhere!!?? I was exploding with excitement when I ran and presented this to my wife late in the evening.

The math

My wife is a mathematician. She was not impressed. “Jonas, do you know how to divide five by forty-nine by hand?” No, I must admit that I never properly learned long division (even though I later enjoyed long polynomial division). And sure enough: the remainder of the first step is 50 - 49 = 1. The remainder of the second step is two decimal places further to the right, where the 1 becomes 100, and the calculation becomes 100 - 98 = 2. Then two places further right and doubling again: 200 - 196 = 4. Then 400 - 392 = 8, etc.
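The long-division recipe is easy to turn into code (a small helper of my own, not from the post): each digit comes from multiplying the remainder by ten and dividing by 49, and the remainders double every two steps, exactly as described above.

```python
def decimal_digits(numerator, denominator, n_digits):
    """Generate decimal digits of numerator/denominator by long division."""
    remainder = numerator % denominator
    digits = []
    for _ in range(n_digits):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

print(''.join(str(d) for d in decimal_digits(5, 49, 42)))
# 102040816326530612244897959183673469387755
```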

My wife went to bed. For a few seconds, I felt bad. My intellectual love affair crashed hard upon realizing that 5/49 was in many respects just a normal number. But as with human relationships, so with numbers: I’ve learned a great deal about rational numbers through this journey and that’s a comfort.

Conclusion

I do still think that 5/49 stands out. A 42-decimal period containing the powers of two is an intriguing property shared only by x/49 fractions. Furthermore, 5/49 is the only one of these fractions to start at 2^0 = 1, the true beginning of the power sequence.

The long-division property can be exploited to generate numbers with custom power-sequences in the decimals. So without further ado, I give you:

$$10/(10^n-x)$$

… where x ∈ R_{≥0} is the base of the powers appearing in the decimals and n ∈ N_{≥2} is the number of decimal places between successive powers. Let’s try this for a few numbers:


10/(10^4 - 3) = 0.00100030009002700...   # Powers of three with four spaces
10/(10^3 - 2) = 0.01002004008016032...   # Powers of two with three spaces
10/(10^2 - 4) = 0.10416666666666667...   # Powers of four rise too quickly for two spaces
10/(10^3 - pi) = 0.0100315149336198...   # Powers of pi, just for fun

For two decimal places between powers of two, we get

$$10/(10^2 - 2) = 10/98 = 5/49$$
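The general formula can be sanity-checked with a few lines of Python. This is an illustrative sketch for integer x only (`power_decimals` is a name I made up), using the same long-division recipe as above:

```python
def power_decimals(x, n, n_digits):
    """Decimal expansion of 10 / (10**n - x) by integer long division.

    Illustrative sketch: assumes integer x with 0 <= x < 10**n, so the
    exact integer arithmetic reproduces the digits without rounding.
    """
    denominator = 10**n - x
    remainder = 10 % denominator
    out = []
    for _ in range(n_digits):
        remainder *= 10              # shift one decimal place right
        out.append(str(remainder // denominator))
        remainder %= denominator
    return "0." + "".join(out)

print(power_decimals(2, 2, 42))  # 5/49: one full 42-digit period
print(power_decimals(3, 4, 17))  # powers of three, four places apart
```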

I hope that this number will henceforth be known as Lindeløv’s number. At least it’s on Google now.

I have been overwhelmed with requests following our article about the effect of hypnotic suggestion on working memory performance following acquired brain injury. Below, I present frequent questions with very short answers. All my answers stem from the same point: there is currently insufficient evidence to establish whether or not hypnosis should be used to treat cognitive problems following acquired brain injury. I expect that we will be able to give a recommendation (for or against) regarding hypnosis following acquired brain injury in 2020.

Frequent questions from patients and relatives:

Question: I have a brain injury. Where can I get hypnosis?
Answer: I won’t give any recommendations. I am a researcher, and there is not yet a firm research basis for such a recommendation. If you choose to find a hypnotist, make sure to avoid hypnotists who offer regression to past lives, healing of auras, contact with dead relatives, etc. They have demonstrated an inability to learn from science.

Question: Can I be a test subject in an experiment? Answer: Unfortunately, not by asking me. If we had experiments with open recruitment, we would recruit through other sources to avoid the strong selection bias.

Question: My daughter is diagnosed with ADHD/Schizophrenia/chronic pain/other – can hypnosis help her? Answer: The results from our experiment should not be uncritically transferred to other conditions. I do not know enough about the effectiveness of hypnosis for the condition you mentioned here.

Frequent questions from rehabilitation professionals and hypnotists:

Question: Can you send me the manuscript?
Answer: No, the manuscripts will be made freely available if (and only if) there is sufficient evidence to decide whether hypnosis can be recommended as a generalized treatment. We expect to have that evidence in 2020. Here is a list of all public information about our experiment, which is more than is available for most scientific articles. This includes some excerpts from the manuscript in the supplementary materials.

Question: Can you help us get started using this intervention for our clients?
Answer: I’m eager to work with you to set up a research project if you are an institution working professionally with brain-injured patients. That means that you can offer the treatment but you will have less control over to whom and how the treatment is administered while the research is ongoing. These projects are needed to accumulate the evidence to decide if hypnosis can be recommended as a treatment for cognitive problems following acquired brain injury.

Background to the answers

Although our study is very convincing in and of itself, it is just one study, and one should remember to factor in prior skepticism, which should be quite high in this case. The results are surprising exactly because they seem unlikely given prior evidence. One should also consider that results from scientific studies on humans and animals often fail to replicate. I know of several widely used “treatments” in neurorehabilitation that were introduced because of positive early studies and still linger on even though the collective evidence points to small or no effects. If our results fail to replicate, it is better that hypnosis was never brought into use in neurorehabilitation than that everybody spends their precious time on something ineffective.

I am personally optimistic because I have more (less scientific) sources of evidence than what has been published, but I am also a strong believer in science as the right basis for clinical recommendations. With the ongoing and planned studies, I expect that we will have sufficient evidence to make a recommendation (whether for or against) in 2020. Until then, we keep the final details about the intervention within research projects.

I may come across as dismissive. In fact, I am both personally moved by individual stories and intellectually troubled by the sheer scale of the problem worldwide. Still, I believe that the strategy above will, on average, give the best outcome. One potential advantage is that it increases the probability that the treatment, if effective, will be implemented in standard care rather than on the all-too-grey market of private hypnotists.


I was invited to do a fun task by my office colleague, Hazel Anderson. She researches synesthesia, and she wanted to induce grapheme-color synesthesia by having participants learn pi using digit-color mapping as one available strategy. So she needed something that could generate a Word document containing pi to an arbitrary number of decimal places. Approximately 40 minutes of the pure joy of structured procrastination later:

Here’s the Python script to generate this beauty:

# Settings here
DIGITS = 5000               # Number of decimal places to print
BLOCK_SIZE = 4              # Number of digits between spaces
BLOCK_SPACING = 4           # Size of space between blocks
GRAPHEME_COLOR_MAPPING = {  # Type in your digit-color mapping
    '0': (128, 128, 255),
    '1': (255, 0, 255),
    '2': (128, 0, 255),
    '3': (0, 128, 255),
    '4': (0, 255, 255),
    '5': (255, 255, 0),
    '6': (100, 255, 0),
    '7': (0, 0, 0),
    '8': (70, 255, 70),
    '9': (255, 128, 128)
}

# Set up document
from docx import Document
from docx.shared import RGBColor

document = Document()
paragraph = document.add_paragraph('')  # Start with an empty text

# Create Pi
from mpmath import mp
mp.dps = DIGITS          # Set number of decimals
pi = str(mp.pi)[2:]      # Get all decimals as (iterable) string