I agree with Knut's point, also made in the recorded discussion, that “verifying” assumptions is not a reasonable goal, or even a possible one. Instead, we just want the model to be reasonably accurate, not literally true. To meet this goal, I advocate performing “due diligence” assessments of model assumptions, and I have argued that large P-values are of some value in such situations (in contrast to their usual worthlessness). As long as there is no particular a priori suspicion about an important departure from the assumptions, it may be enough to show that some reasonable precaution was taken and no alarming evidence was found.
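As a minimal sketch of such a due-diligence check (the simulated data, variable names, and the choice of the Shapiro-Wilk test are illustrative assumptions, not a prescription):

```python
# Due-diligence check of the Gaussian-residual assumption
# (illustrative sketch; the simulated data are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)                 # hypothetical predictor
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=200)   # hypothetical outcome

# Fit a simple least-squares line and compute residuals.
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (intercept + slope * x)

# Shapiro-Wilk test for departure from normality. A large P-value does not
# prove the residuals are Gaussian; it only shows that a reasonable
# precaution was taken and no alarming evidence was found.
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk P-value: {p_value:.3f}")
```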
Knut raises the important issue of how best to measure central tendency (or, more generally, the effects of predictors on the outcome). I agree that this should usually be decided a priori on conceptual grounds, rather than by empirically checking which choice better approximates the statistical modeling assumptions. This is a key issue when deciding whether to logarithmically transform an outcome variable for modeling. If modeling the outcome on the most meaningful scale violates statistical assumptions (such as Gaussian residuals), then I advocate using bootstrapping or other advanced methods to obtain valid confidence intervals; this seems preferable to modeling the wrong thing for statistical convenience. A common example is when cost is the outcome. Costs are often skewed, with a small number of patients incurring very high costs. Nevertheless, the handful of high-cost patients really are very important and should not be downweighted by a logarithmic transformation or by nonparametric methods. The raw arithmetic mean cost is what matters for policy or for a hospital's bottom line; the geometric mean and median are usually not relevant.
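As a minimal sketch of this bootstrap approach, assuming hypothetical lognormal costs as a stand-in for real skewed cost data:

```python
# Percentile-bootstrap CI for the arithmetic mean of skewed costs
# (illustrative sketch; the lognormal data are hypothetical).
import numpy as np

rng = np.random.default_rng(1)
costs = rng.lognormal(mean=8.0, sigma=1.5, size=500)  # hypothetical skewed costs

n_boot = 10_000
boot_means = np.empty(n_boot)
for i in range(n_boot):
    resample = rng.choice(costs, size=costs.size, replace=True)
    boot_means[i] = resample.mean()

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"Arithmetic mean: {costs.mean():,.0f}  95% bootstrap CI: ({lo:,.0f}, {hi:,.0f})")

# For comparison: the geometric mean (what a log-transformed model estimates)
# is much smaller and is usually not what matters for a budget.
print(f"Geometric mean:  {np.exp(np.log(costs).mean()):,.0f}")
```

Note that the geometric mean printed last is what a model of log(cost) implicitly targets; the bootstrap lets us keep the arithmetic mean, the quantity that actually drives a budget, while still getting a valid interval.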
[Excerpted from a different thread.]
Re 3.5: the focus on the "assumption [of a] Gaussian distribution" may also be misleading. First, least-squares methods are highly robust against deviations of the empirical distribution of the residuals from the Gaussian distribution (Scheffé 1959). Second, the lack of a "significant" result in a test for deviation does not prove the null hypothesis (of a Gaussian distribution). Hence, requiring "empirical verification" of assumptions could create the very problems it is supposed to address. Finally, the focus on the Gaussian distribution may result in other assumptions being overlooked, such as the adequacy of the measure of central tendency being used (arithmetic mean, geometric mean, median, ...). Which measure of central tendency to choose can rarely be decided (or verified) from the data; instead, knowledge of the subject matter needs to be applied. In particular, an approximate answer to the correct question may be better than an exact answer to the wrong question.
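A small simulation sketch of the robustness point, with hypothetical parameter values and skewed (exponential) errors standing in for real non-Gaussian residual behavior:

```python
# Simulation sketch: least-squares slope estimates under skewed, non-Gaussian
# errors (illustrative only; all parameter values are hypothetical).
import numpy as np

rng = np.random.default_rng(3)
true_slope, n, n_sims = 0.5, 100, 2000
x = rng.uniform(0, 10, size=n)

estimates = np.empty(n_sims)
for i in range(n_sims):
    errors = rng.exponential(scale=1.0, size=n) - 1.0  # skewed, mean-zero errors
    y = 2.0 + true_slope * x + errors
    estimates[i] = np.polyfit(x, y, deg=1)[0]

# Despite clearly non-Gaussian residuals, the estimates center on the truth.
print(f"True slope: {true_slope}  mean estimate: {estimates.mean():.3f}"
      f"  SD: {estimates.std():.3f}")
```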
Title_Discussion_Topic: Verification of Assumptions
Name_Topic_Initiator: Jonathan Shuster
Online_Journal_Club_Meeting: Meeting 1
Description: Problem to be explored

Disclaimer  The views expressed within CTSpedia are those of the author and must not be taken to represent policy or guidance on the behalf of any organization or institution with which the author is affiliated. 