Video of my recent invited State-of-the-Art lecture at the European Association for Methodology (EAM) conference in Jena, Germany. (Slides below)
Somewhere towards the beginning there is a terrible audio mishap that sends audience members’ hands flying towards their ears in a hopeless effort to protect their eardrums. After this traumatic event the audience powered through the rest of the talk, and now, with this video, so can you!
Online latent class tutorial in R
I have created an online tutorial (still in development) on latent class analysis and the EM algorithm in R. The examples are discussed in the slides below. Please find the tutorial here:
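To give a flavor of what the tutorial covers, here is a minimal sketch of the EM algorithm for a two-class latent class model with binary items. All data are simulated and all names and values are illustrative; they are not taken from the tutorial itself.

```r
set.seed(1)
# Simulate binary responses from a hypothetical 2-class model
N <- 500; J <- 4
true_class <- rbinom(N, 1, 0.4) + 1
rho_true <- rbind(c(.9, .8, .9, .8),   # class 1: high endorsement probabilities
                  c(.2, .3, .2, .3))   # class 2: low endorsement probabilities
Y <- matrix(rbinom(N * J, 1, rho_true[true_class, ]), N, J)

lca_em <- function(Y, K = 2, n_iter = 200) {
  N <- nrow(Y); J <- ncol(Y)
  pi_k <- rep(1 / K, K)                       # class proportions
  rho  <- matrix(runif(K * J, .25, .75), K, J) # item response probabilities
  for (it in 1:n_iter) {
    rho <- pmin(pmax(rho, 1e-6), 1 - 1e-6)    # keep log() finite
    # E-step: posterior class membership probabilities
    loglik_ik <- sapply(1:K, function(k)
      rowSums(Y * log(rho[k, ])[col(Y)] +
              (1 - Y) * log(1 - rho[k, ])[col(Y)]) + log(pi_k[k]))
    post <- exp(loglik_ik - apply(loglik_ik, 1, max))
    post <- post / rowSums(post)
    # M-step: update class sizes and conditional response probabilities
    pi_k <- colMeans(post)
    rho  <- t(post) %*% Y / colSums(post)
  }
  list(class_sizes = pi_k, item_probs = rho, posterior = post)
}

fit <- lca_em(Y, K = 2)
round(fit$class_sizes, 2)
```

Note that the class labels are arbitrary (the familiar label-switching issue), so the estimated classes may come out in either order.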
A small selection of presentations I’ve given relatively recently.
Interview for the International Program in Survey and Data Science
Interview by Florian Keusch of the International Program in Survey and Data Science (IPSDS), in which I talk about my work on the European Social Survey and the Survey Quality Predictor.
Measurement error in administrative registers
Talk for methods@Manchester reporting on the ideas and progress of my Veni project.
Talk about lavaan.survey at the Odum Institute, University of North Carolina (2013)
Talk about Survey Quality Predictor 2 at the National Center for Health Statistics (2011)
Invited talk at the National Center for Health Statistics (NCHS), Centers for Disease Control in Hyattsville, Maryland, October 2011.
Introduction: Aaron Maitland, PhD (NCHS)
Slides are at: daob.org/media/homepage/files/Oberski-NCHS-2011.zip
It is well known that design characteristics of survey questions, such as the number of categories, full versus partial labeling of answer scales, and the linguistic complexity of the request, can influence the responses obtained. Although each question’s design must be tailored to the intended measure, there is also evidence that some question designs are better than others in general (Dijkstra & van der Zouwen 1982, Alwin & Krosnick 1991, Alwin 2007).
I report on the findings of several large cross-national surveys where the response reliability and validity of 3011 questions could be estimated from built-in Multitrait-Multimethod (MTMM) experiments. For each of the 3011 questions analyzed, many design characteristics were coded by a team of coders. These codes for design characteristics were then related to the estimated reliabilities and validities in a predictive meta-analysis (see Saris & Gallhofer 2007 for an early analysis of a much smaller dataset).
I built the predictive meta-analysis and coding scheme into an online web application called “Survey Quality Predictor” (SQP2). SQP2 provides a forecast of a question’s reliability based on its design characteristics.
In the talk I will discuss the approach taken to estimate the reliability and validity of survey questions, present some results of the predictive meta-analysis, and demonstrate the alpha version of the new web application SQP2. The demonstration shows how a given survey question may be coded on its design characteristics to obtain an estimate of its reliability and internal validity.
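The reliability and validity estimates behind this meta-analysis come from MTMM experiments, which are typically analyzed with a confirmatory factor model in which each observed measure loads on a trait factor and a method factor. As a rough illustration only, a classic MTMM specification for three traits measured by three methods might look as follows in lavaan model syntax; the variable names (`y11` = trait 1 measured with method 1, and so on) are hypothetical, and this is not the exact model used in SQP2.

```r
# Hypothetical 3-trait, 3-method MTMM model in lavaan syntax
mtmm_model <- '
  # Trait factors
  T1 =~ y11 + y12 + y13
  T2 =~ y21 + y22 + y23
  T3 =~ y31 + y32 + y33
  # Method factors, with loadings fixed to 1
  M1 =~ 1*y11 + 1*y21 + 1*y31
  M2 =~ 1*y12 + 1*y22 + 1*y32
  M3 =~ 1*y13 + 1*y23 + 1*y33
  # Traits uncorrelated with methods; methods mutually uncorrelated
  T1 + T2 + T3 ~~ 0*M1 + 0*M2 + 0*M3
  M1 ~~ 0*M2 + 0*M3
  M2 ~~ 0*M3
'
# Given suitable data, one would fit this with, e.g.,
# fit <- lavaan::cfa(mtmm_model, data = dat)
```

In models of this kind, a question’s reliability relates to the proportion of observed variance not due to random error, and its validity to the trait variance relative to the method variance.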