Do not use what I am about to teach you

I am gearing up to teach Structural Equation Modeling this fall term. (We are on quarters, so we start late — our first day of classes is next Monday.)

Here’s the syllabus. (pdf)

I’ve taught this course a bunch of times now, and each time I teach it I add more and more material on causal inference. In part it’s a reaction to my own ongoing education and evolving thinking about causation, and in part it’s from seeing a lot of empirical work that makes what I think are poorly supported causal inferences. (Not just articles that use SEM either.)

Last time I taught SEM, I wondered if I was heaping on so many warnings and caveats that the message started to veer into, “Don’t use SEM.” I hope that is not the case. SEM is a powerful tool when used well. I actually want the discussion of causal inference to help my students think critically about all kinds of designs and analyses. Even people who only run randomized experiments could benefit from a little more depth than the sophomore-year slogan that seems to be all some researchers (AHEM, Reviewer B) have been taught about causation.

A dubious textbook marketing proposal

I got an email the other day:

*****

Dear Professor Srivastava,

My name is [NAME] and I am a consultant working with the [PUBLISHING COMPANY THAT YOU HAVE ALMOST CERTAINLY HEARD OF] team on the new textbook, [TEXTBOOK], by [AUTHOR]. I am emailing to see if you would be interested in class testing a chapter from this new textbook.  In exchange for your class test, [PUBLISHER] will give you a one year membership to the APS as a stipend for your help. This is a $194 value.

If you teach the [COURSE THAT I DON'T ACTUALLY TEACH] course, please read on.

[PUBLISHER] is looking for instructors to class test either of the following chapters:

    Chapter 3: [SOMETHING ABOUT THE BRAIN]
    Chapter 8: [SOMETHING ABOUT THE MIND]

You can integrate the chapter you select into your course as you see fit – we will ask you and your students to fill out a very brief online survey after the class test.

[AUTHOR] is [IMPRESSIVE-SOUNDING LIST OF AWARDS AND CREDENTIALS]

If you would like to be considered for this class test, please click the following link and sign up for the project: [LINK]

This is a terrific way for you to learn about an exciting new textbook for the [COURSE THAT I DON'T ACTUALLY TEACH] course and see if it is a good fit for you and your students.  

I look forward to hearing from you.

[NAME]

Consultant for [PUBLISHER]

*****

This sounds ethically problematic to me, for at least two reasons:

1. It is a conflict of interest. My students are paying tuition money to my employer, and my employer is paying a salary to me, to provide a high-quality education. If I choose course materials based on outside financial compensation rather than what I think is best for their education, that is a conflict of interest.

2. My students would be forced to participate in a marketing study without their consent. In response to my query, the consultant said the students would not be paid. But compensation or no, I can see no practical way to incorporate these materials into the course and still allow students to fully opt out. Even if students choose not to fill out the survey, it is still shaping the content of their course.

I suppose I could make the test readings optional, spend no classroom time on them, base no assignments or test questions on them, and fully disclose the arrangement to my students. But my experience of college students and non-required reading assignments tells me that exactly nobody would do the reading or fill out the survey, unless they thought it would curry favor with me (so maybe the disclosure is a bad idea). I don’t imagine that is what the consultant has in mind.

It is possible that I have misconstrued an important part of this invitation. So I have invited the emailer to write a response, and if he does I will post it. I’ve also decided to redact the identifying details. I realize that lowers the probability of getting a response, but my purpose is to make it known that this kind of thing goes on — not to embarrass the specific parties involved.

Is it still a bad idea for psychology majors to rent their intro textbook?

Inside Higher Ed reports that the number of students who rent textbooks is increasing. Interestingly, e-books have not caught on — most students are still using printed textbooks (though iPads might change that).

When I teach intro, I have always suggested to my students that if they are going to major in psychology, it is a good idea to purchase and keep their intro textbook. My argument has been that it will be a good reference for their upper-division classes, which might assume that they already know certain concepts. For example, when I teach an upper-division class in motivation and emotion, I assume that my students understand classical and operant conditioning (and I tell them in the syllabus that they should go back to their intro textbook and review the relevant sections).

A downside of this advice is that textbooks are very expensive. Renting a book, or selling one on the used market after the term ends, is a way for students to reduce costs.

Anyway, what this got me wondering is whether it’s still helpful or necessary for students to keep their intro textbooks. Is there enough good info on the internet now that they could just google whatever topics they need to review? A few years ago I looked around on the web for a well-written, introductory-level account of classical conditioning and wasn’t impressed with what I found. I still don’t think I’d assign the current Wikipedia entry for classical conditioning as a review. But with the APS Wikipedia project, for example, maybe things will get better soon.

I remember finding my intro textbook especially helpful when I studied for the psychology GRE, but not many undergrads will go on to do that. Next time I teach an upper-division class I’ll probably ask my students how much use they’ve gotten out of their intro text afterward.

Prepping for SEM

I’m teaching the first section of a structural equation modeling class tomorrow morning. This is the 3rd time I’m teaching the course, and I find that the more times I teach it, the less traditional SEM I actually cover. I’m dedicating quite a bit of the first week to discussing principles of causal inference, spending the second week re-introducing regression as a modeling framework (rather than a toolbox statistical test), and returning to causal inference later when we talk about path analysis and mediation (including assigning a formidable critique by John Bullock et al. coming out soon in JPSP).

The reason I’m moving in that direction is that I’ve found that a lot of students want to rush into questionable uses of SEM without understanding what they’re getting into. I’m probably guilty of having done that, and I’ll probably do it again someday, but I’d like to think I’m learning to be more cautious about the kinds of inferences I’m willing to make. To people who don’t know better, SEM often seems like magical fairy dust that you can sprinkle on cross-sectional observational data to turn it into something causally conclusive. I’ve probably been pretty far on the permissive end of the spectrum that Andrew Gelman talks about, in part because I think experimental social psychology sometimes overemphasizes internal validity to the exclusion of external validity (and I’m not talking about the special situations that Mook gets over-cited for). But I want to instill an appropriate level of caution.
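To make the worry concrete, here is a minimal simulation sketch in Python (with made-up parameter values, not anything from a real study) of the problem that critiques like Bullock et al.’s point to: if an unmeasured confounder drives both the mediator and the outcome, a standard two-regression mediation analysis will happily report a nonzero “indirect effect” even when the mediator has no causal effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: X is randomized, but an unmeasured confounder U
# drives both the mediator M and the outcome Y. M has NO true causal
# effect on Y here, so the true indirect effect of X via M is zero.
x = rng.binomial(1, 0.5, n).astype(float)
u = rng.normal(size=n)
m = 0.5 * x + 1.0 * u + rng.normal(size=n)   # true a-path = 0.5, plus confounding
y = 0.3 * x + 1.0 * u + rng.normal(size=n)   # M does not appear: true b-path = 0

def ols(X, outcome):
    """Least-squares coefficients (first column of X is the intercept)."""
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, x]), m)[1]      # regress M on X
b = ols(np.column_stack([ones, x, m]), y)[2]   # regress Y on X and M

print(f"a-path: {a:.2f}")                    # close to the true 0.5
print(f"b-path: {b:.2f}")                    # biased well away from the true 0
print(f"'indirect effect' a*b: {a*b:.2f}")   # nonzero despite no true mediation
```

No amount of model fitting rescues this: the bias comes from the design (no experimental or other identifying handle on M), which is exactly the “design rules, not statistics” point.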

BTW, I just came across this quote from Donald Campbell and William Shadish: “When it comes to causal inference from quasi-experiments, design rules, not statistics.” I’d considered writing “IT’S THE DESIGN, STUPID” on the board tomorrow morning, but they probably said it nicer.

Learning styles and education: good practice requires good science

Cedar Riener has a terrific article on learning styles and cognitive science in the latest Teacher Magazine. The piece, Learning Styles: What’s Being Debunked, concerns Hal Pashler and colleagues’ recent review of the lack of evidence for learning styles, which was published in Psychological Science in the Public Interest and which I’ve talked about before.

Cedar’s piece is a rebuttal to a critique [subscription required] published in Teacher. In it he does several important things. First, he clarifies what the theory of multiple learning styles is, and he makes clear how that theory is different from other perspectives on individual differences in how students learn (such as theories that posit multiple ability domains, or student diversity based on cultural background). He restates Pashler et al.’s central arguments and findings — in short, that there is zero empirical evidence for the existence of multiple learning styles.

Second, he discusses the real costs of building one’s teaching practice around a theory of learning styles. Teachers have finite time and resources. If they focus their efforts on teaching the same content in multiple sensory modalities (as learning-styles advocates tell them they must), they will necessarily have less time and energy to do other things that might have real benefits for students.

Third, Cedar makes a broader case for the critical role that cognitive science can and should play in shaping classroom practices. The critique he is responding to is disdainful of science, preferring an individual teacher’s idiosyncratic observations and pet theories over practices supported by real evidence. Educators need to embrace the science of learning; but Cedar also calls psychologists to task for not doing a better job of speaking to policymakers and practitioners:

We must also dispel myths, and we in psychology have a larger set of myths to dispel than others. When these myths exist, they are corrosive to science, because while seeming to represent science (“well, it says it’s a theory”) they do not provide the measurable, reliable results that science demands. These myths are perpetuating identity theft of science, calling themselves science and wrecking havoc on our credit scores, yet many scientists don’t connect the bankruptcy of public trust in science with the myths that we let roam freely… As scientists we must take greater efforts to rein in this misapplication of science.

In this vein, I’d say psychology has an important but difficult task ahead of itself. If you look at the applied domain where psychology has traditionally been the most involved — clinical treatment of mental disorders — the shift toward evidence-based treatment has been slow, though it is finally picking up momentum and having real benefits. Hooray for those like Cedar, Hal Pashler, and Daniel Willingham who are pushing for the same in educational practice.

UPDATE: If you want to read Heather Wolpert-Gawron’s critique (the one that inspired Cedar’s article in response), you can read it on her blog, no subscription required, at TweenTeacher.com.

Rethinking intro to psych

Inside Higher Ed has a really interesting article, Rethinking Science Education, about how some universities are trying to break the mold of the traditional intro-to-a-science course. From the article:

Too many college students are introduced to science through survey courses that consist of facts “often taught as a laundry list and from a historical perspective without much effort to explain their relevance to modern problems.” Only science students with “the persistence of Sisyphus and the patience of Job” will reach the point where they can engage in the kind of science that excited them in the first place, she said.

This is exactly how Intro to Psych is taught pretty much everywhere — as a laundry list of topics and findings, usually old ones. The scientific method is presented didactically as another topic in the list (usually the first one), rather than being woven into the daily experience of the class.

It’s a problem that’s easy to point out, but hard to solve. You almost couldn’t do it as a single instructor working within a traditional curriculum. Our majors take a 4-course sequence: 2 terms of intro, then statistics, then research methods. You’d essentially need to flip that around — start with a course called “The Process of Scientific Discovery in Psychology” and have students start collecting and analyzing data before they’ve even learned most of the traditional Intro topics. Such an approach is described in the article:

One approach to breaking out of this pattern, she said, is to create seminars in which first-year students dive right into science — without spending years memorizing facts. She described a seminar — “The Role of Asymmetry in Development” — that she led for Princeton freshmen in her pre-presidential days.

She started the seminar by asking students “one of the most fundamental questions in developmental biology: how can you create asymmetry in a fertilized egg or a stem cell so that after a single cell division you have two daughter cells that are different from one another?” Students had to discuss their ideas without consulting texts or other sources. Tilghman said that students can in fact engage in such discussions and that in the process, they learn that they can “invent hypotheses themselves.”

Would this work in psychology? I honestly don’t know. One of the big challenges in learning psychology — which generally isn’t an issue for biology or physics or chemistry — is the curse of prior knowledge. Students come to the class with an entire lifetime’s worth of naive theories about human behavior. Intro students wouldn’t invent hypotheses out of nowhere — they’d almost certainly recapitulate cultural wisdom, introspective projections, stereotypes, etc. Maybe that would be a problem. Or maybe it would be a tremendous benefit — what better way to start off learning psychology than to have some of your preconceptions shattered by data that you’ve collected yourself?

Do learning styles really exist? Pashler et al. say no

Do different people have different learning styles? It has become almost an article of faith among educators and students that the answer is yes, in large part due to the work of Howard Gardner (who recently went so far as to suggest that computerized assessment of learning styles may someday render traditional classroom teaching obsolete).

But a new review by Hal Pashler and colleagues suggests otherwise. They find ample evidence that people believe they have different learning styles — but almost no evidence that such styles actually exist.

When I first encountered Gardner’s theory of multiple intelligences as an undergrad, I found it fascinating. But I’ll admit that the more I teach, the more I’ve become skeptical when people invoke it. In principle it could lead to an optimistic, proactive attitude about learning: if a student isn’t making progress, let’s try teaching and learning in another modality. But in my experience, people invoke learning styles to almost the opposite effect. “I [or you] have a different learning style” has 2 problems with it. One, it’s an attributional “out” for somebody who isn’t doing well in class — it’s kind of a socially acceptable way of excusing poor performance by both teacher and student. And two, it’s an entity-theorist explanation (in the Carol Dweck sense) that can lead students to disengage from a class.

But skepticism about how people invoke it isn’t as deep as skepticism about the very existence of the phenomenon, which is where Pashler et al. are aiming. They acknowledge something well known among intelligence researchers, that there are subdomains of intellectual ability — e.g., in comparing two people with the same general IQ, one might be better at verbal tasks and the other better at visual-spatial tasks. But that’s about ability — Person A is better at one thing and Person B is better at another. Learning styles suggest that Persons A and B could both be good at the same thing if it were only presented to each in a custom-tailored way. Pashler et al. call this the “meshing hypothesis” and they say that well-designed, controlled studies find no support for it.

I don’t think this is the death-knell for multimodal teaching. When I teach statistics, I try to present each concept in as many modes as possible — a verbally narrated explanation, a visual depiction, a formal-symbolic representation (i.e., words, pictures, and equations). I still think that is a good way to teach. But the surviving rationale is that any one student will benefit from seeing the same underlying concept represented 3 different ways — not because the 3 modalities will reach 3 different kinds of students.

Of course, I’m sure this won’t be the last word. I expect there will be a vigorous response from Gardner and others. Stay tuned.

UPDATE: In re-reading this post, I realized I should probably clarify my references to Gardner. Gardner’s theory of multiple intelligences is centrally about abilities, not learning styles; in that sense, it is not directly challenged by this research. However, I think Gardner is relevant for two reasons. One, I think a lot of people who discuss learning styles look to him as a role model and a leader. Multiple intelligences is often mentioned in conjunction with learning styles, and they both fall under a larger umbrella of proposing that we need to respect and work around cognitive diversity. Two, Gardner himself has discussed the idea that different students learn in different ways — not just that different people are good at different things. So even though MI theory is more about abilities, I think Gardner is an important influence on a set of related ideas.