Do not use what I am about to teach you

I am gearing up to teach Structural Equation Modeling this fall term. (We are on quarters, so we start late — our first day of classes is next Monday.)

Here’s the syllabus. (pdf)

I’ve taught this course a bunch of times now, and each time I teach it I add more and more material on causal inference. In part it’s a reaction to my own ongoing education and evolving thinking about causation, and in part it’s from seeing a lot of empirical work that makes what I think are poorly supported causal inferences. (Not just articles that use SEM either.)

Last time I taught SEM, I wondered if I was heaping on so many warnings and caveats that the message started to veer into, “Don’t use SEM.” I hope that is not the case. SEM is a powerful tool when used well. I actually want the discussion of causal inference to help my students think critically about all kinds of designs and analyses. Even people who only run randomized experiments could benefit from a little more depth than the sophomore-year slogan that seems to be all some researchers (AHEM, Reviewer B) have been taught about causation.

A dubious textbook marketing proposal

I got an email the other day:

*****

Dear Professor Srivastava,

My name is [NAME] and I am a consultant working with the [PUBLISHING COMPANY THAT YOU HAVE ALMOST CERTAINLY HEARD OF] team on the new textbook, [TEXTBOOK], by [AUTHOR]. I am emailing to see if you would be interested in class testing a chapter from this new textbook.  In exchange for your class test, [PUBLISHER] will give you a one year membership to the APS as a stipend for your help. This is a $194 value.

If you teach the [COURSE THAT I DON'T ACTUALLY TEACH] course, please read on.

[PUBLISHER] is looking for instructors to class test either of the following chapters:

    Chapter 3: [SOMETHING ABOUT THE BRAIN]
    Chapter 8: [SOMETHING ABOUT THE MIND]

You can integrate the chapter you select into your course as you see fit – we will ask you and your students to fill out a very brief online survey after the class test.

[AUTHOR] is [IMPRESSIVE-SOUNDING LIST OF AWARDS AND CREDENTIALS]

If you would like to be considered for this class test, please click the following link and sign up for the project: [LINK]

This is a terrific way for you to learn about an exciting new textbook for the [COURSE THAT I DON'T ACTUALLY TEACH] course and see if it is a good fit for you and your students.  

I look forward to hearing from you.

[NAME]

Consultant for [PUBLISHER]

*****

This sounds ethically problematic to me, for at least two reasons:

1. It is a conflict of interest. My students are paying tuition money to my employer, and my employer is paying a salary to me, to provide a high-quality education. If I choose course materials based on outside financial compensation rather than on what I think is best for their education, that is a conflict of interest.

2. My students would be forced to participate in a marketing study without their consent. In response to my query, the consultant said the students would not be paid. But compensation or no, I can see no practical way to incorporate these materials into the course and still allow students to fully opt out. Even if students choose not to fill out the survey, it is still shaping the content of their course.

I suppose I could make the test readings optional, spend no classroom time on them, base no assignments or test questions on them, and fully disclose the arrangement to my students. But my experience of college students and non-required reading assignments tells me that exactly nobody would do the reading or fill out the survey, unless they thought it would curry favor with me (so maybe the disclosure is a bad idea). I don’t imagine that is what the consultant has in mind.

It is possible that I have misconstrued an important part of this invitation. So I have offered the emailer a chance to write a response, and if he does I will post it. I’ve also decided to redact the identifying details. I realize that lowers the probability of getting a response, but my purpose is to make it known that this kind of thing goes on — not to embarrass the specific parties involved.

Is it still a bad idea for psychology majors to rent their intro textbook?

Inside Higher Ed reports that the number of students who rent textbooks is increasing. Interestingly, e-books have not caught on — most students are still using printed textbooks (though iPads might change that).

When I teach intro, I have always suggested to my students that if they are going to major in psychology, it is a good idea to purchase and keep their intro textbook. My argument has been that it will be a good reference for their upper-division classes, which might assume that they already know certain concepts. For example, when I teach an upper-division class in motivation and emotion, I assume that my students understand classical and operant conditioning (and I tell them in the syllabus that they should go back to their intro textbook and review the relevant sections).

A downside of this advice is that textbooks are very expensive. Renting a book, or selling one on the used market after the term ends, is a way for students to reduce costs.

Anyway, what this got me wondering is whether it’s still helpful or necessary for students to keep their intro textbooks. Is there enough good info on the internet now that they could just google whatever topics they need to review? A few years ago I looked around on the web for a well-written, introductory-level account of classical conditioning and wasn’t impressed with what I found. I still don’t think I’d assign the current Wikipedia entry for classical conditioning as a review. But with the APS Wikipedia project, for example, maybe things will get better soon.

I remember finding my intro textbook especially helpful when I studied for the psychology GRE, but not many undergrads will go on to do that. Next time I teach an upper-division class I’ll probably ask my students how much use they’ve gotten out of their intro text afterward.

Prepping for SEM

I’m teaching the first section of a structural equation modeling class tomorrow morning. This is the 3rd time I’m teaching the course, and I find that the more times I teach it, the less traditional SEM I actually cover. I’m dedicating quite a bit of the first week to discussing principles of causal inference, spending the second week re-introducing regression as a modeling framework (rather than a toolbox statistical test), and returning to causal inference later when we talk about path analysis and mediation (including assigning a formidable critique by John Bullock et al. coming out soon in JPSP).

The reason I’m moving in that direction is that I’ve found that a lot of students want to rush into questionable uses of SEM without understanding what they’re getting into. I’m probably guilty of having done that, and I’ll probably do it again someday, but I’d like to think I’m learning to be more cautious about the kinds of inferences I’m willing to make. To people who don’t know better, SEM often seems like magical fairy dust that you can sprinkle on cross-sectional observational data to turn it into something causally conclusive. I’ve probably been pretty far on the permissive end of the spectrum that Andrew Gelman talks about, in part because I think experimental social psychology sometimes overemphasizes internal validity to the exclusion of external validity (and I’m not talking about the special situations that Mook gets over-cited for). But I want to instill an appropriate level of caution.

BTW, I just came across this quote from Donald Campbell and William Shadish: “When it comes to causal inference from quasi-experiments, design rules, not statistics.” I’d considered writing “IT’S THE DESIGN, STUPID” on the board tomorrow morning, but they probably said it nicer.

Learning styles and education: good practice requires good science

Cedar Riener has a terrific article on learning styles and cognitive science in the latest Teacher Magazine. The piece, Learning Styles: What’s Being Debunked, concerns Hal Pashler and colleagues’ recent review of the lack of evidence for learning styles, which was published in Psychological Science in the Public Interest and which I’ve talked about before.

Cedar’s piece is a rebuttal to a critique [subscription required] published in Teacher. In it he does several important things. First, he clarifies what the theory of multiple learning styles is, and he makes clear how that theory is different from other perspectives on individual differences in how students learn (such as theories that posit multiple ability domains, or student diversity based on cultural background). He restates Pashler et al.’s central arguments and findings — in short, that there is no empirical evidence for the existence of multiple learning styles.

Second, he discusses the real costs of building one’s teaching practice around a theory of learning styles. Teachers have finite time and resources. If they focus their efforts on teaching the same content in multiple sensory modalities (as learning-styles advocates tell them they must), they will necessarily have less time and energy to do other things that might have real benefits for students.

Third, Cedar makes a broader case for the critical role that cognitive science can and should play in shaping classroom practices. The critique he is responding to is disdainful of science, preferring an individual teacher’s idiosyncratic observations and pet theories over practices supported by real evidence. Educators need to embrace the science of learning; but Cedar also calls psychologists to task for not doing a better job of speaking to policymakers and practitioners:

We must also dispel myths, and we in psychology have a larger set of myths to dispel than others. When these myths exist, they are corrosive to science, because while seeming to represent science (“well, it says it’s a theory”) they do not provide the measurable, reliable results that science demands. These myths are perpetuating identity theft of science, calling themselves science and wrecking havoc on our credit scores, yet many scientists don’t connect the bankruptcy of public trust in science with the myths that we let roam freely… As scientists we must take greater efforts to rein in this misapplication of science.

In this vein, I’d say psychology has an important but difficult task ahead of it. If you look at the applied domain where psychology has traditionally been the most involved — clinical treatment of mental disorders — the shift toward evidence-based treatment has been slow, though it is finally picking up momentum and having real benefits. Hooray for those like Cedar, Hal Pashler, and Daniel Willingham who are pushing for the same in educational practice.

UPDATE: If you want to read Heather Wolpert-Gawron’s critique (the one that inspired Cedar’s article in response), you can read it on her blog, no subscription required, at TweenTeacher.com.

Rethinking intro to psych

Inside Higher Ed has a really interesting article, Rethinking Science Education, about how some universities are trying to break the mold of the traditional intro-to-a-science course. From the article:

Too many college students are introduced to science through survey courses that consist of facts “often taught as a laundry list and from a historical perspective without much effort to explain their relevance to modern problems.” Only science students with “the persistence of Sisyphus and the patience of Job” will reach the point where they can engage in the kind of science that excited them in the first place, she said.

This is exactly how Intro to Psych is taught pretty much everywhere — as a laundry list of topics and findings, usually old ones. The scientific method is presented didactically as another topic in the list (usually the first one), rather than being woven into the daily experience of the class.

It’s a problem that’s easy to point out, but hard to solve. You almost couldn’t do it as a single instructor working within a traditional curriculum. Our majors take a 4-course sequence: 2 terms of intro, then statistics, then research methods. You’d essentially need to flip that around — start with a course called “The Process of Scientific Discovery in Psychology” and have students start collecting and analyzing data before they’ve even learned most of the traditional Intro topics. Such an approach is described in the article:

One approach to breaking out of this pattern, she said, is to create seminars in which first-year students dive right into science — without spending years memorizing facts. She described a seminar — “The Role of Asymmetry in Development” — that she led for Princeton freshmen in her pre-presidential days.

She started the seminar by asking students “one of the most fundamental questions in developmental biology: how can you create asymmetry in a fertilized egg or a stem cell so that after a single cell division you have two daughter cells that are different from one another?” Students had to discuss their ideas without consulting texts or other sources. Tilghman said that students can in fact engage in such discussions and that in the process, they learn that they can “invent hypotheses themselves.”

Would this work in psychology? I honestly don’t know. One of the big challenges in learning psychology — which generally isn’t an issue for biology or physics or chemistry — is the curse of prior knowledge. Students come to the class with an entire lifetime’s worth of naive theories about human behavior. Intro students wouldn’t invent hypotheses out of nowhere — they’d almost certainly recapitulate cultural wisdom, introspective projections, stereotypes, etc. Maybe that would be a problem. Or maybe it would be a tremendous benefit — what better way to start off learning psychology than to have some of your preconceptions shattered by data that you’ve collected yourself?

Do learning styles really exist? Pashler et al. say no

Do different people have different learning styles? It has become almost an article of faith among educators and students that the answer is yes, in large part due to the work of Howard Gardner (who recently went so far as to suggest that computerized assessment of learning styles may someday render traditional classroom teaching obsolete).

But a new review by Hal Pashler and colleagues suggests otherwise. They find ample evidence that people believe they have different learning styles — but almost no evidence that such styles actually exist.

When I first encountered Gardner’s theory of multiple intelligences as an undergrad, I found it fascinating. But I’ll admit that the more I teach, the more I’ve become skeptical when people invoke it. In principle it could lead to an optimistic, proactive attitude about learning: if a student isn’t making progress, let’s try teaching and learning in another modality. But in my experience, people invoke learning styles to almost the opposite effect. “I [or you] have a different learning style” has 2 problems with it. One, it’s an attributional “out” for somebody who isn’t doing well in class — it’s kind of a socially acceptable way of excusing poor performance by both teacher and student. And two, it’s an entity-theorist explanation (in the Carol Dweck sense) that can lead students to disengage from a class.

But skepticism about how people invoke it isn’t as deep as skepticism about the very existence of the phenomenon, which is where Pashler et al. are aiming. They acknowledge something well known among intelligence researchers, that there are subdomains of intellectual ability — e.g., in comparing two people with the same general IQ, one might be better at verbal tasks and the other better at visual-spatial tasks. But that’s about ability — Person A is better at one thing and Person B is better at another. Learning styles suggest that Persons A and B could both be good at the same thing if it was only presented to each in a custom-tailored way. Pashler et al. call this the “meshing hypothesis” and they say that well-designed, controlled studies find no support for it.

I don’t think this is the death-knell for multimodal teaching. When I teach statistics, I try to present each concept in as many modes as possible — a verbally narrated explanation, a visual depiction, a formal-symbolic representation (i.e., words, pictures, and equations). I still think that is a good way to teach. But the surviving rationale is that any one student will benefit from seeing the same underlying concept represented 3 different ways — not because the 3 modalities will reach 3 different kinds of students.

Of course, I’m sure this won’t be the last word. I expect there will be a vigorous response from Gardner and others. Stay tuned.

UPDATE: In re-reading this post, I realized I should probably clarify my references to Gardner. Gardner’s theory of multiple intelligences is centrally about abilities, not learning styles; in that sense, it is not directly challenged by this research. However, I think Gardner is relevant for two reasons. One, I think a lot of people who discuss learning styles look to him as a role model and a leader. Multiple intelligences is often mentioned in conjunction with learning styles, and they both fall under a larger umbrella of proposing that we need to respect and work around cognitive diversity. Two, Gardner himself has discussed the idea that different students learn in different ways — not just that different people are good at different things. So even though MI theory is more about abilities, I think Gardner is an important influence on a set of related ideas.

A student’s perspective on PowerPoint lectures

A student blogger who goes by Carolyn Blogs has an interesting entry on PowerPoint lectures from the perspective of someone taking the class:

Recently I came to the conclusion that I do not learn well from classes in which the lectures are based on PowerPoint presentations… Professors who use PowerPoint tend to present topics very quickly when they don’t have to do anything but talk. If every example and every diagram is on the screen, there isn’t much time for me to take notes on the subject of each slide. Lectures aided by chalkboard visuals are easier to take notes from because I can write what the professor writes on the board at the same time. Also, because there is usually more chalkboard space than screen space, if I am behind on note-taking, the visual will probably still be on the board for me to copy a few minutes later. A lot of professors try to solve this problem by handing out the lecture slides before class, or by posting them online. While this is great for a lot of students, it doesn’t work for me because I learn best and am most engaged if I have to take notes as if my grade depended on having a great record of the class and I would never see the material again. In classes with handouts, I tend to zone out and have to work harder to pay attention. Studies have shown [pdf] that taking high-quality notes improves organic memory: I rarely use my notes after the lecture because the act of physically writing information down helps me remember more of what goes on in class.

A few years ago I started phasing out PowerPoint from my upper-division classes (I never used it for grad classes). Carolyn hits on pretty much all the major reasons.

Teaching with PowerPoint has a different pace and structure than teaching with chalk or markers. It’s not just about overall fast vs. slow (though that’s part of it), but about when you go fast and when you go slow. When I use the board, I write down the major points, terms, definitions, etc. That forces me to slow down at exactly the moment when I’m making a big point and students should be attending closely. Once the critical information is on the board, I can elaborate, discuss with the class, ask questions, etc. while it hangs up there behind me for students to refer to. And since writing slows me down, I don’t give as much emphasis to relatively minor points — giving students an additional cue as to what’s more and less important. (“Don’t ignore this completely, but it’s not as central as what I said earlier.”) You can reproduce this kind of pacing and structure with PowerPoint, but in practice it’s difficult to do during a live performance in front of a classroom. You have to write your presentation with delivery (not just content) in mind. Otherwise it’s just too easy to blow through major and minor points at a constant pace.

Another point that she makes… I still use PowerPoint in my big introductory classes (though I make my own slides from scratch, use animation to help regulate my delivery, and try to avoid the mind-numbing bullety templates). I always have a few students ask me to post the notes before class. I don’t — I post them after class, but honestly, I have sometimes wondered if I’d be better off not posting them at all. Carolyn modestly writes “while [posting notes] is great for a lot of students, it doesn’t work for me…” but I actually think this describes most students. A lot of students misread their internal cues — if it feels like they are expending a lot of effort then they think they must be struggling with the material. Actually, though, if the professor is presenting challenging material, then you shouldn’t feel relaxed — relaxation is a sign that you’re probably thinking superficially or zoning out, not that you’ve quickly mastered the material.

I also found it impressive that Carolyn reached this conclusion on her own. Because frankly, it’s fundamentally very difficult to introspect into your own learning processes. A few years back, when I started moving away from PowerPoint, I got feedback on my student evaluations from people who wanted more PowerPoint. When I talked with students who felt that way, they thought they’d be able to focus more on the material if they didn’t have to bother taking notes. I realized that reflects a fundamental misunderstanding of what note-taking does for you. I’ve been getting less of that feedback lately — maybe because I’ve gotten better at using the board, or maybe because recent students have been around PowerPoint longer and see its limitations more clearly.

Here’s eight grand to adopt our textbook

I got the following email this morning. Note the part about the test marketing stipends:

***

Dear Introductory Psychology Professor:

[Redacted] Press was created as a faculty venture six years ago focusing solely on interactive low cost digital text packages with free printed texts. This concept has been widely accepted by faculty and students alike. The rising price of textbooks is well known to college faculty, students, and even government agencies.  Our digital textbooks offer a low cost alternative to traditional expensive textbooks.
We would like to introduce you to our Introductory Psychology low cost interactive package including:

    A $40 digital interactive text with embedded videos and audio and words with internet links — a better way for today’s students
    A free printed text called a student text supplement
    Access to a password protected website with interactive updates and materials
    A test marketing program with stipends up to $8,000 for individual professors and up to $15,000 or more for departments
    An online test center for each chapter of the interactive text, plus instructor’s manual
    Test bank questions to upload to any online platform such as Blackboard
    Technical and consulting support — 24/7
We invite you to take a narrated tour of [Redacted] Press before you review the interactive Introductory Psychology text. It is a brief tour of [Redacted] Press and interactive texts and will enable you to better understand the benefits of our program within minutes. You start the tour by going to: [URL redacted] (you can cut and paste this URL directly into your browser). This tour will demonstrate the interactive elements of our texts and give you an opportunity to review the [Redacted] interactive Introductory Psychology text at your leisure.

After you have taken the tour, if you email me your mailing address and the number of students in your upcoming classes, we will send you the digital text and brochure on the Introductory Psychology package and tailor a test marketing stipend program for you and even for your department.

We are confident you will see the numerous advantages of moving towards digital, interactive texts and will help us faculty move students into the digital age of education.

Thank you in advance for your time and interest,

***

I went to the website and looked at the text briefly, and I wouldn’t ask a student to pay $40 for it. It’s just not that good, and for a few bucks more, a student can get an ebook edition of a name-brand textbook.

But more to the point, is it just me, or does that “test marketing program” sound like a pretext for a kickback? Awfully close to the consulting fees and conference junkets that doctors and pharmaceutical companies are always getting in trouble for.

(Of course, I’m also suspicious of the numbers. At $40 a pop, you’d need to sell 200 ebooks just to cover the $8000 kickback stipend.)

Evidence-based policy

I’m all for basing social policy on good social science evidence. But as Dean Dad writes:

We have anecdotal evidence that suggests that students who actually take math for all four years of high school do better in math here than those who don’t. We also have anecdotal evidence that bears crap in the woods. Why the hell do the high schools only require two years of math?

I say we can bypass the regression analysis on this one.