Conditional indirect effect models

ENT5587B - Research Design & Theory Testing II

Brian S. Anderson, Ph.D.
Assistant Professor
Department of Global Entrepreneurship & Innovation
andersonbri@umkc.edu


© 2017 Brian S. Anderson

  • Mediation assessment debrief
  • Paper discussion
  • Data collection discussion
  • Conditional indirect effect models
  • Lab 6 April – Conditional indirect effect assessment
  • Schedule change – 14 April Seminar & Lab

Spoiler alert…

I really don’t like this topic. Mostly because I’m not convinced that we’ve found a way to causally model a conditional indirect effect (moderated mediation/mediated moderation).

That said, you see them more and more in the literature, and you are seeing increased interest in the methods literature on the topic.

So, I’m hopeful that we’ll make these models better, because they are kind of cool :)

Conditional indirect effect model

There are a lot of different ways to depict conditional indirect effect models. The above is the simplest though, and will be the one that we focus on.

Once again, we can write this model with two equations…

Eq 1: \(m = \alpha_1 + \beta_1 x + \epsilon_1\)

Eq 2: \(y = \alpha_2 + \beta_2 x + \beta_3 m + \beta_4 x m + \epsilon_2\)

Wait a second…

What happened to the ‘direct’ effect model?

Well done—if we had tested just the direct effect model, it would be misspecified.

As with all types of moderation models, there is really just one hypothesis to test…

The nature (or strength, or degree) of the indirect effect between \(x\) and \(y\) transmitted through \(m\) changes as a function of the level of \(m\).

The problem is, this model violates a key assumption of mediation.

For mediation to hold, there can be no interaction between \(x\) and the mediator \(m\).

Wait, what?

Yes, you heard me right.

Under the product of coefficients method for deriving the indirect effect, \(a \times b\), the model makes the assumption that the indirect effect is stable across values of both \(x\) and \(m\).

Kline (2015: 209) has a great discussion on it, but the gist is that “In mediation analysis, this means that the effect of X on Y does not depend on the level of M just as the effect of M on Y has nothing to do with X.”

Violating the homogeneity assumption precludes the ability to draw causal inference about the indirect effect.

Now, here’s the hard part…why?

Remember, no causation without manipulation.

The mean of the disturbance term for \(y\) will not be zero across the varying levels of \(m\), because the direct effect—always present in our equation—changes as a function of the level of \(m\).
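To see this concretely, substitute Eq 1 into Eq 2 (this is just algebra on the two equations above, using the subscripted coefficients):

\[
\begin{aligned}
y &= \alpha_2 + \beta_2 x + \beta_3 m + \beta_4 x m + \epsilon_2 \\
  &= (\alpha_2 + \beta_3\alpha_1) + (\beta_2 + \beta_3\beta_1 + \beta_4\alpha_1)x + \beta_4\beta_1 x^2 + (\beta_3 + \beta_4 x)\epsilon_1 + \epsilon_2
\end{aligned}
\]

The composite disturbance, \((\beta_3 + \beta_4 x)\epsilon_1 + \epsilon_2\), varies with \(x\), and the coefficient on \(x\) is no longer a clean sum of a direct piece and an \(a \times b\) piece.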

This means that we cannot decompose the total effect into its singular direct and indirect effect components in the presence of \(m\) also being a moderator.

Notably, many have argued that it’s simply not realistic to assume that there is no interaction effect between \(x\) and \(m\). I agree—that’s a tough assumption to make, particularly in the social sciences.

There has been some recent work, by Imai and colleagues in particular, on models that relax this assumption. The mediation package also implements a test of the interaction assumption, predicated on the original product of coefficients method for identifying an indirect effect.

We’re not going to go into a lot of that though.

Why you ask? Well, consider this statement from a recent paper on moderated-mediation…

“This article also does not take into account the emerging perspective on causal inferences or other approaches such as instrumental variable approach. While calculation of \(\rho\) may flag issues with the causal model at the data analysis stage, it is important to take into account endogeneity concerns during study design.”

In my opinion, moderated mediation falls into the camp where statistical theory (and mathematics) progressed faster than our understanding—much less due care for—causal inference in these models.

When you layer in easy-to-use software, cough cough cough, and simplistic macros, cough cough cough, the result is a proliferation of papers that add a lot of noise to the literature.

So rather than going through the plethora of other ways in which moderation can integrate with mediation, I want to focus on one design where, I think, there is the most reasonable chance at recovering consistent parameter estimates.

Moderated mediation

This depiction shows up in a couple of papers on conditional indirect effects. Let’s walk through the assumptions.

The most important assumption is that the moderator, \(w\), influences the \(a\) path and the \(b\) path in the model in the same direction and with the same magnitude.

Why must this assumption be true to draw causal inference of the indirect effect?

We also assume that \(w\) is exogenous to \(m\) and to \(y\).

Why that assumption?

We also assume that \(x\) does not causally relate to \(w\).

And why this assumption?

If these assumptions hold, we retain the interpretation of \(ab\) as the causal indirect effect of \(x\) on \(y\) transmitted through \(m\), at varying levels of \(w\).
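If it helps, one common way to write this model out (the notation here is mine, not from the figure) lets \(w\) shift both the \(a\) and \(b\) paths:

\[
\begin{aligned}
m &= \alpha_1 + (a + a_w w)x + \epsilon_1 \\
y &= \alpha_2 + c'x + (b + b_w w)m + \epsilon_2
\end{aligned}
\]

The indirect effect at a given level of \(w\) is then \((a + a_w w)(b + b_w w)\), and the same direction, same magnitude assumption above amounts, roughly, to \(a_w = b_w\).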

Oh yeah, the same rules about ‘the forbidden regression’ still apply when it comes to \(w\), so if you can’t make the assumption that \(w\) is exogenous, you’ll need instruments for it; otherwise, everything else is likely to be wrong.

Snap.

So, we need to think about potential \(w\)’s in terms of variables likely to be a priori exogenous.

Age, gender, and group membership (although even then, you have to be careful) are possibilities. Again though, in any research question, making the assumption that \(w\) is exogenous requires careful understanding of the phenomenon at hand.

Side note…

Seriously, there are a lot of other moderated mediation possibilities (e.g., first-stage only, second-stage only, etc.). Why do we just care about the \(w \rightarrow ab\) model?

Well, think about how we generally determine the strength (and hence significance) of an indirect effect, whether by the product of coefficients or the difference of coefficients. What would it mean, conceptually, if, say, the \(a\) path changed as a function of the level of \(w\) but the \(b\) path did not?

Yeah, I can’t make sense of that either.

Oh, it happens in the literature a lot. I just don’t think it carries any substantive—theoretical or practical—meaning.

Ok, back to our model and the assumption that \(w\) is exogenous.

Let’s revisit our model from last week on causal mediation, and assume that we have an experimental design where we’ve randomly assigned participants to a treatment, \(x\), but measured \(m\) and measured \(y\).

What type of mediation model are we testing?

Good!

Now, given that we’re using measurement-of-mediation, what must we do to \(m\)?

Love it!

That’s right, we need instruments, and we’ll use a 2SLS approach.

Ok, let’s say that we’re interested in determining whether a person who perceives him- or herself to be lucky is more likely to invest in a high-risk new venture. We believe that the mechanism connecting luck (\(x\)) to investing (\(y\)) is entrepreneurial self-efficacy (\(m\)).

But we think that the indirect effect of luck on investing that passes through self-efficacy varies between men and women.

Side note…

Come to think of it, this strikes me as a fun experiment. Anybody interested?

Ok, back to our model.

We’re going to experimentally assign a luck condition. We could, for example, make people believe that they’ve accurately picked stock market winners and losers based on some random criteria. So \(x\) = 1 is our treatment condition for believing that one is lucky in business.

We then use a standard entrepreneurial self-efficacy scale after administering the manipulation. If our theory is correct, people who think they are lucky in business should rate themselves as having demonstrably higher belief in their skills as an entrepreneur.

We then present the participant with a sample business plan of a high-risk new venture, and the opportunity to invest a portion of his/her savings into the business; \(y\) is a continuous ratio variable.

Again, our theory suggests that higher self-efficacy should result in participants investing a larger portion of their savings in the venture.

Based on prior literature, we expect, however, that there are substantive differences in our proposed indirect effect as a function of gender.

Specifically, we anticipate that the observed indirect effect will be higher (stronger) for men than it is for women.

Modeling this actually isn’t all that complicated. We’re going to borrow from our playbook on moderation here and simply make this a group comparison—\(w\) takes on a value of 0 (male) or 1 (female).

Let’s get some data…

library(tidyverse)
# Read the course data from the web and convert to a plain data frame
my.ds <- read_csv("http://a.web.umkc.edu/andersonbri/ConditionalIndirect.csv")
my.df <- as.data.frame(my.ds)
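If the course URL is ever unavailable, here is a minimal simulation sketch that produces data in the same shape (columns x, m, y, w, I1, and I2, matching the lavaan syntax below); the coefficient values are made up purely for illustration and are not the values behind the course data.

# Illustrative simulation only; not the course data
set.seed(1234)
n  <- 500
w  <- rbinom(n, 1, 0.5)                  # 0 = male, 1 = female
x  <- rbinom(n, 1, 0.5)                  # randomly assigned luck condition
I1 <- rnorm(n)                           # instruments for the measured mediator
I2 <- rnorm(n)
u  <- rnorm(n)                           # shared disturbance, so m is endogenous to y
m  <- (0.6 - 0.4*w)*x + 0.5*I1 + 0.5*I2 + u + rnorm(n)   # a path weaker when w = 1
y  <- (0.5 - 0.2*w)*m + 0.5*u + rnorm(n)                  # b path weaker when w = 1
sim.df <- data.frame(x, m, y, w, I1, I2)

Everything that follows would run the same way with data = sim.df in place of my.df.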

Now let’s specify our model (loading lavaan first). Note the group = "w" argument in the model-fitting call. This tells lavaan to estimate the model separately in each group defined by \(w\). Also note the c() labels that give the parameters different names in each group, and don’t forget robust standard errors!

library(lavaan)
mediation.iv.model <- 'm ~ c(a1,a0) * x  # a paths
                       y ~ c(b1,b0) * m  # b paths

                       # Instrumental variable paths
                       m ~ I1 + I2

                       # Error term covariance (m is a measured mediator)
                       m ~~ y

                       # Indirect effects by group
                       ab1 := a1*b1
                       ab0 := a0*b0'
mediation.iv.fit <- sem(mediation.iv.model, data = my.df,
                        group = "w", se = "robust.sem")

You can, and should, run summary(mediation.iv.fit) after the preceding code. But it makes for a heck of an output. Let’s do a different summary instead…

library(tidyverse)
display.results <- data.frame(parameterEstimates(mediation.iv.fit))
display.results <- display.results %>%
  filter(label != '') %>%
  select(label, est, se, pvalue, ci.lower, ci.upper) %>%
  mutate(across(c(est, se, pvalue, ci.lower, ci.upper), ~ round(.x, 3)))
display.results
##   label   est    se pvalue ci.lower ci.upper
## 1    a1 0.186 0.093  0.045    0.004    0.369
## 2    b1 0.407 0.102  0.000    0.207    0.606
## 3    a0 0.586 0.090  0.000    0.409    0.762
## 4    b0 0.398 0.098  0.000    0.207    0.589
## 5   ab1 0.076 0.042  0.070   -0.006    0.158
## 6   ab0 0.233 0.067  0.000    0.102    0.364

What we’re looking for are the two \(ab\) paths. The \(ab1\) path is the indirect effect when \(w\) = 1 (women in our example). The \(ab0\) path is the indirect effect when \(w\) = 0 (men).

For women, we observed a statistically marginal indirect effect of .076 (p = .07). For men, the indirect effect is stronger (both effect size and statistical significance) at .233 (p < .001).

Note that these effects are average indirect effects, assuming that \(w\) is exogenous, that the \(ab\) effect is linear, and that \(w\) impacts \(a\) and \(b\) in the same way and in the same magnitude.

Yes, it is easy to mathematically estimate an indirect effect conditional on a continuous \(w\), and the bane of the statistics world makes it easy to do this.

Recall that when we’re looking at continuous moderators, we’re really interested in marginal effects, because the effect of \(x\) on \(y\) changes as a function of a unit change in the moderator.

Go ahead, tell me what the marginal effect of a continuous moderator is in relation to an indirect effect. No really, go ahead.

I’m waiting.

Ok, that wasn’t fair, because I have no idea how to make sense of it either. It’s one reason for the adage that just because you can multiply two things together does not mean that you should.

So let’s get back to our dichotomous \(w\).

The two effects seem different, but what we really need to know is whether they are statistically different from each other.

The easiest way to do this is with a Wald test of the equivalency of coefficients.

In the lavaan package, there is a built-in function, lavTestWald(), that does this.

lavTestWald(mediation.iv.fit, constraints = "ab1 == ab0")
## $stat
## [1] 3.969213
## 
## $df
## [1] 1
## 
## $p.value
## [1] 0.04633942
## 
## $se
## [1] "robust.sem"

The null hypothesis is that the difference between \(ab1\) and \(ab0\) is zero. The Wald test generates a \(\chi^2\) statistic and associated p-value. Rejecting the null suggests that the two estimates are statistically different from each other, which, in this case, they are.
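As an aside, an equivalent way to get at the same question is to define the difference as a derived parameter in the model syntax itself, so that parameterEstimates() reports its estimate, standard error, and confidence interval directly. The sketch below reuses the labels from our model; the diff name and the paste() approach are mine, not part of the assessment.

# Sketch: append a defined difference parameter to the same model syntax
mediation.diff.model <- paste(mediation.iv.model, 'diff := ab1 - ab0', sep = '\n')
mediation.diff.fit <- sem(mediation.diff.model, data = my.df,
                          group = "w", se = "robust.sem")
parameterEstimates(mediation.diff.fit) %>%
  filter(label == "diff") %>%
  select(label, est, se, pvalue, ci.lower, ci.upper)

The z-test on diff is a Wald-type test, so it should tell the same story as lavTestWald() above.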

So in our example, we conclude that the indirect effect of luck on investing that passes through self-efficacy varies between men and women, with men exhibiting a statistically higher indirect effect than women, and women exhibiting no statistically significant indirect effect.

We could also interpret this as the indirect effect of luck on investing through self-efficacy being present only among men, at least in this (made up) sample.

Like I said, I think conditional indirect effects are really very cool. However, as with all of our research designs and statistical tools, simpler is generally better, because you are less likely to screw something up.

Test a simple model rigorously well, replicate yourself, and minimize—or better yet completely eliminate—researcher degrees of freedom.

Wrap-up.

Lab 6 April – Conditional indirect effect assessment

Seminar 14 April – The basics of panel data

Lab 14 April – Panel data assessment