In this paper I attempt to replicate the results of Schuit and Rogowski (2017); their results replicate well. I then subject their results to sensitivity analysis using the method developed by Blackwell (2014) and find that the ATT appears very insensitive to unmeasured confounding. Finally, I examine a less conservative definition of treatment.
Did the preclearance restriction of Section 5 of the Voting Rights Act make members of Congress more responsive to black interests? Schuit and Rogowski (2017) examine this question, testing whether members of Congress were more responsive to black interests when any part of their district was subject to preclearance. They do so by creating an index, coded 0 to 1, of a member's votes for and against civil rights legislation. They find strong evidence that members of Congress subject to the preclearance restriction of Section 5 are more likely to vote in favor of black interests. Their methodological model is fairly strong: they include fixed effects for each Congress to account for time trends, as well as fixed effects for each state.
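Roughly speaking, the index can be read as the share of a member's civil rights roll-call votes cast in the pro-civil-rights direction (my gloss on the measure, not the authors' exact formula):

\[
\text{civil\_rights}_i \approx \frac{\text{pro-civil-rights votes cast by member } i}{\text{civil rights votes cast by member } i}.
\]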
Schuit and Rogowski's (2017) methods are fairly strong, but they can still be enhanced. Their analysis rests on basic regression: fixed effects, a few interactions, and a placebo test that replaces the civil rights voting index with a foreign policy index to rule out the possibility that Section 5 treatment simply makes a member more liberal in general. Their dataset also includes only a limited number of variables: percent black, Democratic presidential vote, district competitiveness, indicators for Republican and independent members, and a dummy variable for whether the legislator is black. The lack of controls raises the concern that an omitted variable may be driving the results; I test for omitted variable bias using sensitivity analysis to determine whether a confounding variable could be driving the ATT. This is especially concerning because preclearance treatment was not assigned randomly, or even "as-if" randomly, but was instead conditioned on a history of voting restrictions that disfavored minorities.
Additionally, Schuit and Rogowski (2017) use a very conservative definition of treatment: if any part of a member's district is subject to preclearance, the member counts as treated. I test the magnitude of treatment by differentiating between the amounts of coverage a member's district receives. This allows for a better understanding of what full treatment under preclearance would look like.
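As a sketch of what I mean, the appendix coverage indicators (reported below as `partly` and `substantial1`) can be collapsed into a single ordered measure of treatment intensity; the dataframe name `vra` is my own placeholder, and the variable names may differ in the raw data:

```r
# Collapse the two coverage dummies into one ordered measure of treatment intensity
vra$coverage <- with(vra, factor(
  ifelse(substantial1 == 1, "substantial",
         ifelse(partly == 1, "partial", "none")),
  levels = c("none", "partial", "substantial")))
```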
My next goal was to test whether the ignorability assumption behind their ATT held up. Unmeasured confounding, or omitted variable bias, exists if there are any differences between preclearance and non-preclearance districts in their members' potential outcomes. This is a real possibility given the non-random assignment of which districts and members were subject to preclearance: coverage was conditioned on the use of a test or device restricting the vote, as well as on less than 50 percent of voters being registered in a district.
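In standard potential outcomes notation, ignorability of preclearance coverage conditional on the observed covariates amounts to

\[
\{Y_i(1), Y_i(0)\} \perp\!\!\!\perp D_i \mid X_i,
\]

where \(D_i\) indicates Section 5 coverage, \(Y_i(d)\) is member \(i\)'s civil rights voting index under coverage status \(d\), and \(X_i\) collects the observed controls.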
Since Section 5 preclearance was not randomly assigned, this assumption is typically justified on the basis of substantive knowledge, e.g., by controlling for other important independent variables we believe could be driving the results. It is not possible to test the ignorability assumption directly; however, sensitivity analysis lets us gauge how strong a confounding variable would have to be to alter our conclusions.
Blackwell (2014) offers a preferable method for conducting sensitivity analysis because it combines the approaches of Robins (1999) and Imbens (2003). Robins (1999) uses a confounding function, which measures the amount of unmeasured confounding; this approach avoids having to imagine specific omitted variables. Imbens (2003) produces easily interpretable results but still relies on specifying a particular unmeasured confounder. Blackwell (2014) determines how specific violations of ignorability alter the magnitude and direction of causal estimates, introducing a confounding (or selection bias) function and asking how much variance a confounder would need to explain to alter the ATT.
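Concretely, the confounding function compares the average potential outcomes of covered and uncovered units with the same observed covariates,

\[
q(d, x) = E\!\left[Y_i(d) \mid D_i = d, X_i = x\right] - E\!\left[Y_i(d) \mid D_i = 1 - d, X_i = x\right],
\]

which is zero for every \(d\) and \(x\) under ignorability; the sensitivity analysis traces out how the ATT changes as \(q\) is allowed to depart from zero.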
Using Blackwell (2014), I test for both alignment bias and one-sided bias. Alignment bias occurs when units are selected into treatment or control based on their predicted effects. For example, if the criteria for being subject to Section 5 were narrowly tailored to specific districts in order to produce a specific effect, alignment bias might have occurred. Under alignment bias we let the confounding function vary according to a single parameter, \(α\). One-sided bias occurs when either the treatment or control group is selected because one is better off than the other. This could very plausibly have occurred here: either non-preclearance or preclearance districts could have had much better civil rights voting outcomes than the other. Under one-sided bias the confounding function varies with the treatment assignment. Below are the results of the one-sided bias test, using the causalsens package created by Matthew Blackwell and based on Blackwell (2014). The ATT appears very insensitive in the one-sided test, with roughly 60 percent of the variance needing to be explained by confounding to bring the ATT to zero.
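A minimal sketch of how the one-sided test can be run with causalsens follows. The dataframe name `vra` and the 0/1 coverage indicator `preclearance` are assumptions on my part (the tables report the coefficient as `preclearance.i1`), the `alpha` range is an assumed grid on the 0–1 outcome scale, and the Congress and state fixed effects from the replication models are omitted for brevity:

```r
library(causalsens)

# Outcome model: civil rights voting index on coverage and observed controls
model.y <- lm(civil_rights ~ preclearance + dempres + rep + ind +
                competitive + percentblack + black_legis, data = vra)

# Treatment model: propensity of Section 5 coverage given the same controls
model.t <- glm(preclearance ~ dempres + rep + ind + competitive +
                 percentblack + black_legis,
               data = vra, family = binomial())

# Candidate amounts of confounding, on the scale of the 0-1 outcome (assumed range)
alpha <- seq(-0.25, 0.25, by = 0.025)

# One-sided confounding function for the ATT
sens.one <- causalsens(model.y, model.t,
                       ~ dempres + rep + competitive + percentblack,
                       data = vra, alpha = alpha, confound = one.sided.att)

# Variance-explained plot: how much confounding is needed to drive the ATT to zero
plot(sens.one, type = "r.squared", bty = "n")
```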
In addition to the one-sided test, I also ran an alignment bias test to determine whether the treatment assignment mechanism was driving the results. The results are below; as in the one-sided test, the ATT again appears very insensitive, with roughly 60 percent of the variance again needing to be explained by confounding to bring the ATT to zero.
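The same setup can be reused for the alignment test by swapping the confounding function (again a sketch under the assumptions above):

```r
# Alignment confounding function for the ATT; everything else as before
sens.align <- causalsens(model.y, model.t,
                         ~ dempres + rep + competitive + percentblack,
                         data = vra, alpha = alpha, confound = alignment.att)
plot(sens.align, type = "r.squared", bty = "n")
```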
In their Appendix C, Schuit and Rogowski (2017) introduce and test a linear model in which treatment is coded as either substantial or partial coverage of a member's district. This model does not include fixed effects, although the table states that “fixed-effects are also included where indicated but not reported” (Schuit and Rogowski 2017, app. C, table C-3); nowhere in the table are fixed effects actually indicated. I therefore re-estimate this model with the same fixed effects and clusters they use in their first model in Table 2 (Schuit and Rogowski 2017, 517). My results differ somewhat from those presented in their appendix: the treatment effect of substantial coverage with fixed effects is slightly smaller than what Schuit and Rogowski (2017) report, while the treatment effect of partial coverage is roughly the same. I am not sure why they did not include fixed effects in their appendix model. The results from this test are below.
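A sketch of the specification is below, using the variable names reported in the table that follows and my assumed dataframe name `vra`; the `cluster` term and the `cong.i`/`state.i` fixed-effect factors follow the replication models:

```r
# Partial coverage with Congress and state fixed effects
m.partly <- lm(civil_rights ~ partly + dempres + rep + ind + competitive +
                 percentblack + black_legis + cluster +
                 factor(cong.i) + factor(state.i), data = vra)

# Substantial coverage with the same controls and fixed effects
m.substantial <- lm(civil_rights ~ substantial1 + dempres + rep + ind +
                      competitive + percentblack + black_legis + cluster +
                      factor(cong.i) + factor(state.i), data = vra)
```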
Dependent variable: civil_rights

| | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| partly | -0.004 (0.021) | 0.039** (0.017) | 0.035** (0.017) | | | |
| dempres | | 0.042*** (0.003) | 0.044*** (0.003) | | 0.040*** (0.003) | 0.045*** (0.003) |
| rep | | -0.334*** (0.007) | -0.333*** (0.007) | | -0.334*** (0.007) | -0.333*** (0.007) |
| ind | | -0.165 (0.106) | -0.150 (0.106) | | -0.156 (0.105) | -0.140 (0.105) |
| competitive | | | 0.028*** (0.004) | | | 0.030*** (0.004) |
| percentblack | | | -0.145*** (0.032) | | | -0.174*** (0.032) |
| black_legis | | | 0.137*** (0.020) | | | 0.136*** (0.019) |
| substantial1 | | | | 0.173*** (0.015) | 0.155*** (0.012) | 0.162*** (0.012) |
| cluster | -0.00000 (0.00000) | 0.00000 (0.00000) | 0.00000 (0.00000) | -0.00000 (0.00000) | 0.00000 (0.00000) | 0.00000 (0.00000) |
| Constant | 0.610*** (0.072) | 0.689*** (0.058) | 0.719*** (0.059) | 0.531*** (0.071) | 0.625*** (0.058) | 0.646*** (0.058) |
| cong.i | Yes | Yes | Yes | Yes | Yes | Yes |
| state.i | Yes | Yes | Yes | Yes | Yes | Yes |
| Observations | 7,938 | 7,938 | 7,938 | 7,938 | 7,938 | 7,938 |
| R2 | 0.238 | 0.520 | 0.525 | 0.249 | 0.529 | 0.536 |
| Adjusted R2 | 0.231 | 0.515 | 0.521 | 0.243 | 0.525 | 0.531 |
| Residual Std. Error | 0.303 (df = 7867) | 0.241 (df = 7864) | 0.239 (df = 7861) | 0.301 (df = 7867) | 0.238 (df = 7864) | 0.237 (df = 7861) |
| F Statistic | 35.020*** (df = 70; 7867) | 116.642*** (df = 73; 7864) | 114.500*** (df = 76; 7861) | 37.361*** (df = 70; 7867) | 121.050*** (df = 73; 7864) | 119.248*** (df = 76; 7861) |

Note: *p<0.1; **p<0.05; ***p<0.01
This dataset was somewhat difficult to work with due to the use of proprietary identification numbers assigned to Congress members. This made it difficult to merge new data into the dataframe, such as W-NOMINATE scores or even specific congressional districts, and meant that treatment could only be assessed at the level of the member. Anyone using this data in the future should either try to obtain the index for the “member” numbers or re-collect the data (a task I was not up to).
Schuit and Rogowski's (2017) work does replicate, though I still have concerns about their overall data. They use a number of post-treatment variables as controls in their models. I would have liked to use something like sequential g-estimation to examine possible mediators that could be affecting the outcome variable (Acharya, Blackwell, and Sen 2016). A post-treatment mediator such as black citizens moving into Section 5 districts could affect the final observed treatment effect. Unfortunately, because their dataset has so few pre-treatment variables, I was unable to get g-estimation to work. Further, to really test g-estimation in this context one would have to measure treatment at the level of the congressional district rather than the member.
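For what it is worth, a bare-bones version of the demediation step would look roughly like the following. The mediator `percentblack_post` (post-coverage change in district percent black) is hypothetical and does not exist in the replication data, `vra` is again my placeholder dataframe name, and the second-stage standard errors would need the correction described by Acharya, Blackwell, and Sen (2016):

```r
# Stage 1: outcome on treatment, baseline controls, and the (hypothetical) mediator
stage1 <- lm(civil_rights ~ preclearance + dempres + rep + ind +
               percentblack_post, data = vra)

# Remove the mediator's estimated contribution from the outcome ("demediation")
vra$cr_demediated <- vra$civil_rights -
  coef(stage1)["percentblack_post"] * vra$percentblack_post

# Stage 2: demediated outcome on treatment and baseline controls;
# the coefficient on preclearance approximates the controlled direct effect
stage2 <- lm(cr_demediated ~ preclearance + dempres + rep + ind, data = vra)
```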
I would be interested in seeing whether a similar treatment effect could be found at the congressional district level; that is, even when the member changes, does the initial treatment effect of preclearance remain? Building the Schuit and Rogowski (2017) dataset at this level would also allow a better assessment of the responsiveness mechanism at work: one could examine whether there is replacement or adaptation at the congressional district level. Finally, with the Supreme Court's decision in Shelby County v. Holder, 570 U.S. 529 (2013) striking down the preclearance requirement, it would be interesting to see whether the treatment effect disappears after the decision.
Dependent variable: civil_rights

| | (1) | (2) | (3) |
|---|---|---|---|
| preclearance.i1 | 0.124*** (0.013) | 0.129*** (0.011) | 0.132*** (0.011) |
| competitive | | | 0.030*** (0.004) |
| dempres | | 0.041*** (0.003) | 0.045*** (0.003) |
| rep | | -0.334*** (0.007) | -0.333*** (0.007) |
| ind | | -0.154 (0.105) | -0.139 (0.105) |
| percentblack | | | -0.158*** (0.032) |
| black_legis | | | 0.135*** (0.020) |
| cluster | -0.00000 (0.00000) | 0.00000 (0.00000) | 0.00000 (0.00000) |
| Constant | 0.556*** (0.071) | 0.637*** (0.058) | 0.662*** (0.058) |
| cong.i | Yes | Yes | Yes |
| state.i | Yes | Yes | Yes |
| Observations | 7,938 | 7,938 | 7,938 |
| R2 | 0.246 | 0.528 | 0.534 |
| Adjusted R2 | 0.239 | 0.524 | 0.530 |
| Residual Std. Error | 0.301 (df = 7867) | 0.238 (df = 7864) | 0.237 (df = 7861) |
| F Statistic | 36.615*** (df = 70; 7867) | 120.704*** (df = 73; 7864) | 118.712*** (df = 76; 7861) |

Note: *p<0.1; **p<0.05; ***p<0.01
Dependent variable: civil_rights

| | (1) | (2) |
|---|---|---|
| preclearance.i1 | 0.119*** (0.011) | 0.135*** (0.011) |
| percentblack | -0.260*** (0.035) | -0.160*** (0.032) |
| percentblack_mean_centered | | |
| competitive_mean_centered | | 0.024*** (0.004) |
| competitive | 0.027*** (0.004) | |
| dempres | 0.048*** (0.003) | 0.044*** (0.003) |
| rep | -0.333*** (0.007) | -0.334*** (0.007) |
| ind | -0.136 (0.104) | -0.139 (0.105) |
| black_legis | 0.132*** (0.019) | 0.137*** (0.020) |
| cluster | 0.00000 (0.00000) | 0.00000 (0.00000) |
| preclearance.i1:percentblack_mean_centered | 0.335*** (0.047) | |
| preclearance.i1:competitive_mean_centered | | 0.022*** (0.008) |
| Constant | 0.681*** (0.058) | 0.636*** (0.058) |
| cong.i | Yes | Yes |
| state.i | Yes | Yes |
| Observations | 7,938 | 7,938 |
| R2 | 0.537 | 0.535 |
| Adjusted R2 | 0.533 | 0.530 |
| Residual Std. Error (df = 7860) | 0.236 | 0.237 |
| F Statistic (df = 77; 7860) | 118.567*** | 117.378*** |

Note: *p<0.1; **p<0.05; ***p<0.01
Acharya, Avidit, Matthew Blackwell, and Maya Sen. 2016. “Explaining Causal Findings Without Bias: Detecting and Assessing Direct Effects.” American Political Science Review 110 (3): 512–29. doi:10.1017/S0003055416000216.
Blackwell, Matthew. 2014. “A Selection Bias Approach to Sensitivity Analysis for Causal Effects.” Political Analysis 22 (2): 169–82. http://www.jstor.org/stable/24573220.
Imbens, Guido W. 2003. “Sensitivity to Exogeneity Assumptions in Program Evaluation.” American Economic Review 93 (2): 126–32. doi:10.1257/000282803321946921.
Robins, James M. 1999. “Association, Causation, and Marginal Structural Models.” Synthese 121 (1/2): 151–79. http://www.jstor.org/stable/20118224.
Schuit, Sophie, and Jon C. Rogowski. 2017. “Race, Representation, and the Voting Rights Act.” American Journal of Political Science 61 (3): 513–26. doi:10.1111/ajps.12284.