### Mplus Data Analysis Examples Probit Regression

Note: This example was done using Mplus version 5.2.  The syntax may not work, or may function differently, with other versions of Mplus.

Probit regression, also called a probit model, is used to model dichotomous or binary outcome variables. In the probit model, the inverse of the standard normal cumulative distribution function applied to the probability of the outcome is modeled as a linear combination of the predictors.
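In other words, if p is the probability that the outcome equals 1, the model assumes that the probit of p (the standard normal quantile of p) is a linear function of the predictors. A minimal Python sketch of the link function, using the standard library only (the standard normal CDF via `math.erf`, and its inverse by bisection; the linear-predictor value used at the end is arbitrary):

```python
import math

def norm_cdf(z):
    """Standard normal CDF, Phi(z), computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF (the probit function), by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# The probit model assumes:
#   norm_ppf(p) = b0 + b1*x1 + b2*x2 + ...
# or equivalently:
#   p = norm_cdf(b0 + b1*x1 + b2*x2 + ...)
p = norm_cdf(-0.5)   # a linear predictor of -0.5 gives p ~ 0.309
z = norm_ppf(p)      # applying the probit recovers the linear predictor
```

The two functions are inverses of one another, which is why the coefficients of a probit model are interpreted on the z-score (probit) scale rather than the probability scale.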

Please note: The purpose of this page is to show how to use various data analysis commands. It does not cover all aspects of the research process which researchers are expected to do. In particular, it does not cover data cleaning and checking, verification of assumptions, model diagnostics and potential follow-up analyses.

#### Examples

Example 1:  Suppose that we are interested in the factors that influence whether a political candidate wins an election.  The outcome (response) variable is binary (0/1): win or lose.  The predictor variables of interest are the amount of money spent on the campaign, the amount of time spent campaigning negatively and whether the candidate is an incumbent.

Example 2:  A researcher is interested in how variables, such as GRE (Graduate Record Exam scores), GPA (grade point average) and prestige of the undergraduate institution, affect admission into graduate school. The response variable, admit/don't admit, is a binary variable.

#### Description of the data

For our data analysis below, we are going to expand on Example 2 about getting into graduate school.  We have generated hypothetical data, which can be obtained by clicking on binary.dat. You can store this anywhere you like, but our examples will assume it has been stored in c:\data.  (Note that the names of variables should NOT be included at the top of the data file.  Instead, the variables are named in the variable command.)  You may want to do your descriptive statistics in a general-purpose statistics package, such as SAS, Stata or SPSS, because the options for obtaining descriptive statistics are limited in Mplus. Even if you choose to run descriptive statistics in another package, it is a good idea to run a model with type=basic before you do anything else, just to make sure the dataset is being read correctly.

Data:
File is C:\data\binary.dat ;
Variable:
Names are admit gre gpa rank rank1 rank2 rank3 rank4;
Analysis:
Type = basic;

As we mentioned above, you will want to look at this carefully to be sure that the dataset was read into Mplus correctly.  You will want to make sure that you have the correct number of observations, and that the variables all have means that are close to those from the descriptive statistics generated in a general purpose statistical package. If there are missing values for some or all of the variables, the descriptive statistics generated by Mplus will not match those from a general purpose statistical package exactly, because by default, Mplus versions 5.0 and later use maximum likelihood based procedures for handling missing values.

<output omitted>

SUMMARY OF ANALYSIS

Number of groups                                                 1
Number of observations                                         400

<output omitted>

SAMPLE STATISTICS

Means
ADMIT         GRE           GPA           RANK          RANK1
________      ________      ________      ________      ________
1         0.318       587.700         3.390         2.485         0.152

Means
RANK2         RANK3         RANK4
________      ________      ________
1         0.378         0.302         0.168

#### Analysis methods you might consider

Below is a list of some analysis methods you may have encountered. Some of the methods listed are quite reasonable while others have either fallen out of favor or have limitations.

• Logistic regression. A logit model will produce results similar to probit regression. The choice of probit versus logit depends largely on individual preferences.
• OLS regression.  When used with a binary response variable, this model is known as a linear probability model and can be used as a way to describe conditional probabilities. However, the errors (i.e., residuals) from the linear probability model violate the homoskedasticity and normality of errors assumptions of OLS regression, resulting in invalid standard errors and hypothesis tests. For a more thorough discussion of these and other problems with the linear probability model, see Long (1997, p. 38-40).
• Two-group discriminant function analysis. A multivariate method for dichotomous outcome variables.
• Hotelling's T2.  The 0/1 outcome is turned into the grouping variable, and the former predictors are turned into outcome variables. This will produce an overall test of significance but will not give individual coefficients for each variable, and the extent to which each "predictor" is adjusted for the impact of the other "predictors" is unclear.
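The heteroskedasticity problem noted for the linear probability model above can be seen directly: for a case with fitted probability p, the residual equals 1 - p with probability p and -p with probability 1 - p, so its variance is p(1 - p), which necessarily changes with the predictors. A quick sketch:

```python
def lpm_error_variance(p):
    """Variance of the linear probability model error for a case
    with fitted probability p: p*(1-p)."""
    return p * (1.0 - p)

# The error variance differs across fitted probabilities, violating the
# constant-variance (homoskedasticity) assumption of OLS regression:
variances = {p: lpm_error_variance(p) for p in (0.1, 0.3, 0.5)}
# variance is largest at p = 0.5 and shrinks as p approaches 0 or 1
```

This is why the standard errors and hypothesis tests from an OLS fit to a binary outcome cannot be trusted, as Long (1997) discusses.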

#### Using the probit model

The Mplus input file for a probit regression model is shown below. Because the data file contains variables that are not used in the model, the usevariables subcommand is used to list the variables that are used in the model (i.e., admit, gre, gpa, rank1, rank2 and rank3). Note that because Mplus uses the names subcommand to determine the order of variables in the data file, the number and order of variables in the names subcommand should not be changed unless the data file is also changed. The categorical subcommand is used to identify binary and ordinal outcome variables. Categorical predictor variables should be included as a series of dummy variables (e.g., rank1, rank2, and rank3). We do not need to specify that we wish to run a probit model because probit models are the default for binary outcome variables. Finally, under model we specify that the outcome (i.e., admit) should be regressed on the predictor variables (i.e., gre, gpa, rank1, rank2 and rank3).

Data:
File is C:\data\binary.dat ;
Variable:
names = admit gre gpa rank rank1 rank2 rank3 rank4;
usevariables = admit gre gpa rank1 rank2 rank3;
categorical = admit;
Model:
admit on gre gpa rank1 rank2 rank3;
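The rank1 through rank3 variables used above encode the four-category rank variable as indicators, with rank = 4 as the omitted reference group. Since binary.dat already contains these columns, the sketch below is only an illustration of how such dummy variables are constructed:

```python
def rank_dummies(rank):
    """Return (rank1, rank2, rank3, rank4) indicator values
    for a rank coded 1 through 4."""
    return tuple(1 if rank == k else 0 for k in (1, 2, 3, 4))

# Only rank1..rank3 enter the model; rank4 is the reference category,
# so its effect is absorbed into the threshold/intercept.
rank_dummies(2)   # -> (0, 1, 0, 0)
```

Exactly one indicator equals 1 for each case, which is why one category must be dropped from the model to avoid perfect collinearity.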
SUMMARY OF ANALYSIS

Number of groups                                                 1
Number of observations                                         400

Number of dependent variables                                    1
Number of independent variables                                  5
Number of continuous latent variables                            0

Observed dependent variables

Binary and ordered categorical (ordinal)
ADMIT

Observed independent variables
GRE         GPA         RANK1       RANK2       RANK3

Estimator                                                    WLSMV
Maximum number of iterations                                  1000
Convergence criterion                                    0.500D-04
Maximum number of steepest descent iterations                   20
Parameterization                                             DELTA
• At the top of the output we see that 400 observations were used.
• From the output we see that the model includes one binary dependent (i.e., outcome) variable and 5 independent (predictor) variables.
• The analysis summary is followed by a block of technical information about the model. We won't discuss most of this information, but we will note that the estimator was a type of weighted least squares (i.e., WLSMV). In Mplus, weighted least squares estimators are associated with probit models when the dependent variable is binary (as opposed to maximum likelihood estimators, which are associated with logit models).
Input data file(s)
C:\data\binary.dat

Input data format  FREE

SUMMARY OF CATEGORICAL DATA PROPORTIONS

ADMIT
Category 1    0.683
Category 2    0.317

THE MODEL ESTIMATION TERMINATED NORMALLY

TESTS OF MODEL FIT

Chi-Square Test of Model Fit

Value                              0.000*
Degrees of Freedom                     0**
P-Value                           0.0000

*   The chi-square value for MLM, MLMV, MLR, ULSMV, WLSM and WLSMV cannot be used
for chi-square difference tests.  MLM, MLR and WLSM chi-square difference
testing is described in the Mplus Technical Appendices at www.statmodel.com.
See chi-square difference testing in the index of the Mplus User's Guide.

**  The degrees of freedom for MLMV, ULSMV and WLSMV are estimated according to
a formula given in the Mplus Technical Appendices at www.statmodel.com.
See degrees of freedom in the index of the Mplus User's Guide.

Chi-Square Test of Model Fit for the Baseline Model

Value                             33.821
Degrees of Freedom                     5
P-Value                           0.0000

CFI/TLI

CFI                                1.000
TLI                                1.000

Number of Free Parameters                        6

RMSEA (Root Mean Square Error Of Approximation)

Estimate                           0.000

WRMR (Weighted Root Mean Square Residual)

Value                              0.005
• Several measures of model fit are included in the output. Because the model is saturated, the chi-square test of model fit (for the current model, not the baseline model), as well as the CFI, TLI, and RMSEA all show perfect fit. This does not necessarily mean that the model fits well in the sense that the model does a good job of predicting the outcome, just that these measures are not the appropriate measures of fit for a saturated model.
• The warning message below the chi-square test of model fit indicates that the standard chi-square difference tests cannot be used with models estimated using WLSMV. We are not comparing models, so this isn't a problem.
MODEL RESULTS

Two-Tailed
Estimate       S.E.  Est./S.E.    P-Value

ADMIT      ON
GRE                0.001      0.001      2.122      0.034
GPA                0.478      0.189      2.529      0.011
RANK1              0.936      0.248      3.781      0.000
RANK2              0.520      0.211      2.464      0.014
RANK3              0.124      0.224      0.553      0.580

Thresholds

R-SQUARE

Observed                   Residual
Variable        Estimate   Variance

ADMIT              0.165      1.000
• The section titled MODEL RESULTS includes the coefficients (labeled Estimate), their standard errors, the ratio of each estimate to its standard error (i.e., the z-score, labeled Est./S.E.), and the associated p-values. Both gre and gpa are statistically significant, as are the terms for rank=1 and rank=2 (versus the omitted category rank=4).
• The probit regression coefficients give the change in the z-score or probit index for a one unit change in the predictor. For example:
• For a one unit increase in gre, the z-score increases by 0.001.
• For each one unit increase in gpa, the z-score increases by 0.478.
• The indicator variables for rank have a slightly different interpretation. For example, having attended an undergraduate institution of rank of 1, versus an institution with a rank of 4 (the reference group), increases the z-score by 0.936.
• Below the coefficients for each of the predictor variables, under the heading Thresholds, is the threshold for the model (sometimes also called a cutpoint). Mplus reports a threshold in place of the intercept; the two are the same except that they have opposite signs (so the intercept for this model would be -3.315). For more information on the differences between intercepts and thresholds, please see http://www.stata.com/support/faqs/stat/oprobit.html.
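Putting the pieces together, a fitted probability is obtained by applying the standard normal CDF to the linear predictor, using the intercept of -3.315 (the negated threshold) and the coefficients reported above. A sketch for a hypothetical applicant (the gre, gpa, and rank values chosen here are arbitrary, and the coefficients are the rounded values from the output, so the result is approximate):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Rounded coefficients from the MODEL RESULTS output;
# the intercept is the negated threshold reported by Mplus.
INTERCEPT = -3.315
B = {"gre": 0.001, "gpa": 0.478, "rank1": 0.936, "rank2": 0.520, "rank3": 0.124}

def predicted_probability(gre, gpa, rank1, rank2, rank3):
    z = (INTERCEPT + B["gre"] * gre + B["gpa"] * gpa
         + B["rank1"] * rank1 + B["rank2"] * rank2 + B["rank3"] * rank3)
    return norm_cdf(z)

# e.g., a hypothetical applicant with gre = 600, gpa = 3.39, rank = 2:
p = predicted_probability(600, 3.39, 0, 1, 0)   # roughly 0.28
```

This also illustrates why coefficient interpretation is on the z-score scale: the same one-unit change in a predictor produces a different change in probability depending on where the linear predictor starts.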

We can also test that the coefficients for rank1, rank2 and rank3 are all equal to zero using the model test command. This type of test could also be described as an overall test for the effect of rank. There are multiple ways to test this type of hypothesis; the model test command requests a Wald test. The Mplus input file shown below is similar to the first model, except that the coefficients for rank1, rank2 and rank3 are assigned the names r1, r2 and r3, respectively. In the model test command, these coefficient names (i.e., r1, r2 and r3) are used to test that each of the coefficients is equal to 0.

Data:
File is C:\data\binary.dat ;
Variable:
names = admit gre gpa rank rank1 rank2 rank3 rank4;
usevariables = admit gre gpa rank1 rank2 rank3;
categorical = admit;
Model:
admit on gre gpa
rank1 (r1)
rank2 (r2)
rank3 (r3);
Model test:
r1 = 0;
r2 = 0;
r3 = 0;

The majority of the output from this model is the same as the first model, so we will only show the part of the output that is associated with the model test command.

Wald Test of Parameter Constraints

Value                             21.132
Degrees of Freedom                     3
P-Value                           0.0001

The portion of the output associated with the model test command is labeled "Wald Test of Parameter Constraints" and appears under the heading TESTS OF MODEL FIT. The test statistic is 21.132, with three degrees of freedom (one for each of the parameters tested) and an associated p-value of 0.0001. This indicates that the overall effect of rank is statistically significant.
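As a sanity check on the reported p-value, the upper-tail probability of a chi-square distribution with 3 degrees of freedom has a closed form in terms of the complementary error function, so it can be reproduced without a statistics package:

```python
import math

def chi2_sf_df3(x):
    """Upper-tail probability P(X > x) for a chi-square variable with 3 df.
    For df = 3: P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)."""
    return (math.erfc(math.sqrt(x / 2.0))
            + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0))

# Wald statistic of 21.132 on 3 df, as reported in the output above:
p = chi2_sf_df3(21.132)   # ~ 0.0001, matching the reported p-value
```

The same closed form holds only for odd degrees of freedom; for general df a statistical library's chi-square survival function would be used instead.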

We can also use the model test command to make pairwise comparisons among the terms for rank. The Mplus input below tests the hypothesis that the coefficient for rank2 (i.e., rank=2) is equal to the coefficient for rank3 (i.e., rank=3).

Data:
File is C:\data\binary.dat;
Variable:
names = admit gre gpa rank rank1 rank2 rank3 rank4;
usevariables = admit gre gpa rank1 rank2 rank3;
categorical = admit;
Model:
admit on gre gpa
rank1 (r1)
rank2 (r2)
rank3 (r3);
Model test:
r2 = r3;

Below is the output associated with the model test command (as before, most of the model output is omitted).

Wald Test of Parameter Constraints

Value                              5.682
Degrees of Freedom                     1
P-Value                           0.0171

The p-value of 0.0171 indicates that the coefficient for rank2 is significantly different from the coefficient for rank3.

#### Things to consider

• Empty cells or small cells:  You should check for empty or small cells by doing a crosstab between categorical predictors and the outcome variable.  If a cell has very few cases (a small cell), the model may become unstable or it might not run at all.
• Separation or quasi-separation (also called perfect prediction), a condition in which the outcome does not vary at some levels of the independent variables. See our page FAQ: What is complete or quasi-complete separation in logistic/probit regression and how do we deal with them? for information on models with perfect prediction.
• Sample size:  Both probit and logit models require more cases than OLS regression because they use maximum likelihood estimation techniques. It is sometimes possible to estimate models for binary outcomes in datasets with only a small number of cases using exact logistic regression (using the exlogistic command). For more information see our data analysis example for exact logistic regression. It is also important to keep in mind that when the outcome is rare, even if the overall dataset is large, it can be difficult to estimate a probit model.
• Pseudo-R-squared:  Many different measures of pseudo-R-squared exist. They all attempt to provide information similar to that provided by R-squared in OLS regression; however, none of them can be interpreted exactly as R-squared in OLS regression is interpreted. For a discussion of various pseudo-R-squareds see Long and Freese (2006) or our FAQ page What are pseudo R-squareds?
• Diagnostics:  The diagnostics for probit regression are different from those for OLS regression. The diagnostics for probit models are similar to those for logit models. For a discussion of model diagnostics for logistic regression, see Hosmer and Lemeshow (2000, Chapter 5).
• Clustered data: Sometimes observations are clustered into groups (e.g., people within families, students within classrooms). In such cases, you may want to consider using either a multilevel model or the cluster option of the variable command.

#### References

Hosmer, D. & Lemeshow, S. (2000). Applied Logistic Regression (Second Edition). New York: John Wiley & Sons, Inc.

Long, J. Scott (1997). Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage Publications.

Long, J. Scott & Freese, Jeremy (2006). Regression Models for Categorical Dependent Variables Using Stata (Second Edition). College Station, TX: Stata Press.
