### Annotated Mplus Output: Ordinary Least Squares Regression

This page was created using Mplus 5.1.

Below is an example of ordinary least squares (OLS) regression with footnotes explaining the output. To summarize the output, both predictors in this model, x1 and x3, are significantly related to the outcome variable, y1.

Here is the same example illustrated in Mplus based on the ex3.1.dat data file.

TITLE:
this is an example of a simple linear
regression for a continuous observed
dependent variable with two covariates
DATA:
FILE IS ex3.1.dat;
VARIABLE:
NAMES ARE y1 x1 x3;
MODEL:
y1 ON x1 x3;
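
The same regression can be sketched outside Mplus. Since ex3.1.dat is not reproduced here, the sketch below simulates data with arbitrary (hypothetical) coefficients and fits the model `y1 ON x1 x3;` by least squares with numpy:

```python
# Hedged sketch: a two-predictor OLS regression analogous to the Mplus
# model above. The data are simulated (ex3.1.dat is not available here),
# with hypothetical true coefficients 0.5, 1.0, and 0.6.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x3 = rng.normal(size=n)
y1 = 0.5 + 1.0 * x1 + 0.6 * x3 + rng.normal(size=n)

# Design matrix with an intercept column, mirroring "y1 ON x1 x3;"
X = np.column_stack([np.ones(n), x1, x3])
b, *_ = np.linalg.lstsq(X, y1, rcond=None)
print(b)  # [intercept, coefficient on x1, coefficient on x3]
```

With n = 500 the recovered coefficients land close to the simulated true values, just as the Mplus estimates below summarize the relationships in ex3.1.dat.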

SUMMARY OF ANALYSIS

Number of groups                                                 1
Number of observations                                         500

Number of dependent variables                                    1
Number of independent variables                                  2
Number of continuous latent variables                            0

<output omitted>

TESTS OF MODEL FIT

Chi-Square Test of Model Fit (a)

Value                              0.000
Degrees of Freedom                     0
P-Value                           0.0000

Chi-Square Test of Model Fit for the Baseline Model (b)

Value                            469.585
Degrees of Freedom                     2
P-Value                           0.0000

CFI/TLI (a)

CFI                                1.000
TLI                                1.000

Loglikelihood (c)

H0 Value                       -2124.388
H1 Value                       -2124.388

Information Criteria (d)

Number of Free Parameters              4
Akaike (AIC)                    4256.776
Bayesian (BIC)                  4273.634
Sample-Size Adjusted BIC        4260.938
(n* = (n + 2) / 24)

RMSEA (Root Mean Square Error Of Approximation) (a)

Estimate                           0.000
90 Percent C.I.                    0.000  0.000
Probability RMSEA <= .05           0.000

SRMR (Standardized Root Mean Square Residual) (a)

Value                              0.000

a. Chi-Square Test of Model Fit, CFI/TLI, RMSEA, and SRMR - Because the current model (like all OLS-style regression models) is "saturated" (meaning all possible coefficients are estimated), many fit indices will show perfect fit (either 0 or 1.0 for most indices). This does not mean that the model actually fits the data perfectly, nor does it indicate a problem with the model; it is simply a consequence of the model having zero degrees of freedom. What the "perfect" fit statistics do mean is that these statistics cannot be used to help you determine how well your model fits.
b. Chi-Square Test of Model Fit for the Baseline Model - The baseline model is the model to which the current model is compared for many fit statistics. For regression models in Mplus, the baseline model is typically a "null" model, that is, a model which assumes no relationships among the variables. In this case, the baseline model is one in which the regression coefficients (the paths from x1 to y1 and from x3 to y1) are constrained to zero. (If you'd like to test this yourself, change the model statement to read y1 ON x1@0 x3@0; and rerun the model. The "Chi-Square Test of Model Fit" and the "Chi-Square Test of Model Fit for the Baseline Model" will then match each other, because the model specified is the baseline model.)
c. Loglikelihood - This section of the output gives the loglikelihood for the model, which can be used to compare the fit of nested models using a likelihood ratio test. Note that unless you have used the DIFFTEST option, the H0 value and the H1 value should be the same for this type of model.
d. Information Criteria - Information criteria can also be used to compare competing models; a particularly useful feature is that they can compare non-nested models. The first thing Mplus lists in this section is the number of free parameters, that is, the number of parameters estimated in the model. Next, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the sample-size adjusted BIC are listed. Each criterion has its advantages and disadvantages, but all three are used to compare the fit of models: one runs the models one wishes to compare and then compares them in terms of one or more of the information criteria (among other things). The interpretation of the AIC and the BIC varies somewhat depending on which formula is used to calculate the statistic (different statistical packages use slightly different formulae). In the case of Mplus, lower values indicate better fit and higher values indicate worse fit. The larger the difference in a criterion between two models, the greater the difference in fit: a large difference indicates that one model fits much better than the other, while a small difference indicates that the models fit the data about equally well. There are no hypothesis (significance) tests for differences in information criteria, but various sets of guidelines for interpreting them exist.
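
The information criteria above can be reproduced by hand from the reported loglikelihood, the number of free parameters, and the sample size. A minimal sketch, assuming the standard formulae AIC = 2k - 2LL and BIC = k ln(n) - 2LL, with the sample-size adjusted BIC substituting n* = (n + 2)/24 as printed in the output:

```python
# Hedged sketch: reproducing Mplus's information criteria from the
# reported loglikelihood (H0 = -2124.388), number of free parameters
# (k = 4), and sample size (n = 500).
from math import log

ll, k, n = -2124.388, 4, 500

aic = 2 * k - 2 * ll
bic = k * log(n) - 2 * ll
n_star = (n + 2) / 24                  # the adjustment printed in the output
abic = k * log(n_star) - 2 * ll

print(round(aic, 3))   # 4256.776, matching the output above
print(round(bic, 3))   # 4273.634
print(round(abic, 3))  # 4260.938
```

All three values match the Information Criteria section of the output, which confirms that these are the formulae in use here.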

#### Model Results

MODEL RESULTS
                                                   Two-Tailed
               Estimate(f)    S.E.(g) Est./S.E.(h)  P-Value(i)

 Y1(e)     ON
    X1                 0.969      0.042     23.357      0.000
    X3                 0.649      0.044     14.626      0.000

 Intercepts(j)
    Y1                 0.511      0.043     11.765      0.000

 Residual Variances(k)
    Y1                 0.941      0.060     15.811      0.000

e. ON - The dependent variable is y1. The "ON" following it indicates that the coefficients listed are for the regression of y1 on the variables listed below it (x1 and x3).
f. Estimate - The estimate column gives the estimate of the parameter in question. In the case of regression (denoted with "ON"), the parameters being estimated are the regression coefficients. These estimates tell you about the relationship between the independent variables and the dependent variable. More specifically, each regression coefficient indicates the predicted increase in y1 associated with a one-unit increase in the predictor (i.e., x1 or x3).

Note: For independent variables that are not significant, the coefficients are not significantly different from 0, which should be taken into account when interpreting them. (See items h and i for a discussion of testing whether the coefficients are significantly different from zero.)

We can interpret the regression coefficients shown in the following way:

x1 - The coefficient (parameter estimate) is 0.969, so for every one unit increase in x1 the expected (predicted) value of y1 increases by 0.969, holding all other variables in the model constant.

x3 - The coefficient for x3 is 0.649, meaning that for every one unit increase in x3 the predicted value of y1 increases by 0.649, holding all other variables in the model constant.

We can use the estimates of the regression coefficients along with the intercept (see j below) to predict values of y1. One way of writing the equation looks like this:

y1_predicted = b0 + b1*x1 + b2*x3

Using the values from this section of the output, as well as the intercept (discussed below in item j), we get the following equation:

y1_predicted = 0.511 + 0.969*x1 + 0.649*x3
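
The fitted equation above can be applied directly to generate predicted values. A minimal sketch (the function name `predict_y1` is illustrative, not part of the Mplus output):

```python
# Hedged sketch: applying the fitted equation
# y1_predicted = 0.511 + 0.969*x1 + 0.649*x3
def predict_y1(x1, x3):
    """Predicted y1 from the estimates reported above."""
    return 0.511 + 0.969 * x1 + 0.649 * x3

print(predict_y1(0, 0))              # 0.511 -- the intercept
print(round(predict_y1(1, 1), 3))    # 2.129
```

For example, a case with x1 = 0 and x3 = 0 is predicted to have y1 equal to the intercept, 0.511.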

g. S.E. - The column labeled S.E. contains the standard errors of the coefficients. The standard error is used to test whether a parameter is significantly different from 0 by dividing the parameter estimate by its standard error. (See items h and i for a discussion of this test.) The standard errors can also be used to form a confidence interval for the parameter.
h. Est./S.E. - The column labeled "Est./S.E." contains exactly that: the coefficient estimate divided by its standard error. This value is used to compute the p-value, that is, to perform a significance test.
i. Two-Tailed P-Value - The column labeled "Two-Tailed P-Value" contains the p-values for a two-tailed test of the null hypothesis that the coefficient (estimated parameter) is 0. If you use a two-tailed test, you compare each p-value to your preselected value of alpha; coefficients with p-values less than alpha are statistically significant. For example, if you chose alpha to be 0.05, coefficients having a p-value of 0.05 or less would be statistically significant (i.e., you can reject the null hypothesis and say that the coefficient is significantly different from 0). If you use a one-tailed test (i.e., you predict that the parameter will go in a particular direction), you can divide the p-value by 2 before comparing it to your preselected alpha level. Looking at the example, the coefficient for x1 (0.969) is significantly different from 0 using an alpha of 0.05 because its p-value is listed as 0.000, which is smaller than 0.05; likewise, the coefficient for x3 (0.649) is significantly different from 0 because its p-value is also listed as 0.000.

Note that a p-value listed as 0.000 does not mean that the actual p-value is zero; it indicates that the p-value is less than 0.0005 (any value below 0.0005 rounds to 0.000). Such values are indeed very close to zero for most practical purposes, but this is conceptually distinct from an actual p-value of zero.
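
The test statistic, two-tailed p-value, and a 95% confidence interval can be sketched from an estimate and its standard error. Note that recomputing from the rounded printed values gives 0.969 / 0.042 = 23.07, slightly different from the printed 23.357, because Mplus divides the unrounded quantities:

```python
# Hedged sketch: test statistic, two-tailed p-value, and 95% CI from a
# reported estimate and standard error, using a standard normal reference.
from math import erfc, sqrt

est, se = 0.969, 0.042

z = est / se
p = erfc(abs(z) / sqrt(2))            # two-tailed p-value for a normal z
ci = (est - 1.96 * se, est + 1.96 * se)

print(round(z, 2))                    # 23.07 (vs. 23.357 before rounding)
print(p < 0.0005)                     # True -- printed as 0.000 by Mplus
print(tuple(round(v, 3) for v in ci)) # (0.887, 1.051)
```

The p-value is far below 0.0005, which is why it prints as 0.000 in the output above.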

j. Intercepts - The intercepts for the model appear under the heading "Intercepts". In the case of an OLS regression, only the intercept of the dependent variable (i.e., y1) is part of the model. The intercept is the expected value of y1 when both x1 and x3 are equal to zero. The estimate of the intercept appears in the first column (under "Estimate"; see item f), followed by its standard error (under S.E.; see item g), the estimate divided by the standard error (under "Est./S.E."; see item h), and finally the p-value (under "Two-Tailed P-Value"; see item i). In this case the intercept of 0.511 is statistically significantly different from zero using an alpha of 0.05 because its p-value is listed as 0.000 (see item i for a discussion of p-values listed as 0.000), which is smaller than 0.05.
k. Residual Variances - The section labeled "Residual Variances" lists the estimated residual variance of the dependent variable, that is, an estimate of the variance of the residuals (also known as errors) from the regression.
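
The residual variance is the variance of y1 left unexplained by the predictors. A minimal sketch on simulated data, assuming (as is typical for maximum likelihood estimation, which I take to be the default here) that the sum of squared residuals is divided by n, whereas classical OLS output divides by n - k:

```python
# Hedged sketch: estimating the residual variance from a fitted regression.
# The data are simulated with a true error variance of 1.0; ex3.1.dat is
# not available here.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x3 = rng.normal(size=n)
y1 = 0.5 + 1.0 * x1 + 0.6 * x3 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x3])
b, *_ = np.linalg.lstsq(X, y1, rcond=None)
resid = y1 - X @ b

ml_variance = (resid ** 2).sum() / n         # ML-style estimate (divide by n)
ols_variance = (resid ** 2).sum() / (n - 3)  # unbiased OLS estimate (n - k)
print(ml_variance, ols_variance)             # both near the true value of 1.0
```

With n = 500 the two estimates differ only slightly, but the distinction matters in small samples.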

The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.