Libraries
We will use psych (Revelle, 2023), lavaan version 0.6.15 (Rosseel, Jorgensen, & Rockwood, 2023), semTools version 0.5.6 (Jorgensen, Pornprasertmanit, Schoemann, & Rosseel, 2022), and semPlot version 1.1.6 (Epskamp, 2022) in our analysis; foreign is loaded to read the SPSS data file. Load the packages.
library(foreign)
library(psych)
library(lavaan)
library(semTools)
library(semPlot)
Load data
data = read.spss("Attitude_Statistics v3.sav", use.value.labels = FALSE, to.data.frame = TRUE)
dim(data)
## [1] 150 13
names(data)
## [1] "ID" "Q1" "Q2" "Q3" "Q4" "Q5" "Q6" "Q7" "Q8" "Q9" "Q10" "Q11" "Q12"
describe(data[-1])  # exclude the ID column
|     | vars |   n |     mean |        sd | median |  trimmed |    mad | min | max | range |       skew |   kurtosis |        se |
|:----|-----:|----:|---------:|----------:|-------:|---------:|-------:|----:|----:|------:|-----------:|-----------:|----------:|
| Q1  |    1 | 150 | 3.126667 | 1.1009250 |      3 | 3.125000 | 1.4826 |   1 |   5 |     4 | -0.0995748 | -0.7324083 | 0.0898901 |
| Q2  |    2 | 150 | 3.506667 | 1.0346135 |      3 | 3.550000 | 1.4826 |   1 |   5 |     4 | -0.1441109 | -0.4692084 | 0.0844758 |
| Q3  |    3 | 150 | 3.180000 | 1.0302147 |      3 | 3.166667 | 1.4826 |   1 |   5 |     4 | -0.0321343 | -0.4202388 | 0.0841167 |
| Q4  |    4 | 150 | 2.813333 | 1.1723130 |      3 | 2.775000 | 1.4826 |   1 |   5 |     4 |  0.1888890 | -0.8077519 | 0.0957190 |
| Q5  |    5 | 150 | 3.313333 | 1.0109247 |      3 | 3.316667 | 1.4826 |   1 |   5 |     4 | -0.2242453 | -0.4760474 | 0.0825417 |
| Q6  |    6 | 150 | 3.053333 | 1.0916810 |      3 | 3.050000 | 1.4826 |   1 |   5 |     4 | -0.0432196 | -0.7058165 | 0.0891354 |
| Q7  |    7 | 150 | 2.920000 | 1.1901159 |      3 | 2.925000 | 1.4826 |   1 |   5 |     4 | -0.0366766 | -1.0562073 | 0.0971726 |
| Q8  |    8 | 150 | 3.326667 | 0.9999776 |      3 | 3.341667 | 1.4826 |   1 |   5 |     4 | -0.0816209 | -0.1204552 | 0.0816478 |
| Q9  |    9 | 150 | 3.440000 | 1.0457970 |      3 | 3.483333 | 1.4826 |   1 |   5 |     4 | -0.2091593 | -0.3234321 | 0.0853890 |
| Q10 |   10 | 150 | 3.313333 | 1.0999492 |      3 | 3.358333 | 1.4826 |   1 |   5 |     4 | -0.2157639 | -0.3888402 | 0.0898105 |
| Q11 |   11 | 150 | 3.353333 | 0.9350496 |      3 | 3.366667 | 1.4826 |   1 |   5 |     4 | -0.3074829 | -0.3281186 | 0.0763465 |
| Q12 |   12 | 150 | 2.826667 | 0.9813473 |      3 | 2.833333 | 1.4826 |   1 |   5 |     4 |  0.0945084 | -0.6761637 | 0.0801267 |
Correlational & causal
Correlational
Observed variables
# Q4 & Q11
model.c = "
Q4 ~~ Q11
"
corr.c = sem(model.c, data = data, meanstructure = T) # meanstructure -> display mean
summary(corr.c, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 13 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 5
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 32.305
## Degrees of freedom 1
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -422.299
## Loglikelihood unrestricted model (H1) -422.299
##
## Akaike (AIC) 854.599
## Bayesian (BIC) 869.652
## Sample-size adjusted Bayesian (SABIC) 853.828
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 ~~
## Q11 0.479 0.097 4.934 0.000 0.479 0.440
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 2.813 0.095 29.490 0.000 2.813 2.408
## Q11 3.353 0.076 44.070 0.000 3.353 3.598
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 1.365 0.158 8.660 0.000 1.365 1.000
## Q11 0.868 0.100 8.660 0.000 0.868 1.000
semPaths(corr.c, what = "path", whatLabels = "par", edge.color = "black",
edge.label.cex = 1, residuals = F, sizeInt = 4)
cor(data$Q4, data$Q11)
## [1] 0.4401738
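The Std.all value for the Q4 ~~ Q11 covariance is simply the Pearson correlation. As an optional check (a quick sketch using lavaan's standardizedSolution()), we can extract the standardized solution directly:
standardizedSolution(corr.c)  # est.std for Q4 ~~ Q11 matches cor(data$Q4, data$Q11) = 0.44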
Latent variables
model.cl = "
F1 =~ Q4 + Q6 + Q7 + Q11
F2 =~ Q8 + Q9 + Q10
"
corr.cl = sem(model.cl, data = data)
summary(corr.cl, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 15
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 20.451
## Degrees of freedom 13
## P-value (Chi-square) 0.085
##
## Model Test Baseline Model:
##
## Test statistic 380.263
## Degrees of freedom 21
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.979
## Tucker-Lewis Index (TLI) 0.966
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1380.510
## Loglikelihood unrestricted model (H1) -1370.284
##
## Akaike (AIC) 2791.020
## Bayesian (BIC) 2836.179
## Sample-size adjusted Bayesian (SABIC) 2788.707
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.062
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.111
## P-value H_0: RMSEA <= 0.050 0.314
## P-value H_0: RMSEA >= 0.080 0.306
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.063
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 =~
## Q4 1.000 0.941 0.806
## Q6 0.830 0.103 8.040 0.000 0.781 0.718
## Q7 0.960 0.115 8.348 0.000 0.904 0.762
## Q11 0.504 0.088 5.742 0.000 0.474 0.509
## F2 =~
## Q8 1.000 0.651 0.653
## Q9 1.351 0.170 7.951 0.000 0.880 0.844
## Q10 1.444 0.182 7.927 0.000 0.940 0.858
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 ~~
## F2 0.048 0.060 0.800 0.424 0.078 0.078
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 0.479 0.097 4.932 0.000 0.479 0.351
## .Q6 0.574 0.088 6.530 0.000 0.574 0.485
## .Q7 0.591 0.101 5.825 0.000 0.591 0.420
## .Q11 0.644 0.081 7.997 0.000 0.644 0.741
## .Q8 0.569 0.076 7.526 0.000 0.569 0.573
## .Q9 0.313 0.077 4.054 0.000 0.313 0.288
## .Q10 0.318 0.086 3.693 0.000 0.318 0.264
## F1 0.886 0.168 5.279 0.000 1.000 1.000
## F2 0.424 0.101 4.191 0.000 1.000 1.000
semPaths(corr.cl, what = "path", whatLabels = "par", edge.color = "black",
layout = "tree2", edge.label.cex = 1, residuals = F)
Causal effects
Observed variables
# Q4 & Q11
model.cs = "
Q4 ~ Q11
"
cause.cs = sem(model.cs, data = data)
summary(cause.cs, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 2
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 32.305
## Degrees of freedom 1
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -220.034
## Loglikelihood unrestricted model (H1) -220.034
##
## Akaike (AIC) 444.067
## Bayesian (BIC) 450.088
## Sample-size adjusted Bayesian (SABIC) 443.759
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 ~
## Q11 0.552 0.092 6.004 0.000 0.552 0.440
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 1.101 0.127 8.660 0.000 1.101 0.806
summary(lm(formula = Q4 ~ Q11, data = data)) # compare with SLR
##
## Call:
## lm(formula = Q4 ~ Q11, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.7221 -0.6183 -0.1702 0.8298 2.9335
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.96274 0.32207 2.989 0.00328 **
## Q11 0.55187 0.09254 5.964 1.74e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.056 on 148 degrees of freedom
## Multiple R-squared: 0.1938, Adjusted R-squared: 0.1883
## F-statistic: 35.57 on 1 and 148 DF, p-value: 1.737e-08
semPaths(cause.cs, what = "path", whatLabels = "par", edge.color = "black",
rotation = 2, edge.label.cex = 1, residuals = F)
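As an optional check (a quick sketch using lavInspect()), the R-squared for Q4 can be read off the lavaan fit; it equals 1 - 0.806 = 0.194, the same as the Multiple R-squared from lm():
lavInspect(cause.cs, "rsquare")  # ~ 0.194 for Q4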
Latent variables
model.csl = "
F1 =~ Q4 + Q6 + Q7 + Q11
F2 =~ Q8 + Q9 + Q10
F2 ~ F1
"
cause.csl = sem(model.csl, data = data)
summary(cause.csl, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 15
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 20.451
## Degrees of freedom 13
## P-value (Chi-square) 0.085
##
## Model Test Baseline Model:
##
## Test statistic 380.263
## Degrees of freedom 21
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.979
## Tucker-Lewis Index (TLI) 0.966
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1380.510
## Loglikelihood unrestricted model (H1) -1370.284
##
## Akaike (AIC) 2791.020
## Bayesian (BIC) 2836.179
## Sample-size adjusted Bayesian (SABIC) 2788.707
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.062
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.111
## P-value H_0: RMSEA <= 0.050 0.314
## P-value H_0: RMSEA >= 0.080 0.306
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.063
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 =~
## Q4 1.000 0.941 0.806
## Q6 0.830 0.103 8.040 0.000 0.781 0.718
## Q7 0.960 0.115 8.348 0.000 0.904 0.762
## Q11 0.504 0.088 5.742 0.000 0.474 0.509
## F2 =~
## Q8 1.000 0.651 0.653
## Q9 1.351 0.170 7.951 0.000 0.880 0.844
## Q10 1.444 0.182 7.927 0.000 0.940 0.858
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F2 ~
## F1 0.054 0.067 0.803 0.422 0.078 0.078
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 0.479 0.097 4.932 0.000 0.479 0.351
## .Q6 0.574 0.088 6.530 0.000 0.574 0.485
## .Q7 0.591 0.101 5.825 0.000 0.591 0.420
## .Q11 0.644 0.081 7.997 0.000 0.644 0.741
## .Q8 0.569 0.076 7.526 0.000 0.569 0.573
## .Q9 0.313 0.077 4.054 0.000 0.313 0.288
## .Q10 0.318 0.086 3.693 0.000 0.318 0.264
## F1 0.886 0.168 5.279 0.000 1.000 1.000
## .F2 0.421 0.101 4.189 0.000 0.994 0.994
semPaths(cause.csl, what = "path", whatLabels = "par", edge.color = "black",
rotation = 2, edge.label.cex = 1,
sizeMan = 4, sizeLat = 6, residuals = F)
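Note that the fit of this model is identical to the correlational latent-variable model above (same test statistic, df, CFI, TLI, RMSEA and SRMR) because regressing F2 on F1 merely reparameterizes the factor covariance. As an optional convenience (a quick sketch), the key fit indices can be pulled out in one line:
fitMeasures(cause.csl, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))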
Multiple variables
model.cs1 = "
Q4 ~ Q7 + Q11 + Q6
"
cause.cs1 = sem(model.cs1, data = data)
summary(cause.cs1, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 4
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 98.444
## Degrees of freedom 3
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -186.964
## Loglikelihood unrestricted model (H1) -186.964
##
## Akaike (AIC) 381.928
## Bayesian (BIC) 393.970
## Sample-size adjusted Bayesian (SABIC) 381.311
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 ~
## Q7 0.366 0.072 5.087 0.000 0.366 0.372
## Q11 0.259 0.080 3.218 0.001 0.259 0.206
## Q6 0.309 0.078 3.964 0.000 0.309 0.287
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 0.708 0.082 8.660 0.000 0.708 0.519
semPaths(cause.cs1, what = "path", whatLabels = "par", edge.color = "black",
rotation = 2, edge.label.cex = 1, residuals = F)
summary(lm(Q4 ~ Q7 + Q11 + Q6, data = data)) # compare with MLR
##
## Call:
## lm(formula = Q4 ~ Q7 + Q11 + Q6, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.36767 -0.61468 -0.05963 0.49242 2.42276
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.06699 0.28680 -0.234 0.81564
## Q7 0.36638 0.07301 5.018 1.49e-06 ***
## Q11 0.25881 0.08153 3.174 0.00183 **
## Q6 0.30872 0.07894 3.911 0.00014 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.853 on 146 degrees of freedom
## Multiple R-squared: 0.4812, Adjusted R-squared: 0.4706
## F-statistic: 45.15 on 3 and 146 DF, p-value: < 2.2e-16
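The unstandardized estimates match the MLR coefficients; the standard errors differ slightly because lavaan's ML estimator divides the residual variance by n rather than n - k - 1. As an optional extra (a quick sketch), parameterEstimates() gives the same estimates with 95% confidence intervals:
parameterEstimates(cause.cs1)  # includes ci.lower and ci.upper for each path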
Mediation
model.me = "
Q11 ~ a*Q7 # mediator
Q4 ~ c*Q7 + b*Q11
ab := a*b # indirect effect
total := c + a*b # total effect
"
med.me = sem(model.me, data = data)
summary(med.me, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 5
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 104.734
## Degrees of freedom 3
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -386.085
## Loglikelihood unrestricted model (H1) -386.085
##
## Akaike (AIC) 782.169
## Bayesian (BIC) 797.222
## Sample-size adjusted Bayesian (SABIC) 781.398
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q11 ~
## Q7 (a) 0.285 0.060 4.776 0.000 0.285 0.363
## Q4 ~
## Q7 (c) 0.511 0.065 7.811 0.000 0.511 0.518
## Q11 (b) 0.316 0.083 3.797 0.000 0.316 0.252
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q11 0.754 0.087 8.660 0.000 0.754 0.868
## .Q4 0.782 0.090 8.660 0.000 0.782 0.573
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## ab 0.090 0.030 2.972 0.003 0.090 0.092
## total 0.601 0.064 9.422 0.000 0.601 0.610
semPaths(med.me, what = "path", whatLabels = "name", edge.color = "black",
layout = "spring", edge.label.cex = 1, residuals = F)
semPaths(med.me, what = "path", whatLabels = "par", edge.color = "black",
layout = "spring", edge.label.cex = 1, residuals = F)
Compare with the model without the mediator: c = 0.51 (with mediator) is smaller than c = 0.60 (without mediator). Since a, b, and c are all significant, this is partial mediation.
model.me1 = "
Q4 ~ c*Q7
"
med.me1 = sem(model.me1, data = data)
summary(med.me1, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 2
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 69.738
## Degrees of freedom 1
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -201.317
## Loglikelihood unrestricted model (H1) -201.317
##
## Akaike (AIC) 406.634
## Bayesian (BIC) 412.655
## Sample-size adjusted Bayesian (SABIC) 406.326
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 ~
## Q7 (c) 0.601 0.064 9.422 0.000 0.601 0.610
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 0.858 0.099 8.660 0.000 0.858 0.628
semPaths(med.me1, what = "path", whatLabels = "par", edge.color = "black",
rotation = 2, edge.label.cex = 1, residuals = F)
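For inference on the indirect effect ab, a bootstrap confidence interval is often preferred over the delta-method z-test shown above. A minimal sketch (1000 bootstrap samples are assumed here; the run takes a little while and the results vary slightly between runs):
med.boot = sem(model.me, data = data, se = "bootstrap", bootstrap = 1000)
parameterEstimates(med.boot, boot.ci.type = "bca.simple")  # inspect the ab and total rows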
Moderation/Interaction
data$Q7.Q8 = data$Q7*data$Q8 # create interaction
head(data)
| ID | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Q11 | Q12 | Q7.Q8 |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|----:|----:|----:|------:|
|  1 |  2 |  3 |  3 |  3 |  4 |  4 |  3 |  3 |  3 |   3 |   4 |   2 |     9 |
|  2 |  3 |  2 |  3 |  3 |  4 |  4 |  4 |  3 |  3 |   3 |   4 |   2 |    12 |
|  3 |  5 |  4 |  5 |  1 |  1 |  1 |  1 |  4 |  4 |   5 |   1 |   4 |     4 |
|  4 |  2 |  2 |  2 |  4 |  3 |  2 |  2 |  2 |  2 |   2 |   3 |   3 |     4 |
|  5 |  4 |  1 |  4 |  2 |  5 |  1 |  4 |  5 |  5 |   3 |   4 |   4 |    20 |
|  6 |  4 |  4 |  4 |  3 |  4 |  4 |  4 |  3 |  4 |   4 |   4 |   4 |    12 |
model.mo = "
Q4 ~ Q7 + Q8 + Q7.Q8
"
mod.mo = sem(model.mo, data = data)
summary(mod.mo, fit.measures = T, standardized = T) # complete moderation
## lavaan 0.6.15 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 4
##
## Number of observations 150
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 76.491
## Degrees of freedom 3
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -197.940
## Loglikelihood unrestricted model (H1) -197.940
##
## Akaike (AIC) 403.880
## Bayesian (BIC) 415.923
## Sample-size adjusted Bayesian (SABIC) 403.264
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Q4 ~
## Q7 0.179 0.196 0.914 0.361 0.179 0.182
## Q8 -0.207 0.164 -1.260 0.207 -0.207 -0.176
## Q7.Q8 0.124 0.057 2.179 0.029 0.124 0.549
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Q4 0.820 0.095 8.660 0.000 0.820 0.601
semPaths(mod.mo, what = "path", whatLabels = "par", edge.color = "black",
edge.label.cex = 1, residuals = F)
lm(Q4 ~ Q7 + Q8 + Q7*Q8, data = data)
##
## Call:
## lm(formula = Q4 ~ Q7 + Q8 + Q7 * Q8, data = data)
##
## Coefficients:
## (Intercept) Q7 Q8 Q7:Q8
## 1.7515 0.1793 -0.2069 0.1236
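Because Q7.Q8 is a raw product term, the Q7 and Q8 coefficients above are simple effects evaluated where the other variable equals 0, a value outside the 1-5 response range. A common alternative (a sketch for illustration; the centered variables Q7c, Q8c and their product Q7c.Q8c are created here) is to mean-center before forming the product, which leaves the interaction estimate unchanged but makes the main effects interpretable at the means:
data$Q7c = data$Q7 - mean(data$Q7)
data$Q8c = data$Q8 - mean(data$Q8)
data$Q7c.Q8c = data$Q7c * data$Q8c  # interaction of centered variables
model.mo.c = "
Q4 ~ Q7c + Q8c + Q7c.Q8c
"
mod.mo.c = sem(model.mo.c, data = data)
summary(mod.mo.c, standardized = T)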
Exercise
Use the built-in lavaan data sets:
data("HolzingerSwineford1939")
data("PoliticalDemocracy")
Mental ability test data set
Descriptive
data1 = HolzingerSwineford1939
dim(data1)
## [1] 301 15
names(data1)
## [1] "id" "sex" "ageyr" "agemo" "school" "grade" "x1" "x2" "x3"
## [10] "x4" "x5" "x6" "x7" "x8" "x9"
head(data1)
| id | sex | ageyr | agemo | school  | grade |       x1 |   x2 |    x3 |       x4 |   x5 |        x6 |       x7 |   x8 |       x9 |
|---:|----:|------:|------:|:--------|------:|---------:|-----:|------:|---------:|-----:|----------:|---------:|-----:|---------:|
|  1 |   1 |    13 |     1 | Pasteur |     7 | 3.333333 | 7.75 | 0.375 | 2.333333 | 5.75 | 1.2857143 | 3.391304 | 5.75 | 6.361111 |
|  2 |   2 |    13 |     7 | Pasteur |     7 | 5.333333 | 5.25 | 2.125 | 1.666667 | 3.00 | 1.2857143 | 3.782609 | 6.25 | 7.916667 |
|  3 |   2 |    13 |     1 | Pasteur |     7 | 4.500000 | 5.25 | 1.875 | 1.000000 | 1.75 | 0.4285714 | 3.260870 | 3.90 | 4.416667 |
|  4 |   1 |    13 |     2 | Pasteur |     7 | 5.333333 | 7.75 | 3.000 | 2.666667 | 4.50 | 2.4285714 | 3.000000 | 5.30 | 4.861111 |
|  5 |   2 |    12 |     2 | Pasteur |     7 | 4.833333 | 4.75 | 0.875 | 2.666667 | 4.00 | 2.5714286 | 3.695652 | 6.30 | 5.916667 |
|  6 |   2 |    14 |     1 | Pasteur |     7 | 5.333333 | 5.00 | 2.250 | 1.000000 | 3.00 | 0.8571429 | 4.347826 | 6.65 | 7.500000 |
str(data1)
## 'data.frame': 301 obs. of 15 variables:
## $ id : int 1 2 3 4 5 6 7 8 9 11 ...
## $ sex : int 1 2 2 1 2 2 1 2 2 2 ...
## $ ageyr : int 13 13 13 13 12 14 12 12 13 12 ...
## $ agemo : int 1 7 1 2 2 1 1 2 0 5 ...
## $ school: Factor w/ 2 levels "Grant-White",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ grade : int 7 7 7 7 7 7 7 7 7 7 ...
## $ x1 : num 3.33 5.33 4.5 5.33 4.83 ...
## $ x2 : num 7.75 5.25 5.25 7.75 4.75 5 6 6.25 5.75 5.25 ...
## $ x3 : num 0.375 2.125 1.875 3 0.875 ...
## $ x4 : num 2.33 1.67 1 2.67 2.67 ...
## $ x5 : num 5.75 3 1.75 4.5 4 3 6 4.25 5.75 5 ...
## $ x6 : num 1.286 1.286 0.429 2.429 2.571 ...
## $ x7 : num 3.39 3.78 3.26 3 3.7 ...
## $ x8 : num 5.75 6.25 3.9 5.3 6.3 6.65 6.2 5.15 4.65 4.55 ...
## $ x9 : num 6.36 7.92 4.42 4.86 5.92 ...
We need to recode the sex variable from {1, 2} to {0, 1}, i.e. turn it into a dummy variable:
data1$sex1 = data1$sex - 1
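A quick cross-tabulation (an optional sanity check) confirms the recoding:
table(data1$sex, data1$sex1)  # 1 -> 0, 2 -> 1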
head(data1)
| id | sex | ageyr | agemo | school  | grade |       x1 |   x2 |    x3 |       x4 |   x5 |        x6 |       x7 |   x8 |       x9 | sex1 |
|---:|----:|------:|------:|:--------|------:|---------:|-----:|------:|---------:|-----:|----------:|---------:|-----:|---------:|-----:|
|  1 |   1 |    13 |     1 | Pasteur |     7 | 3.333333 | 7.75 | 0.375 | 2.333333 | 5.75 | 1.2857143 | 3.391304 | 5.75 | 6.361111 |    0 |
|  2 |   2 |    13 |     7 | Pasteur |     7 | 5.333333 | 5.25 | 2.125 | 1.666667 | 3.00 | 1.2857143 | 3.782609 | 6.25 | 7.916667 |    1 |
|  3 |   2 |    13 |     1 | Pasteur |     7 | 4.500000 | 5.25 | 1.875 | 1.000000 | 1.75 | 0.4285714 | 3.260870 | 3.90 | 4.416667 |    1 |
|  4 |   1 |    13 |     2 | Pasteur |     7 | 5.333333 | 7.75 | 3.000 | 2.666667 | 4.50 | 2.4285714 | 3.000000 | 5.30 | 4.861111 |    0 |
|  5 |   2 |    12 |     2 | Pasteur |     7 | 4.833333 | 4.75 | 0.875 | 2.666667 | 4.00 | 2.5714286 | 3.695652 | 6.30 | 5.916667 |    1 |
|  6 |   2 |    14 |     1 | Pasteur |     7 | 5.333333 | 5.00 | 2.250 | 1.000000 | 3.00 | 0.8571429 | 4.347826 | 6.65 | 7.500000 |    1 |
describe(data1)
|         | vars |   n |        mean |          sd |     median |     trimmed |        mad |        min |        max |      range |       skew |   kurtosis |        se |
|:--------|-----:|----:|------------:|------------:|-----------:|------------:|-----------:|-----------:|-----------:|-----------:|-----------:|-----------:|----------:|
| id      |    1 | 301 | 176.5548173 | 105.9384781 | 163.000000 | 176.7759336 | 140.847000 |  1.0000000 | 351.000000 | 350.000000 | -0.0085838 | -1.3626249 | 6.1061924 |
| sex     |    2 | 301 |   1.5149502 |   0.5006087 |   2.000000 |   1.5186722 |   0.000000 |  1.0000000 |   2.000000 |   1.000000 | -0.0595295 | -2.0030779 | 0.0288546 |
| ageyr   |    3 | 301 |  12.9966777 |   1.0503915 |  13.000000 |  12.8879668 |   1.482600 | 11.0000000 |  16.000000 |   5.000000 |  0.6945948 |  0.2045723 | 0.0605436 |
| agemo   |    4 | 301 |   5.3754153 |   3.4518488 |   5.000000 |   5.3195021 |   4.447800 |  0.0000000 |  11.000000 |  11.000000 |  0.0892744 | -1.2186405 | 0.1989613 |
| school* |    5 | 301 |   1.5182724 |   0.5004981 |   2.000000 |   1.5228216 |   0.000000 |  1.0000000 |   2.000000 |   1.000000 | -0.0727744 | -2.0013197 | 0.0288482 |
| grade   |    6 | 300 |   7.4766667 |   0.5002898 |   7.000000 |   7.4708333 |   0.000000 |  7.0000000 |   8.000000 |   1.000000 |  0.0929683 | -1.9979835 | 0.0288842 |
| x1      |    7 | 301 |   4.9357697 |   1.1674321 |   5.000000 |   4.9647303 |   1.235500 |  0.6666667 |   8.500000 |   7.833333 | -0.2543455 |  0.3075338 | 0.0672897 |
| x2      |    8 | 301 |   6.0880399 |   1.1774506 |   6.000000 |   6.0176349 |   1.111950 |  2.2500000 |   9.250000 |   7.000000 |  0.4700766 |  0.3323940 | 0.0678671 |
| x3      |    9 | 301 |   2.2504153 |   1.1309794 |   2.125000 |   2.1991701 |   1.297275 |  0.2500000 |   4.500000 |   4.250000 |  0.3834294 | -0.9075264 | 0.0651886 |
| x4      |   10 | 301 |   3.0609081 |   1.1641163 |   3.000000 |   3.0248963 |   0.988400 |  0.0000000 |   6.333333 |   6.333333 |  0.2674867 |  0.0801268 | 0.0670985 |
| x5      |   11 | 301 |   4.3405316 |   1.2904722 |   4.500000 |   4.3952282 |   1.482600 |  1.0000000 |   7.000000 |   6.000000 | -0.3497961 | -0.5525369 | 0.0743816 |
| x6      |   12 | 301 |   2.1855719 |   1.0956031 |   2.000000 |   2.0883225 |   1.059000 |  0.1428571 |   6.142857 |   6.000000 |  0.8579486 |  0.8165572 | 0.0631495 |
| x7      |   13 | 301 |   4.1859021 |   1.0895335 |   4.086957 |   4.1636298 |   1.095835 |  1.3043478 |   7.434783 |   6.130435 |  0.2490881 | -0.3074039 | 0.0627997 |
| x8      |   14 | 301 |   5.5270764 |   1.0126151 |   5.500000 |   5.4929461 |   0.963690 |  3.0500000 |  10.000000 |   6.950000 |  0.5252580 |  1.1715556 | 0.0583662 |
| x9      |   15 | 301 |   5.3741233 |   1.0091517 |   5.416667 |   5.3660673 |   0.988400 |  2.7777778 |   9.250000 |   6.472222 |  0.2038709 |  0.2899079 | 0.0581665 |
| sex1    |   16 | 301 |   0.5149502 |   0.5006087 |   1.000000 |   0.5186722 |   0.000000 |  0.0000000 |   1.000000 |   1.000000 | -0.0595295 | -2.0030779 | 0.0288546 |
Categorical variable
sex.model = "
visual =~ x1 + x2 + x3
visual ~ sex1
"
sem.sex = sem(sex.model, data = data1)
summary(sem.sex, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 25 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 7
##
## Number of observations 301
##
## Model Test User Model:
##
## Test statistic 1.818
## Degrees of freedom 2
## P-value (Chi-square) 0.403
##
## Model Test Baseline Model:
##
## Test statistic 122.574
## Degrees of freedom 6
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.005
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1352.235
## Loglikelihood unrestricted model (H1) -1351.326
##
## Akaike (AIC) 2718.470
## Bayesian (BIC) 2744.420
## Sample-size adjusted Bayesian (SABIC) 2722.220
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.111
## P-value H_0: RMSEA <= 0.050 0.628
## P-value H_0: RMSEA >= 0.080 0.172
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.018
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual =~
## x1 1.000 0.695 0.596
## x2 0.804 0.142 5.645 0.000 0.558 0.475
## x3 1.198 0.222 5.392 0.000 0.832 0.737
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual ~
## sex1 -0.302 0.103 -2.929 0.003 -0.435 -0.217
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.876 0.111 7.903 0.000 0.876 0.645
## .x2 1.070 0.104 10.284 0.000 1.070 0.774
## .x3 0.582 0.129 4.498 0.000 0.582 0.457
## .visual 0.460 0.114 4.026 0.000 0.953 0.953
semPaths(sem.sex, what = "path", whatLabels = "par", edge.color = "black",
edge.label.cex = 1, residuals = F, rotation = 2)
Numerical variable
age.model = "
speed =~ x7 + x8 + x9
speed ~ ageyr
"
sem.age = sem(age.model, data = data1)
summary(sem.age, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 23 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 7
##
## Number of observations 301
##
## Model Test User Model:
##
## Test statistic 2.415
## Degrees of freedom 2
## P-value (Chi-square) 0.299
##
## Model Test Baseline Model:
##
## Test statistic 174.223
## Degrees of freedom 6
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.998
## Tucker-Lewis Index (TLI) 0.993
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1226.221
## Loglikelihood unrestricted model (H1) -1225.014
##
## Akaike (AIC) 2466.442
## Bayesian (BIC) 2492.392
## Sample-size adjusted Bayesian (SABIC) 2470.192
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.026
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.121
## P-value H_0: RMSEA <= 0.050 0.532
## P-value H_0: RMSEA >= 0.080 0.237
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.021
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## speed =~
## x7 1.000 0.634 0.583
## x8 1.340 0.205 6.543 0.000 0.850 0.841
## x9 0.857 0.122 6.998 0.000 0.543 0.539
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## speed ~
## ageyr 0.155 0.043 3.615 0.000 0.245 0.257
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x7 0.781 0.084 9.302 0.000 0.781 0.660
## .x8 0.300 0.099 3.019 0.003 0.300 0.293
## .x9 0.720 0.071 10.079 0.000 0.720 0.709
## .speed 0.376 0.084 4.448 0.000 0.934 0.934
semPaths(sem.age, what = "path", whatLabels = "par", edge.color = "black",
edge.label.cex = 1, residuals = F, rotation = 2)
Political Democracy data set
Descriptive
data2 = PoliticalDemocracy
dim(data2)
## [1] 75 11
names(data2)
## [1] "y1" "y2" "y3" "y4" "y5" "y6" "y7" "y8" "x1" "x2" "x3"
CFA model Y
model.y = "
Y =~ y1 + y2 + y3 + y4 + y5 + y6 + y7
"
cfa.y = cfa(model.y, data2)
summary(cfa.y, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 27 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 14
##
## Number of observations 75
##
## Model Test User Model:
##
## Test statistic 31.150
## Degrees of freedom 14
## P-value (Chi-square) 0.005
##
## Model Test Baseline Model:
##
## Test statistic 367.244
## Degrees of freedom 21
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.950
## Tucker-Lewis Index (TLI) 0.926
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1180.866
## Loglikelihood unrestricted model (H1) -1165.292
##
## Akaike (AIC) 2389.733
## Bayesian (BIC) 2422.178
## Sample-size adjusted Bayesian (SABIC) 2378.053
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.128
## 90 Percent confidence interval - lower 0.067
## 90 Percent confidence interval - upper 0.189
## P-value H_0: RMSEA <= 0.050 0.023
## P-value H_0: RMSEA >= 0.080 0.910
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.050
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Y =~
## y1 1.000 2.230 0.856
## y2 1.327 0.171 7.756 0.000 2.958 0.755
## y3 1.050 0.146 7.212 0.000 2.341 0.718
## y4 1.263 0.136 9.320 0.000 2.816 0.846
## y5 0.930 0.110 8.473 0.000 2.073 0.799
## y6 1.140 0.146 7.819 0.000 2.541 0.759
## y7 1.190 0.137 8.716 0.000 2.653 0.813
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .y1 1.816 0.380 4.780 0.000 1.816 0.268
## .y2 6.620 1.208 5.480 0.000 6.620 0.431
## .y3 5.141 0.917 5.606 0.000 5.141 0.484
## .y4 3.140 0.643 4.885 0.000 3.140 0.284
## .y5 2.438 0.463 5.263 0.000 2.438 0.362
## .y6 4.765 0.872 5.464 0.000 4.765 0.425
## .y7 3.616 0.699 5.171 0.000 3.616 0.339
## Y 4.971 1.094 4.544 0.000 1.000 1.000
semPaths(cfa.y, what = "path", whatLabels = "std", rotation = 2,
edge.color = "black", edge.label.cex = 1, residuals = F)
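Given the marginal fit (RMSEA = 0.128), it may help to inspect the correlation residuals to see which indicator pairs the one-factor model reproduces poorly (an optional diagnostic sketch):
residuals(cfa.y, type = "cor")  # correlation residuals; values above about 0.10 commonly flag local misfit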
CFA model X
model.x = "
X =~ x1 + x2 + x3
"
cfa.x = cfa(model.x, data2)
summary(cfa.x, fit.measures = T, standardized = T) # just identified model
## lavaan 0.6.15 ended normally after 22 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 6
##
## Number of observations 75
##
## Model Test User Model:
##
## Test statistic 0.000
## Degrees of freedom 0
##
## Model Test Baseline Model:
##
## Test statistic 219.165
## Degrees of freedom 3
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -241.345
## Loglikelihood unrestricted model (H1) -241.345
##
## Akaike (AIC) 494.690
## Bayesian (BIC) 508.595
## Sample-size adjusted Bayesian (SABIC) 489.684
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.000
## P-value H_0: RMSEA <= 0.050 NA
## P-value H_0: RMSEA >= 0.080 NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## X =~
## x1 1.000 0.667 0.917
## x2 2.193 0.142 15.403 0.000 1.464 0.976
## x3 1.824 0.153 11.883 0.000 1.217 0.872
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.084 0.020 4.140 0.000 0.084 0.159
## .x2 0.108 0.074 1.455 0.146 0.108 0.048
## .x3 0.468 0.091 5.124 0.000 0.468 0.240
## X 0.446 0.087 5.135 0.000 1.000 1.000
semPaths(cfa.x, what = "path", whatLabels = "std", rotation = 2,
edge.color = "black", edge.label.cex = 1, residuals = F)
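This model is just-identified: with 3 indicators there are 3(3 + 1)/2 = 6 unique variances and covariances, and the model estimates exactly 6 free parameters (2 loadings, 3 residual variances, 1 factor variance), leaving 0 degrees of freedom, so the fit is perfect by construction. As an optional check (a quick sketch):
3 * (3 + 1) / 2            # 6 unique elements in the covariance matrix of x1-x3
lavInspect(cfa.x, "npar")  # 6 free parameters
fitMeasures(cfa.x, "df")   # 0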
CFA model with X & Y correlation
model.x.y = "
Y =~ y1 + y2 + y3 + y4 + y5 + y6 + y7
X =~ x1 + x2 + x3
"
cfa.x.y = cfa(model.x.y, data2)
summary(cfa.x.y, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 33 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 21
##
## Number of observations 75
##
## Model Test User Model:
##
## Test statistic 60.292
## Degrees of freedom 34
## P-value (Chi-square) 0.004
##
## Model Test Baseline Model:
##
## Test statistic 633.950
## Degrees of freedom 45
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.955
## Tucker-Lewis Index (TLI) 0.941
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1413.012
## Loglikelihood unrestricted model (H1) -1382.866
##
## Akaike (AIC) 2868.024
## Bayesian (BIC) 2916.691
## Sample-size adjusted Bayesian (SABIC) 2850.505
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.102
## 90 Percent confidence interval - lower 0.058
## 90 Percent confidence interval - upper 0.143
## P-value H_0: RMSEA <= 0.050 0.030
## P-value H_0: RMSEA >= 0.080 0.812
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.062
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Y =~
## y1 1.000 2.214 0.850
## y2 1.323 0.174 7.600 0.000 2.930 0.747
## y3 1.054 0.148 7.146 0.000 2.334 0.716
## y4 1.276 0.137 9.295 0.000 2.826 0.849
## y5 0.947 0.110 8.575 0.000 2.098 0.808
## y6 1.145 0.148 7.747 0.000 2.536 0.757
## y7 1.200 0.138 8.674 0.000 2.658 0.814
## X =~
## x1 1.000 0.670 0.920
## x2 2.178 0.139 15.678 0.000 1.460 0.973
## x3 1.818 0.152 11.975 0.000 1.218 0.872
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Y ~~
## X 0.735 0.209 3.513 0.000 0.496 0.496
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .y1 1.883 0.386 4.878 0.000 1.883 0.277
## .y2 6.787 1.229 5.523 0.000 6.787 0.442
## .y3 5.172 0.920 5.624 0.000 5.172 0.487
## .y4 3.081 0.631 4.884 0.000 3.081 0.278
## .y5 2.333 0.447 5.224 0.000 2.333 0.346
## .y6 4.791 0.873 5.486 0.000 4.791 0.427
## .y7 3.589 0.692 5.185 0.000 3.589 0.337
## .x1 0.081 0.020 4.130 0.000 0.081 0.153
## .x2 0.121 0.071 1.708 0.088 0.121 0.054
## .x3 0.467 0.090 5.160 0.000 0.467 0.239
## Y 4.904 1.090 4.499 0.000 1.000 1.000
## X 0.449 0.087 5.175 0.000 1.000 1.000
semPaths(cfa.x.y, what = "path", whatLabels = "std", rotation = 2,
edge.color = "black", edge.label.cex = 1, residuals = F)
SEM model with Y to X path
model.y_x = "
Y =~ y1 + y2 + y3 + y4 + y5 + y6 + y7
X =~ x1 + x2 + x3
X ~ Y
"
sem.y_x = cfa(model.y_x, data2)
summary(sem.y_x, fit.measures = T, standardized = T)
## lavaan 0.6.15 ended normally after 37 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 21
##
## Number of observations 75
##
## Model Test User Model:
##
## Test statistic 60.292
## Degrees of freedom 34
## P-value (Chi-square) 0.004
##
## Model Test Baseline Model:
##
## Test statistic 633.950
## Degrees of freedom 45
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.955
## Tucker-Lewis Index (TLI) 0.941
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -1413.012
## Loglikelihood unrestricted model (H1) -1382.866
##
## Akaike (AIC) 2868.024
## Bayesian (BIC) 2916.691
## Sample-size adjusted Bayesian (SABIC) 2850.505
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.102
## 90 Percent confidence interval - lower 0.058
## 90 Percent confidence interval - upper 0.143
## P-value H_0: RMSEA <= 0.050 0.030
## P-value H_0: RMSEA >= 0.080 0.812
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.062
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Y =~
## y1 1.000 2.214 0.850
## y2 1.323 0.174 7.600 0.000 2.930 0.747
## y3 1.054 0.148 7.146 0.000 2.334 0.716
## y4 1.276 0.137 9.295 0.000 2.826 0.849
## y5 0.947 0.110 8.575 0.000 2.098 0.808
## y6 1.145 0.148 7.747 0.000 2.536 0.757
## y7 1.200 0.138 8.674 0.000 2.658 0.814
## X =~
## x1 1.000 0.670 0.920
## x2 2.178 0.139 15.678 0.000 1.460 0.973
## x3 1.818 0.152 11.975 0.000 1.218 0.872
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## X ~
## Y 0.150 0.035 4.280 0.000 0.496 0.496
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .y1 1.883 0.386 4.878 0.000 1.883 0.277
## .y2 6.787 1.229 5.523 0.000 6.787 0.442
## .y3 5.172 0.920 5.624 0.000 5.172 0.487
## .y4 3.081 0.631 4.884 0.000 3.081 0.278
## .y5 2.333 0.447 5.224 0.000 2.333 0.346
## .y6 4.791 0.873 5.486 0.000 4.791 0.427
## .y7 3.589 0.692 5.185 0.000 3.589 0.337
## .x1 0.081 0.020 4.130 0.000 0.081 0.153
## .x2 0.121 0.071 1.708 0.088 0.121 0.054
## .x3 0.467 0.090 5.160 0.000 0.467 0.239
## Y 4.904 1.090 4.499 0.000 1.000 1.000
## .X 0.339 0.067 5.043 0.000 0.754 0.754
semPaths(sem.y_x, what = "path", whatLabels = "par", rotation = 2,
edge.color = "black", edge.label.cex = 1, residuals = F,
sizeMan = 8, sizeLat = 8)
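Note that this structural model fits exactly as well as the X-Y correlation model above (test statistic 60.292, df = 34 in both) because the single regression X ~ Y merely reparameterizes the factor covariance; only the interpretation of the path changes. As an optional check (a quick sketch):
fitMeasures(cfa.x.y, c("chisq", "df", "cfi", "rmsea", "srmr"))
fitMeasures(sem.y_x, c("chisq", "df", "cfi", "rmsea", "srmr"))  # identical values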