Is it possible to have two independent variables?

When an experiment includes more than one independent variable, the central question is often whether the variables interact: whether the effect of one depends on the level of the other.
It probably would not surprise you, for example, to hear that the effect of receiving psychotherapy is stronger among people who are highly motivated to change than among people who are not. This is an interaction because the effect of one independent variable (whether or not one receives psychotherapy) depends on the level of another (motivation to change).

Consider, for example, a study in which participants made judgments after sitting in either a clean room or a messy room. If participants were high in private body consciousness, then those in the messy room made harsher judgments. If they were low in private body consciousness, then whether the room was clean or messy did not matter. The effect of one independent variable can depend on the level of the other in several different ways.

This is shown in Figure 8. This pattern is like the hypothetical driving example, in which there was a stronger effect of using a cell phone at night than during the day. One example of a crossover interaction comes from a study by Kathy Gilliland on the effect of caffeine on the verbal test scores of introverts and extraverts (Gilliland, [2]). Introverts perform better than extraverts when they have not ingested any caffeine, but extraverts perform better than introverts when they have ingested 4 mg of caffeine per kilogram of body weight.

In many studies, the primary research question is about an interaction. The study by Brown and her colleagues was inspired by the idea that people with hypochondriasis are especially attentive to any negative health-related information. This led to the hypothesis that people high in hypochondriasis would recall negative health-related words more accurately than people low in hypochondriasis, but would recall non-health-related words about equally accurately.

And of course this is exactly what happened in this study. A figure (not reproduced here) illustrates three basic types of interactions as line graphs: in the top panel, one line remains constant while the other goes up; in the middle panel, both lines go up but at different rates; in the bottom panel, one line goes down and the other goes up so that they cross.

Several key terms are worth restating. A factorial design is an approach to including multiple independent variables in an experiment in which each level of one independent variable is combined with each level of the others to produce all possible combinations. A mixed factorial design is one in which one independent variable is manipulated between subjects and another is manipulated within subjects. A non-manipulated independent variable is one that the researcher measures but does not manipulate. A main effect is, in a factorial design, the statistical relationship between one independent variable and a dependent variable, averaging across the levels of the other independent variable.

Chapter 8: Complex Research Designs. Learning objectives: explain why researchers often include multiple independent variables in their studies; define factorial design, and use a factorial design table to represent and interpret simple factorial designs; distinguish between main effects and interactions, and recognize and give examples of each; sketch and interpret bar graphs and line graphs showing the results of studies with simple factorial designs.

Researchers often include multiple independent variables in their experiments. The most common approach is the factorial design, in which each level of one independent variable is combined with each level of the others to create all possible conditions. In a factorial design, the main effect of an independent variable is its overall effect averaged across all other independent variables. There is one main effect for each independent variable. There is an interaction between two independent variables when the effect of one depends on the level of the other.
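To make these definitions concrete, here is a minimal sketch in R using hypothetical cell means (not data from any study described here): main effects are read off the row and column averages, and an interaction shows up as a difference between simple effects.

```r
# Hypothetical cell means for a 2 x 2 factorial design
# (rows = levels of A, columns = levels of B).
means <- matrix(c(4, 6,
                  4, 9),
                nrow = 2, byrow = TRUE,
                dimnames = list(A = c("a1", "a2"), B = c("b1", "b2")))

rowMeans(means)  # main effect of A: a1 = 5.0 vs. a2 = 6.5 (averaging over B)
colMeans(means)  # main effect of B: b1 = 4.0 vs. b2 = 7.5 (averaging over A)

# Interaction: the simple effect of B is not the same at both levels of A.
means["a1", "b2"] - means["a1", "b1"]  # effect of B at a1: 2
means["a2", "b2"] - means["a2", "b1"]  # effect of B at a2: 5
```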

Some of the most interesting research questions and results in psychology are specifically about interactions.

Practice: Return to the five article titles presented at the beginning of this section. For each one, identify the independent variables and the dependent variable.

Practice: Create a factorial design table for an experiment on the effects of room temperature and noise level on performance on the MCAT.

Be sure to indicate whether each independent variable will be manipulated between-subjects or within-subjects, and explain why.

Practice: Sketch 8 different bar graphs to depict each of the following possible results in a 2 x 2 factorial experiment:

1. No main effect of A; no main effect of B; no interaction
2. Main effect of A; no main effect of B; no interaction
3. No main effect of A; main effect of B; no interaction
4. Main effect of A; main effect of B; no interaction
5. Main effect of A; main effect of B; interaction
6. Main effect of A; no main effect of B; interaction
7. No main effect of A; main effect of B; interaction
8. No main effect of A; no main effect of B; interaction
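For instance, the fifth pattern (a main effect of A, a main effect of B, and an interaction) could be sketched in base R like this, with made-up cell means:

```r
# Made-up cell means showing a main effect of A, a main effect of B,
# and an interaction (the effect of B is larger at a2 than at a1).
means <- matrix(c(4, 5,
                  6, 10),
                nrow = 2, byrow = TRUE,
                dimnames = list(B = c("b1", "b2"), A = c("a1", "a2")))

# Clustered bar graph: bars grouped by level of A, shaded by level of B.
barplot(means, beside = TRUE, legend.text = rownames(means),
        xlab = "Level of independent variable A",
        ylab = "Mean score on the dependent variable")
```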

References

Brown, H. Perceptual and memory biases for health-related information in hypochondriacal individuals. Journal of Psychosomatic Research, 47, 67–.

Gilliland, K. The interactive effect of introversion-extraversion with caffeine-induced arousal on verbal performance.

A related statistical question: is it possible for two independent variables to be correlated? Remember how correlation is defined in terms of variance and covariance: if X and Y are independent, their covariance is zero, so they are also uncorrelated. The reverse is not necessarily true. That is, two perfectly uncorrelated variables are not necessarily independent of each other, because correlation only measures the linear relationship between them.

Just look at the chart from Wikipedia's article on correlation (not reproduced here): the last row shows how two perfectly uncorrelated variables can have a nonlinear relationship and thus be dependent.
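A minimal R sketch of the same idea: make Y an exact nonlinear function of X that is symmetric about zero, so the two are strongly dependent yet essentially uncorrelated.

```r
set.seed(1)
x <- runif(1e5, min = -1, max = 1)  # x is symmetric around zero
y <- x^2                            # y is an exact (nonlinear) function of x
cor(x, y)                           # ~ 0: uncorrelated, yet clearly dependent
```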

But is it possible for two independent variables to be correlated by chance? In a finite sample, yes. The textbook claim "Thus, if X and Y are independent, they are also uncorrelated" is a statement about population quantities; a correlation computed from any particular sample of two independent variables will typically differ from zero just by sampling error.
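A quick simulation makes the point (a minimal R sketch; the sample size and number of replications are arbitrary):

```r
set.seed(2)
# 1,000 experiments: correlate two independent samples of size 20 each.
r <- replicate(1000, cor(rnorm(20), rnorm(20)))
summary(r)  # centered near 0, but individual values stray well away from it
hist(r)     # the spread shrinks as the sample size grows
```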

Turning from independence to multiple regression: the regression problem can be thought of as a sort of response surface problem. What is the expected height Z at each value of X and Y? (An example animation, a rotating figure, appeared at the top of the original page.) The linear regression solution to this problem, in this dimensionality, is a plane.

The plotly package in R will let you "grab" the three-dimensional graph and rotate it with your computer mouse, which lets you see the response surface more clearly. A still view of the Chevy mechanics' predicted scores produced by plotly appeared here in the original.
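Here is a minimal sketch of that idea in R. The data are simulated stand-ins (the actual Chevy mechanics data are not reproduced here), and the plotly call is just one way to render the fitted plane:

```r
set.seed(3)
x1 <- rnorm(20, mean = 50, sd = 10)            # stand-in predictor 1
x2 <- rnorm(20, mean = 100, sd = 15)           # stand-in predictor 2
y  <- 0.5 * x1 + 0.3 * x2 + rnorm(20, sd = 5)  # stand-in criterion

fit <- lm(y ~ x1 + x2)  # the fitted response surface is a plane

# Evaluate the plane on a grid of (x1, x2) values so it can be drawn.
g <- expand.grid(x1 = seq(min(x1), max(x1), length.out = 25),
                 x2 = seq(min(x2), max(x2), length.out = 25))
g$yhat <- predict(fit, newdata = g)

# With plotly, the surface can be grabbed and rotated with the mouse, e.g.:
# library(plotly)
# plot_ly(g, x = ~x1, y = ~x2, z = ~yhat, type = "mesh3d")
```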

Just as in simple regression, the dependent variable is thought of as a linear part plus an error. In multiple regression, the linear part has more than one X variable associated with it; with two predictors, the predicted value is Y' = a + b1*X1 + b2*X2. When we run a multiple regression, we can compute the proportion of variance due to the regression (the set of independent variables considered together). This proportion is called R-square; we use a capital R to show that it's a multiple R instead of a single-variable r. We can also compute the correlation between Y and Y' and square that.

If we do, we will also find R-square. The mean of the residuals is 0, and the variance of the regression part Y' and the variance of the error e add up to the variance of Y, so R-square is the ratio of the variance of Y' to the variance of Y. Recall the scatterplot of Y and Y'. R-square is the proportion of variance in Y due to the multiple regression; it represents the multiple correlation rather than the single correlation that we saw in simple regression.
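Continuing the simulated example above, these identities are easy to verify numerically:

```r
yhat <- fitted(fit)     # Y': the predicted values from the regression
e    <- resid(fit)      # the residuals

mean(e)                 # 0 (up to rounding error)
var(yhat) + var(e)      # variance of regression plus variance of error...
var(y)                  # ...adds up to the variance of Y

cor(y, yhat)^2          # squared correlation between Y and Y'...
summary(fit)$r.squared  # ...reproduces R-square
```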

For our most recent example, with two independent variables, there is a single R-square for the model as a whole, but each X variable has associated with it its own slope, or regression weight.

Each weight is interpreted as the unit change in Y given a unit change in its X, holding the other X variables constant. If we want to make point predictions (predictions of the actual value of the dependent variable given specific values of the independent variables), these are the weights we want. For example, if we have a student's undergraduate grade point average and SAT scores and want to predict their college freshman GPA, the unstandardized regression weights do the job.
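A sketch of such a point prediction in R, with made-up admissions data (the variable names and coefficients are purely illustrative):

```r
set.seed(4)
ugpa   <- runif(100, 2.0, 4.0)                             # prior GPA
sat    <- round(rnorm(100, mean = 1100, sd = 150))         # SAT score
fr_gpa <- 0.6 * ugpa + 0.001 * sat + rnorm(100, sd = 0.3)  # freshman GPA

fit_gpa <- lm(fr_gpa ~ ugpa + sat)

# The unstandardized b weights yield a point prediction directly:
predict(fit_gpa, newdata = data.frame(ugpa = 3.5, sat = 1250))
```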

It is tempting to conclude that variables with large b weights are more important, because Y changes more rapidly with them than with the others.

The problem with unstandardized or raw score b weights in this regard is that they have different units of measurement, and thus different standard deviations. So when we measure different X variables in different units, part of the size of b is attributable to units of measurement rather than importance per se.

So what we can do is standardize all the variables: both Y and each X in turn. If we do that, then all the variables will have a standard deviation equal to one, and the connection to the X variables will be readily apparent from the size of the b weights: all will be interpreted as the number of standard deviations that Y changes when each X changes by one standard deviation.
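Continuing the made-up admissions example, a sketch of both routes to the standardized weights:

```r
# Standardize Y and each X; the slopes of this model are the beta weights.
zfit <- lm(scale(fr_gpa) ~ scale(ugpa) + scale(sat))
coef(zfit)  # intercept ~ 0; slopes are in standard-deviation units

# Equivalently, rescale the raw b weights: beta = b * sd(x) / sd(y).
b <- coef(fit_gpa)
c(b["ugpa"] * sd(ugpa) / sd(fr_gpa),
  b["sat"]  * sd(sat)  / sd(fr_gpa))
```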

The standardized slopes are called beta weights. This is an extremely poor choice of words and symbols, because we have already used beta to mean the population value of b (don't blame me; this is part of the literature).

Generally speaking, in multiple regression, beta will refer to standardized regression weights, that is, to estimates of parameters, unless otherwise noted. Because we are using standardized scores, we are back in the z-score situation. As you recall from the comparison of correlation and regression, in simple regression b = r(s_y/s_x), so with z scores (where both standard deviations are 1) the slope is just r. The earlier formulas I gave for b were composed of sums of squares and cross-products; with z scores, we will be dealing with standardized sums of squares and cross-products.

A standardized, averaged sum of squares is 1, and a standardized, averaged sum of cross-products is a correlation coefficient. The bottom line is that we can estimate beta weights using a correlation matrix. With two independent variables,

beta1 = (r_y1 - r_y2 * r_12) / (1 - r_12^2)
beta2 = (r_y2 - r_y1 * r_12) / (1 - r_12^2)

where r_y1 is the correlation of Y with X1, r_y2 is the correlation of Y with X2, and r_12 is the correlation between the two X variables. Note that the two formulas are nearly identical; the exception is the ordering of the first two symbols in the numerator.

Note that there is a surprisingly large difference in beta weights given the magnitude of correlations. Let's look at this for a minute, first at the equation for beta 1.
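A numerical check of these two formulas, again with the made-up admissions variables:

```r
r_y1 <- cor(fr_gpa, ugpa)  # correlation of Y with X1
r_y2 <- cor(fr_gpa, sat)   # correlation of Y with X2
r_12 <- cor(ugpa, sat)     # correlation between the two predictors

beta1 <- (r_y1 - r_y2 * r_12) / (1 - r_12^2)
beta2 <- (r_y2 - r_y1 * r_12) / (1 - r_12^2)
c(beta1, beta2)  # matches the standardized slopes from lm() above
```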


