
**Identification of Interaction Effects in Survey Expectations:**

A Cautionary Note

Simone Alfarano

and Mishael Milaković

Working Paper No. 75

October 2010


BERG Working Paper Series on Government and Growth
Bamberg Economic Research Group on Government and Growth
Bamberg University, Feldkirchenstraße 21, D-96045 Bamberg
Telephone: (0951) 863 2547, Telefax: (0951) 863 5547
E-mail: public-finance@uni-bamberg.de
http://www.uni-bamberg.de/vwl-fiwi/forschung/berg/
ISBN 978-3-931052-83-6
Series editor: Heinz-Dieter Wenzel (BERG); editorial office: Felix Stübben (felix.stuebben@uni-bamberg.de)

**Abstract**

A growing body of literature reports evidence of social interaction effects in survey expectations. In this note, we argue that evidence in favor of social interaction effects should be treated with caution, or could even be spurious.

Utilizing a parsimonious stochastic model of expectation formation and dynamics, we show that the existing sample sizes of survey expectations are about two orders of magnitude too small to reasonably distinguish between noise and interaction effects. Moreover, we argue that the problem is compounded by the fact that highly correlated responses among agents might not be caused by interaction effects at all, but instead by model-consistent beliefs. Ultimately, these results suggest that existing survey data do little to further our understanding of the process of expectations formation.

Keywords: Survey expectations; model-consistent beliefs; social interaction; networks.

JEL codes: D84, D85, C83.

∗ Financial support by the Volkswagen Foundation through its g

## 1 Introduction

Expectations play a central role in economic theory, yet we know rather little about the actual process of expectation formation. A growing body of literature emphasizes the importance of social interactions in the process of expectation formation, and mostly finds empirical support for interaction effects in reported survey data.

These survey expectations typically consist of several hundred monthly responses by several hundred agents. Here we consider a generic stochastic model of expectation dynamics that contains both a social interaction component and an exogenous signal that represents model-consistent beliefs. The purpose of this note is to show that it is essentially not possible to disentangle the two effects in survey data, and that even if social interactions were present, the required sample size to identify interaction effects is about two orders of magnitude larger than existing sample sizes. Even if we are willing to make strong assumptions about the structure of multidimensional responses, existing survey data will probably remain a very fragile source for the identification of interaction effects or model-consistent beliefs.

Modern macroeconomics assumes that agents know the 'true' model underlying the macroeconomic laws of motion, and that their predictions of the future are on average correct. In their extensive review, Pesaran and Weale (2006) find little if any evidence that survey expectations are model-consistent in this strong sense, which is hardly surprising given the complexity of our macroscopic environment.

Weaker forms of macroeconomic rationality acknowledge that agents face model uncertainty and instead focus on learning (see, e.g., Evans and Honkapohja, 2001; Milani, 2010), informational rigidities (see, e.g., Mankiw and Reis, 2002; Mankiw et al., 2004; Coibion and Gorodnichenko, 2008), imperfect information (see, e.g., Woodford, 2001; Del Negro and Eusepi, 2009), and 'rational inattention' (see, e.g., Sims, 2003). While details of the forward-looking behavior of agents are crucial for the qualitative differences among these approaches, none of them considers the actual process of expectations formation.

Recent econometric work documents heterogeneity in the updating behavior of forecasters (see, e.g., Clements, 2010), and laboratory experiments equally indicate heterogeneity in expectations (see, e.g., Hommes, 2010). The focus on heterogeneity intersects with another strand of research that emphasizes the importance of social interactions in the process of expectations formation. Empirical work on social interactions has traditionally employed discrete choice frameworks that allow for social spillovers in agents' utility (see, e.g., Brock and Durlauf, 2001), but this approach has been rather static in the sense that cross-sectional configurations are viewed as self-consistent equilibria. The discrete choice framework has also been investigated in the context of macroeconomic expectations formation, for instance by positing that agents choose between forming extrapolative expectations and (costly) rational expectations (see, e.g., Lines and Westerhoff, 2010), which can lead to endogenous fluctuations in macroeconomic variables.

Carroll (2003) suggests an alternative route to social interactions, hypothesizing that the diffusion of news from professional forecasters to the rest of the public leads to 'stickiness' in aggregate expectations. The diffusion of expectations is also a defining characteristic in several recent contributions that place greater emphasis on social interactions than on individual concepts of rationality in their study of (survey) expectations. These probabilistic approaches by and large aim for positive models of expectations formation, but have yielded mixed results so far. Bowden and McDonald (2008) study the diffusion of information in various network structures and find a trade-off between volatility in aggregate expectations and the speed at which agents learn the correct state of the world. They also argue that certain network structures can lead to information cascades. This would be consistent with the empirical results of Flieth and Foster (2002), who find that survey expectations are characterized by protracted periods of inertia punctuated by occasional switches from aggregate optimism to pessimism or vice versa.

Flieth and Foster also calibrate a model of 'interactive expectations' with multiple probabilistic equilibria from the data, which suggests that social interactions have become less important over time. Lux (2009) confirms these qualitative features of survey expectations, in particular their pronounced swings in aggregate opinion, but claims evidence in favor of strong interaction effects. Since both consider German survey expectations and utilize similar probabilistic formalizations of the expectations process, the question why they find conflicting results on the importance of interaction effects warrants some attention.

The source of the different findings might, at least in part, be due to the details of the probabilistic processes that the authors employ to model expectations formation. Both approaches formalize changes in expectations through transition probabilities that additively combine an autonomous and an interactive element.

Flieth and Foster (2002) use a three-state model that can only be solved numerically, while Lux (2009) uses a two-state model that exploits well-known results in Markov chain theory and allows for closed-form solutions not only of the limiting distribution but, in principle, of the entire time evolution of the expectations process. Yet irrespective of a model's probabilistic details, we argue here that these differences are likely to originate from size limitations of existing surveys. Even if we knew the details of the interaction mechanism, including the exact parameterization of the expectations process and the network structure among agents, we would still not be capable of distinguishing between interaction effects and essentially random correlations in survey responses, nor would we be able to distinguish model-consistent beliefs from social interactions.

We place a premium on analytical tractability and thus conduct our investigation in the probabilistic tradition employed by Lux (2009). A number of results are known in this parsimonious modeling tradition, including (statistical) equilibrium properties for a wide range of model parameters and the time evolution of the probability density of beliefs. Understanding how the qualitative nature of the model changes with the parameters permits us to isolate the behavioral details of the expectations process from the question whether it is feasible to detect interaction and network eﬀects from existing survey data.

## 2 Stochastic Model of Expectation Dynamics

The model utilized by Lux (2009) traces back to earlier contributions by Weidlich and Haag (1983) and Weidlich (2006), and is very similar, both formally and qualitatively, to the herding model of Kirman (1991, 1993). A prototypical setup in this tradition considers a population of $N$ agents that is divided into two groups, say $X$ and $Y$, of sizes $n$ and $N - n$, respectively. In the context of survey expectations, the two groups correspond to agents who hold optimistic or pessimistic beliefs regarding the future state of an economic or financial indicator.

The basic idea is that agents change state (i) because they follow an exogenous signal, corresponding for instance to model-consistent beliefs, or (ii) because of social interaction with their neighbors, i.e. the agents they communicate with during a given time period. The transition rate for an agent $i$ to switch from state $X$ to state $Y$ is

$$\rho_i(X \to Y) = a_i + \lambda_i \sum_{j \neq i} D_Y(i,j), \qquad (1)$$

where $a_i$ governs the possibility of self-conversion caused by model-consistent beliefs, and the sum captures the influence of the neighbors. The parameter $\lambda_i$ governs the interaction strength between $i$ and its neighbors, indexed by $j$, while $D_Y(i,j)$ is an indicator function serving to count the number of $i$'s neighbors that are in state $Y$,

$$D_Y(i,j) = \begin{cases} 1 & \text{if } j \text{ is a } Y\text{-neighbor of } i, \\ 0 & \text{otherwise.} \end{cases}$$
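As a concrete illustration, the micro-level rate in (1) can be sketched in a few lines of Python. The small ring network and the parameter values below are purely hypothetical and chosen only to make the computation explicit.

```python
# Sketch of the micro-level transition rate: a_i plus lambda_i times the
# number of i's neighbors currently in state Y. All values are illustrative.

def rate_x_to_y(i, state, neighbors, a, lam):
    """Rate at which agent i switches from X to Y."""
    n_y = sum(1 for j in neighbors[i] if state[j] == "Y")
    return a[i] + lam[i] * n_y

# toy example: 4 agents on a ring
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
state = {0: "X", 1: "Y", 2: "Y", 3: "X"}
a = {i: 0.1 for i in neighbors}     # self-conversion rate
lam = {i: 0.5 for i in neighbors}   # interaction strength

print(rate_x_to_y(0, state, neighbors, a, lam))  # 0.1 + 0.5*1 = 0.6
```

Agent 0 has one $Y$-neighbor (agent 1), so its switching rate combines the autonomous term with a single interaction term.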

Analogously, the transition rate in the opposite direction, from a pessimistic to an optimistic state, is given by

$$\rho_i(Y \to X) = a_i + \lambda_i \sum_{j \neq i} D_X(i,j),$$

where $D_X(i,j)$ analogously counts the $X$-neighbors of $i$,

and where the dependence on the particular configuration of neighbors gets lost if we assume that inhomogeneities among the different neighborhood configurations are solely due to fluctuations. Then we can replace the number of $Y$-neighbors around each agent $i$ with the average number of neighbors that agents are linked to, say $D$, and set $n_Y(i) = D P_Y$, with $P_Y$ being the probability that an $i$-neighbor is in state $Y$, which we can approximate with the unconditional fraction $(N-n)/N$ of agents in state $Y$, yielding

$$\rho_i(X \to Y) \approx a_i + \lambda_i D \frac{N-n}{N} = a_i + b_i (N - n), \qquad \rho_i(Y \to X) \approx a_i + \lambda_i D \frac{n}{N} = a_i + b_i\, n, \qquad (7)$$

with $b_i \equiv \lambda_i D / N$.

Basically, the mean-field approximation reduces a complex system of heterogeneous interacting agents to a collection of independent agents who are acting "in the field" that is created by other agents' beliefs and their average behavior.
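The mean-field replacement of an agent's actual $Y$-neighbor count by the average $D\,(N-n)/N$ can be checked numerically. The sketch below is a simplified illustration (it draws each agent's $D$ neighbors at random rather than fixing a network, and the values of $N$, $D$, and $n$ are hypothetical), verifying that the empirical average number of $Y$-neighbors is close to the mean-field value.

```python
import random
random.seed(0)

# Illustrative mean-field check: when states are assigned at random, the
# average number of Y-neighbors of an agent with D neighbors is close to
# D * (N - n) / N, the value used in the mean-field approximation.

N, D, n = 1000, 10, 300                 # n agents in state X, N - n in state Y
states = ["X"] * n + ["Y"] * (N - n)
random.shuffle(states)

avg = 0.0
for i in range(N):
    nbrs = random.sample([j for j in range(N) if j != i], D)  # i's D neighbors
    avg += sum(1 for j in nbrs if states[j] == "Y")
avg /= N

# mean-field value: 10 * 700 / 1000 = 7
assert abs(avg - D * (N - n) / N) < 0.5
```

On a fixed network the same average emerges as long as the degree distribution is sufficiently regular, which foreshadows the regularity conditions mentioned below.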

On the aggregate level, we are interested in the probability of observing a single switch at the system-wide level during some time interval $\Delta t$. Hence we have to sum (7) over all agents in state $Y$ in order to find the aggregate probability that an agent switches from state $Y$ to state $X$ during $\Delta t$, assuming that $\Delta t$ is small enough to constrain the switch to a single agent. Summing (7), which is permissible since the agents are now independent, we obtain

$$\omega(Y \to X) = (N - n)\,(a + b\,n) \qquad (8)$$

and

$$\omega(X \to Y) = n\,(a + b\,(N - n)) \qquad (9)$$

for the reverse switch, where $a$, $b$ are the mean values of $a_i$, $b_i$ averaged over all agents. It turns out that the replacement of behavioral parameters by their ensemble averages is only sensible if the network structure observes some regularity conditions and if the fraction of agents with strictly positive $b_i$ is very large, i.e. as long as the fraction of isolated nodes in the agent network is very small (see Alfarano and Milaković, 2009, for details). We will return to the implications of this point in the final scenario of Section 3.
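The aggregate rates (8) and (9) are easy to write down directly. The sketch below, with purely illustrative parameter values, also checks that the interaction terms cancel in the difference of the two rates, leaving only the self-conversion drift $a(N - 2n)$.

```python
# Aggregate jump rates of the mean-field birth-death process.
# Parameter values are illustrative, not calibrated to any survey.

def omega_up(n, N, a, b):    # a pessimist (Y) becomes an optimist (X): n -> n + 1
    return (N - n) * (a + b * n)

def omega_down(n, N, a, b):  # an optimist becomes a pessimist: n -> n - 1
    return n * (a + b * (N - n))

N, a, b = 100, 0.01, 0.002
for n in (10, 50, 90):
    drift = omega_up(n, N, a, b) - omega_down(n, N, a, b)
    # the interaction terms b*n*(N-n) cancel, leaving the pure drift a*(N-2n)
    assert abs(drift - a * (N - 2 * n)) < 1e-9
```

The cancellation shows that the interaction parameter only affects fluctuations around $n = N/2$, not the direction of the average motion.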

For notational convenience, we set

$$b \equiv \lambda D / N, \qquad (10)$$

while setting $c \equiv \lambda D$ would recover the original formulation of Kirman's ant model. The equilibrium concept associated with the generic transition rates (8) and (9) is a statistical equilibrium: at any time, the state of the system refers to the concentration of agents in one of the two states. We define the state of the system through the concentration $z = n/N$ of agents that are in state $X$.

For large N, the concentration can be treated as a continuous variable. Notice that none of the possible states of z ∈ [0, 1] is an equilibrium in itself, nor are there multiple equilibria in the usual economic meaning of the term.

The notion of equilibrium instead refers to a statistical distribution that describes the proportion of time the system spends in each state. Utilizing the Fokker-Planck equation, we can show that for large $N$ the equilibrium distribution of $z$ is a beta distribution (see Alfarano et al., 2008, for details),

$$p_e(z) = \frac{\Gamma(2\varepsilon)}{\Gamma(\varepsilon)^2}\, z^{\varepsilon - 1} (1 - z)^{\varepsilon - 1}, \qquad \text{with } \varepsilon \equiv a/b.$$

Since $\varepsilon \equiv a/b$ is a ratio of quantities that depend (i) on the time scale at which the process operates ($1/a$ and $1/\lambda$), and (ii) on the spatial characteristics of the underlying network ($D$ and $N$), the parameter of the equilibrium distribution is a well-defined dimensionless quantity. If $\varepsilon < 1$ the distribution is bimodal, with probability mass maxima at $z = 0$ and $z = 1$. Conversely, if $\varepsilon > 1$ the distribution is unimodal, and in the "knife-edge" scenario $\varepsilon = 1$ the distribution becomes uniform. The mean value of $z$, $E[z] = 1/2$, is independent of $\varepsilon$, and intuitively follows from the difference of the transition rates (8) and (9), $a(N - 2n)$, which shows that in equilibrium the system approaches $n = N/2$.
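The unimodal and bimodal regimes can be verified directly from the exact stationary distribution of the birth-death chain with rates (8) and (9), obtained by detailed balance. The sketch below uses illustrative parameter values (not taken from any survey) and checks the shape of the distribution on either side of $\varepsilon = a/b = 1$.

```python
import math

def stationary(N, a, b):
    """Exact stationary distribution of the chain with birth rate
    (N - n)(a + b n) and death rate n (a + b (N - n)), via detailed
    balance: pi(n+1)/pi(n) = omega_up(n)/omega_down(n+1)."""
    logw = [0.0]
    for n in range(N):
        up = (N - n) * (a + b * n)
        down = (n + 1) * (a + b * (N - n - 1))
        logw.append(logw[-1] + math.log(up) - math.log(down))
    m = max(logw)                      # shift in log space for stability
    w = [math.exp(x - m) for x in logw]
    s = sum(w)
    return [x / s for x in w]

N, lam, D = 100, 0.5, 4
b = lam * D / N                        # b = lambda * D / N, Eq. (10)
for a in (0.005, 0.05):                # eps = a/b = 0.25 and 2.5
    eps = a / b
    p = stationary(N, a, b)
    if eps < 1:
        assert p[0] > max(p[1:-1])     # bimodal: mass piles up at the extremes
    else:
        assert max(p[1:-1]) > p[0]     # unimodal: hump in the interior
```

For large $N$ this discrete distribution converges to the continuous beta-shaped equilibrium density, symmetric around $z = 1/2$ as the drift $a(N - 2n)$ suggests.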