Technology Shocks and Monetary Policy: Revisiting the Fed's Performance

Would the US economy's dynamic response to permanent technology shocks have been different from the actual responses if monetary authorities' systematic response to these shocks had been optimal? To answer this question, we characterize the dynamic effects of permanent technology shocks and the way in which US monetary authorities reacted to these shocks over the sample 1955(1)-2002(4) using a structural VAR. A sticky-price, sticky-wage model is developed and estimated to reproduce these responses. We then formally compare these responses with the outcome of the optimal monetary policy.


Introduction
Following the provocative contribution by (Galí 1999), there has been renewed interest in technology shocks in recent years, especially within the theoretical framework of New Keynesian models. (Galí et al 2003) is a prominent example of this renewed interest, focusing on the link between US monetary policy and the economy's response to such shocks. 1 In this paper, the authors first characterize the Fed's systematic response to technology shocks identified by means of a SVAR model in which only technology shocks account for the unit root in average labor productivity. Second, using a small DSGE model with sticky prices calibrated to US data, they evaluate the extent to which the actual, SVAR-based Fed's response to these shocks is consistent with that implied by simple monetary rules. They find strong evidence in favor of the view that US monetary policy was not optimal during the period preceding the appointment of Paul Volcker as the Fed's Chairman.
The present paper contributes to this literature by revisiting the conclusions reached by (Galí et al 2003). In doing so, our study makes two important improvements on the methodology used in that paper. First, we explicitly include sticky wages in our analysis of US monetary policy. Taking this modeling element into account is potentially important in such an analysis. Indeed, as shown by (Erceg et al 2000), when both prices and wages are sticky, monetary policy faces a nontrivial problem, as opposed to the environment considered by (Galí et al 2003), where the optimal monetary policy calls for a zero response of inflation to a permanent technology shock. Thus, when comparing impulse responses generated from a DSGE model featuring only sticky prices with their SVAR-based counterparts, one must reject the optimality hypothesis as soon as the empirical response of inflation is statistically different from zero. This need not be the case when the DSGE model features sticky wages in addition to sticky prices.
Second, we perform a more systematic evaluation of our DSGE model. To this end, we resort to the minimum distance technique advocated by (Christiano et al 2005) and (Rotemberg and Woodford 1997, 1999), among others. More precisely, the structural parameters of our DSGE model are pinned down so as to minimize a weighted distance between theoretical and VAR-based impulse responses of key macroeconomic variables to a permanent technology shock. Additionally, we resort to a much more detailed DSGE model than (Galí et al 2003). In addition to sticky prices and wages, our setup incorporates material goods and features various hybrid elements, including habit persistence and partial wage and price indexation schemes. All these modeling elements have been shown elsewhere in the literature to help New Keynesian DSGE models better fit US data. In this paper, we confirm this conclusion: most of the associated parameters are found to be significant and allow the DSGE model to replicate fairly well the economy's response to technology shocks.
We start our analysis by characterizing the US economy's response to permanent technology shocks by means of a structural vector autoregression (SVAR) estimated on US data over the sample 1955(1)-2002(4). Following (Galí et al 2003), we split our sample into two separate subsamples, one covering the pre-Volcker period (1955(1)-1979(2)) and the other covering the Volcker-Greenspan period (1982(3)-2002(4)), thus acknowledging a priori the possible presence of a structural break in monetary policy. As in (Galí et al 2003), technology shocks are the only shocks responsible for the unit root in average labor productivity. We show that, within the confines of our SVAR, technology shocks, while perhaps not the dominant source of business cycles, account for a sizable portion of fluctuations in output, hours, inflation, wage inflation, and the nominal interest rate.
Thus, if the SVAR does a good job of identifying technology shocks, these results suggest that monetary authorities should pay attention to these shocks.
Armed with this empirical representation of the data and a DSGE model that reasonably well reproduces the economy's response to identified technology shocks, we ask the counterfactual question: Would the economy's dynamic response have been different from what the SVAR indicates if US monetary authorities' systematic response to technology shocks had been optimal? To answer this question, we follow (Giannoni and Woodford 2004) and derive the monetary authorities' loss function as a second-order approximation to the social utility function. We then compute the optimal response to permanent technology shocks in our DSGE model and develop a simple test of the optimality hypothesis by comparing the outcome of the optimal monetary policy with the SVAR model. It should be noted that this is a limited-information test, in that it does not preclude that monetary policy reacted badly to other shocks that we do not seek to identify in our SVAR.
Our analysis requires that we specify a monetary policy rule a priori before estimating the model. In the present paper, we adopt a specification that closely resembles that advocated by (Taylor 1993). Resorting to such a parsimonious rule allows us to synthesize the complex process of monetary policy with a small number of parameters. Such rules have been shown to describe actual monetary policy well in a number of countries. 2 Within the context of a fully specified, estimated DSGE model, (Boivin and Giannoni 2003) also show that such a parsimonious rule captures the essential features of US monetary policy.
At the same time, it has been argued that such rules perform well relative to the more complicated optimal rule. Thus, we view our modelling choice as a good compromise between parsimony and goodness of fit.
Our main result is that monetary authorities' dynamic reaction to a permanent technology shock does not appear to differ from the optimal response, especially over the pre-Volcker sample period. This result contrasts with the conclusions reached by (Galí et al 2003). The major reason for this is our assumption that both prices and wages are sticky.
Moreover, based on simple quantitative analyses, we show that allowing for sticky wages in addition to sticky prices is important in terms of empirical fit.
The remainder of the paper is organized as follows. Section 2 briefly details our structural VAR approach and comments on the results obtained. Section 3 describes the theoretical model. Section 4 details the minimum distance estimation technique used to select the structural parameters. Section 5 states the program facing monetary authorities and derives the optimal monetary policy. The economy's dynamic responses to permanent technology shocks under this policy are then compared with those derived either from the SVAR or from the theoretical model coupled with a Taylor rule. The last section briefly concludes.

SVAR Analysis
We start our analysis by characterizing the economy's response to permanent technology shocks. This is done by estimating a SVAR in which technology shocks are identified as the only shocks that can have a permanent effect on the long-run level of productivity.
The first subsection details the estimation and identification procedure and the second subsection expounds the empirical results.

Structural VAR Estimation
We use data from the Non Farm Business (NFB) sector over the sample period 1955(1)-2002(4). We define the log of average labor productivity (â t ) as the difference between the log of output (ŷ t ) and the log of hours (n t ). Quarterly inflation (π t ) is the growth rate of output's implicit deflator. Quarterly wage inflation (π w t ) is the growth rate of nominal hourly compensation. Finally, the quarterly nominal interest rate (î t ) is the quarterly Fed Funds rate. 3 The same variables are considered in our DSGE model. We follow (Galí and Rabanal 2004) and extract a quadratic trend from hours, to account for structural changes in the labor market that our model is not designed to reproduce. 4 It has been argued in the literature (Boivin and Giannoni, 2005, Galí et al, 2003) that US monetary policy experienced significant structural changes over the period studied in this paper. We follow (Galí et al 2003) and accordingly split our sample into two subperiods, the first (pre-Volcker) one covering 1955(1)-1979(2) and the second one covering 1982(3)-2002(4). We then estimate our SVAR on each subperiod. 5 As in (Galí et al 2003), the period 1979(3)-1982(2) is excluded because of its idiosyncrasy (Bernanke and Mihov 1998).
Formally, let us consider the data vector z t = (∆â t , n t , π t , π w t , î t )', where ∆ is the first-difference operator. Let m denote the number of variables in z t . We estimate the canonical VAR representation

A(L) z_t = u_t, \quad A(L) = I_m - A_1 L - \cdots - A_\ell L^\ell, \quad E u_t u_t' = \Sigma,

where ℓ is the maximal lag, which we determine by minimizing the Hannan-Quinn information criterion. In our analysis, we found that ℓ = 2. Let us define B(L) = A(L)^{-1}, where I m is the identity matrix of dimension m. Now, we assume that the canonical innovations u t are linear combinations of the structural shocks η t , i.e.

u_t = S \eta_t,

for some nonsingular matrix S. As usual, we impose an orthogonality assumption on the structural shocks, which, combined with a scale normalization, implies E η t η t ' = I m .
Since we identify only a single shock, we need not impose a complete set of restrictions on the matrix S. Let us define C (L) = B (L) S. Given the ordering of z t , we simply require that C (1) be lower triangular, so that only technology shocks can affect the long-run level of productivity. This amounts to imposing that C (1) is the Cholesky factor of B (1) Σ B (1)'. Given consistent estimates of B (1) and Σ, we easily obtain an estimate of C (1). Retrieving S is then a simple task using the formula S = B (1) −1 C (1).
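This identification step can be sketched in a few lines. The following is a minimal numpy illustration with made-up coefficient matrices (not our estimates): it builds B(1) from the VAR coefficients, takes the Cholesky factor of B(1) Σ B(1)', and recovers S exactly as described above.

```python
import numpy as np

def identify_long_run(A_list, Sigma):
    """Identify the structural impact matrix S from long-run restrictions.

    A_list: VAR coefficient matrices A_1..A_l from A(L) z_t = u_t.
    Sigma:  covariance matrix of the canonical innovations u_t.
    Returns S such that u_t = S eta_t and C(1) = B(1) S is lower
    triangular, with B(1) = (I - A_1 - ... - A_l)^{-1}.
    """
    m = Sigma.shape[0]
    B1 = np.linalg.inv(np.eye(m) - sum(A_list))   # long-run MA matrix B(1)
    C1 = np.linalg.cholesky(B1 @ Sigma @ B1.T)    # Cholesky factor of B(1) Sigma B(1)'
    return np.linalg.solve(B1, C1)                # S = B(1)^{-1} C(1)

# Illustrative 2-variable example with made-up coefficients
rng = np.random.default_rng(0)
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
P = rng.standard_normal((2, 2))
Sigma = P @ P.T + 2 * np.eye(2)                   # a valid covariance matrix
S = identify_long_run([A1], Sigma)
```

By construction, S S' reproduces Σ and B(1) S is lower triangular, so only the first (technology) shock moves long-run productivity.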

Results
The dynamics of output, hours, inflation, real wages, and the nominal interest rate in response to a one percent technology shock are reported in figure 1 for the pre-Volcker period and in figure 2 for the Volcker-Greenspan period. 6 In each case, the grey areas represent the 90% asymptotic confidence intervals, which we computed numerically, as indicated in (Hamilton 1994). Notice that output is simply deduced from the combined dynamics of average labor productivity and hours. Similarly, the real wage is deduced from the responses of wage inflation and inflation.
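The bookkeeping behind these deductions can be sketched as follows (the response coefficients below are illustrative, made-up numbers, not our SVAR estimates): output's response cumulates the productivity growth response and adds that of hours, while the real wage cumulates the gap between wage and price inflation.

```python
import numpy as np

# Illustrative IRFs to a technology shock, horizons 0..4 (made-up numbers)
irf_da  = np.array([1.0, 0.2, 0.1, 0.0, 0.0])     # productivity growth
irf_n   = np.array([-0.3, -0.4, -0.2, 0.0, 0.1])  # hours
irf_pi  = np.array([-0.1, 0.0, 0.05, 0.05, 0.0])  # price inflation
irf_piw = np.array([0.0, 0.1, 0.1, 0.05, 0.0])    # wage inflation

irf_y  = np.cumsum(irf_da) + irf_n    # output = cumulated prod. growth + hours
irf_rw = np.cumsum(irf_piw - irf_pi)  # real wage = cumulated (wage infl. - infl.)
```

Both derived responses inherit the sampling uncertainty of their components, which is why their confidence bands must be computed jointly.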
Over the first subperiod (figure 1), output slightly declines on impact, though this response is not statistically significant. After a few quarters, output rises monotonically and significantly toward its new steady-state level. These responses are similar to those obtained by (Galí et al 2003). Hours follow an inverted hump pattern: they decline on impact, continue to decline for two periods, and then revert toward their steady-state level.
These results confirm the pattern obtained by (Galí 1999), (Galí and Rabanal 2004), and (Galí et al 2003). Notice additionally that the response of hours is estimated precisely, with a narrow confidence interval at short and long horizons. This somewhat contradicts results reported elsewhere in the literature. The difference arises because (i) we use quadratically detrended hours and (ii) we do not resort to the same set of covariates in the SVAR in addition to hours and productivity growth. Inflation initially decreases, though not statistically significantly, and then gradually rises toward its steady-state value. The transitional path is significant after a few quarters and exhibits a substantial amount of persistence. Similarly, the real wage exhibits a substantial amount of persistence: though mildly reactive on impact, it then gradually reaches its new steady-state level. Finally, the nominal interest rate follows an inverted hump pattern qualitatively similar to that of hours. The latter is suggestive of accommodative behavior by monetary authorities over our sample, which seem to have reacted to technology shocks with a protracted decline in the nominal interest rate. Interestingly, the patterns of the responses of output and inflation are consistent with what one would expect from a technology shock.
Over the second subperiod (figure 2), we generally obtain responses with the same shapes as those previously described, though in each case the apparent degree of persistence is drastically reduced. In particular, virtually all the inverted hump dynamics have disappeared. Output now rises on impact and rapidly reaches its new steady-state level. The impact response of hours is still negative, but much less pronounced than in the pre-Volcker period. In contrast, the impact response of inflation is similar to that obtained in the previous subperiod, but inflation now returns to its initial level much faster. This is suggestive of a significant change in inflation persistence, which our DSGE model will allow us to interpret in terms of a modification in the underlying degree of nominal rigidity. Finally, it should also be noted that over this subperiod, we obtain very large confidence intervals. This should be kept in mind when interpreting our results.
Before continuing, we must address an important question: Do technology shocks contribute much to fluctuations in our SVAR? This question is of course important, because, ultimately, if these shocks account for a tiny portion of fluctuations, it does not matter much whether monetary authorities correctly reacted to them. To answer this question, we conduct two complementary exercises.
We start by computing the percentage of variance of the k-step-ahead forecast error in the elements of z t due to technology shocks. Over the second subperiod, things appear somewhat different from the first. Technology shocks now account for roughly 60% of the forecast error variance of productivity growth and for roughly 50% of the forecast error variance of inflation. They do not contribute much to the fluctuations of hours (between 1% and 5%), and account for only roughly 10% of the forecast error variance of the Fed Funds rate and wage inflation.
Second, we compute the ratio of the variance of the business cycle components of z t conditional on technology shocks only to the variance of the business cycle components of z t conditional on all five shocks. We proceed as follows. From the estimated VAR coefficients, we construct the series of output, hours, inflation, wage inflation, and the nominal interest rate that would have been obtained with technology shocks only. We then filter these series using the band pass (BP) filter advocated by (Christiano and Fitzgerald 2003). In the implementation of this filter, we retain the traditional definition of the business cycle as those movements between 6 and 32 quarters. The same filter is applied to the original series. We can thus compute the contribution of technology shocks to the variance of the business cycle components of each series. 7 In this case, we reconstruct output as the cumulated sum of productivity growth plus hours.
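The counterfactual step can be sketched as follows. This is a toy numpy illustration with made-up VAR coefficients and impact matrix, not our estimates, and for brevity it compares raw variances rather than band-pass-filtered ones (the paper applies the Christiano-Fitzgerald filter before computing the ratio). The logic is the one described above: re-simulate the VAR feeding in only the first (technology) structural shock, then compare variances series by series.

```python
import numpy as np

rng = np.random.default_rng(1)
m, T = 2, 2000
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])   # made-up VAR coefficients
S = np.array([[1.0, 0.0], [0.4, 0.8]])    # made-up structural impact matrix

eta = rng.standard_normal((T, m))          # orthonormal structural shocks
e_tech = eta.copy()
e_tech[:, 1:] = 0.0                        # keep only the technology shock

z_all = np.zeros((T, m))
z_tech = np.zeros((T, m))
for t in range(1, T):
    z_all[t]  = A1 @ z_all[t - 1]  + S @ eta[t]      # all shocks
    z_tech[t] = A1 @ z_tech[t - 1] + S @ e_tech[t]   # technology only

# share of variance accounted for by technology shocks, variable by variable
share = z_tech.var(axis=0) / z_all.var(axis=0)
```

In the paper's implementation, both the counterfactual and the actual series are band-pass filtered to the 6-32 quarter range before the variance ratio is taken.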
Over the first subsample, we obtain that technology shocks account for 28.06% of the variance of the business cycle component of output. To complete this section, we need to address another important question: is the technology shock that we identify really a technology shock? Here, we give a first answer to this question by running exogeneity tests similar to those implemented by (Francis and Ramey 2005). These tests will be complemented in section 4.4 by a simulation analysis based on our estimated DSGE model.
The approach taken here consists in testing whether variables that are considered exogenous and unrelated to technology are correlated with our identified technology shock.
Following (Francis and Ramey 2005), the exogenous variables that we consider include dummy variables such as the (Ramey and Shapiro 1998) military date variables. Though there are instances where the null of no explanatory power is rejected (P-values below 5%), these tests globally suggest that the dummy variables do not help predict our structural shock, which gives us confidence in our interpretation of these shocks as technology shocks.
To gain further confidence, we also follow (Galí et al 2003) and (Galí and Rabanal 2005), and compute the correlation coefficient between our shock and the purified Solow residuals computed by (Basu et al 2004). The latter measure is available on an annual basis.
Accordingly, we annualize our technology shock by averaging across quarters within a calendar year. Once again, because of data limitations, we focus on the first subsample.
We obtain a correlation of 0.35, significant at the usual level. Thus, these tests are globally supportive of our interpretation of the shock identified through our SVAR.
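The annualization step is straightforward; the following sketch (with simulated placeholder series, not the actual shock or the Basu et al. residuals) averages the quarterly shock within each calendar year and computes the correlation with an annual series.

```python
import numpy as np

def annualize(quarterly):
    """Average a quarterly series across the four quarters of each calendar year."""
    q = np.asarray(quarterly)
    return q.reshape(-1, 4).mean(axis=1)   # assumes length is a multiple of 4

rng = np.random.default_rng(2)
shock_q = rng.standard_normal(96)          # 24 years of quarterly shocks (placeholder)
shock_a = annualize(shock_q)

# correlation with an annual TFP-style series (here just a noisy copy, for illustration)
solow_a = shock_a + 0.5 * rng.standard_normal(24)
corr = np.corrcoef(shock_a, solow_a)[0, 1]
```

With real data, the sample would first be aligned so that each block of four quarters matches a calendar year of the annual residual series.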
Overall, these results suggest that technology shocks, while perhaps not the dominant source of business cycles, still account for a sizable portion of fluctuations in the variables of interest, especially when it comes to the business cycle components of output and inflation. Given this relative importance over the business cycle, it is legitimate for US monetary authorities to pay attention to technology shocks.

The Model
We now expound our theoretical framework. The model contains a large number of modeling elements that should help it achieve a convincing fit. 8 It is similar in spirit to those of (Giannoni and Woodford 2004) and (Galí and Rabanal 2004).

Final Goods and Material Goods
Competitive firms produce a homogeneous final good with inputs of intermediate goods, according to the CES technology

y_t = \left( \int_0^1 y_t(\varsigma)^{(\theta_p-1)/\theta_p} \, \mathrm{d}\varsigma \right)^{\theta_p/(\theta_p-1)},

where y t is the quantity of final good produced in period t and y t (ς) is the input of intermediate good ς. Intermediate goods are imperfectly substitutable, with substitution elasticity θ p > 1. The zero profit condition for final good producers implies that the aggregate price index obeys the relationship

P_t = \left( \int_0^1 P_t(\varsigma)^{1-\theta_p} \, \mathrm{d}\varsigma \right)^{1/(1-\theta_p)}.

Another set of competitive firms produce material goods by combining the same intermediate goods as above. They have access to the CES technology

q_t = \left( \int_0^1 q_t(\varsigma)^{(\theta_p-1)/\theta_p} \, \mathrm{d}\varsigma \right)^{\theta_p/(\theta_p-1)},

where q t is the produced quantity of material goods and q t (ς) denotes the input of intermediate good ς. Notice that the technologies for producing final and material goods share the same substitution elasticity between any two intermediate goods. Accordingly, the price of material goods will be P t .
Let d t (ς) denote the overall demand addressed to the producer of intermediate good ς. The above assumptions imply the following relationship

d_t(\varsigma) = \left( \frac{P_t(\varsigma)}{P_t} \right)^{-\theta_p} (y_t + q_t). \quad (4)

This is the demand function that monopolist ς will take into account when solving her program.

Aggregate Labor Index
Following (Erceg et al 2000), we assume for convenience that a set of differentiated labor inputs, indexed on [0, 1], are aggregated into a single labor index h t by competitive firms, which will be referred to as labor intermediaries in the sequel. They produce the aggregate labor input according to the CES technology

h_t = \left( \int_0^1 h_t(\upsilon)^{(\theta_w-1)/\theta_w} \, \mathrm{d}\upsilon \right)^{\theta_w/(\theta_w-1)},

where θ w > 1 is the elasticity of substitution between any two labor types and h t (υ) denotes the input of labor of type υ. Let W t (υ) denote the nominal wage rate associated with type-υ labor, which labor intermediaries take as given. The first order conditions are

h_t(\upsilon) = \left( \frac{W_t(\upsilon)}{W_t} \right)^{-\theta_w} h_t, \quad (6)

where the aggregate nominal wage is defined as

W_t = \left( \int_0^1 W_t(\upsilon)^{1-\theta_w} \, \mathrm{d}\upsilon \right)^{1/(1-\theta_w)}. \quad (7)

Notice that eq. (7) is a direct consequence of combining eq. (6) with the zero profit condition.

Intermediate Goods
In the third sector, monopolistic firms produce the intermediate goods. Each firm ς is the sole producer of intermediate good ς. Given a demand d t (ς), it faces the following production possibilities

d_t(\varsigma) \le \min \left\{ \frac{m_t(\varsigma)}{s_m}, \frac{e^{z_t} F(n_t(\varsigma))}{1-s_m} \right\},

where F (·) is an increasing and concave production function, n t (ς) is the input of aggregate labor, m t (ς) denotes the input of material goods, and s m is the share of material goods in gross output. This specification is borrowed from (Rotemberg and Woodford 1995). Finally, z t is a productivity shock whose growth rate evolves according to

\Delta z_t = (1-\rho) \log g + \rho \Delta z_{t-1} + \epsilon_t,

where g > 1 is the average, gross growth rate of technical progress, ρ ∈ (0, 1), and ε t ∼ iid(0, σ 2 ). The autocorrelation of productivity shocks is meant to capture the effects of gradual technology diffusion 9 , such as those rationalized by (Rotemberg 2003).
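The technology process just described can be sketched numerically. The snippet below simulates log technology whose growth rate is AR(1) around log(g), as in the text; the parameter values (g, ρ, σ) are illustrative placeholders, not our estimates.

```python
import numpy as np

def simulate_technology(T, g=1.0051, rho=0.3, sigma=0.007, seed=0):
    """Simulate log technology z_t whose growth rate follows
    dz_t = (1 - rho) * log(g) + rho * dz_{t-1} + sigma * eps_t.
    Parameter values are illustrative, not estimates."""
    rng = np.random.default_rng(seed)
    dz = np.empty(T)
    dz[0] = np.log(g)                      # start at the average growth rate
    for t in range(1, T):
        dz[t] = (1 - rho) * np.log(g) + rho * dz[t - 1] \
                + sigma * rng.standard_normal()
    return np.cumsum(dz)                   # z_t is the cumulated growth rate

z = simulate_technology(10_000)
```

Because the innovation is permanent in levels, a one-off ε shifts the long-run path of z_t, which is exactly the property the SVAR exploits for identification.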
Additionally, we assume that monopolistic producers of intermediate goods are subsidized at rate τ p . Furthermore, we assume that this rate is such that the monopoly distortion is completely eliminated.
Over the recent past, a number of authors have argued that including material goods in New Keynesian models is important for obtaining a good empirical fit. 10 In the present paper, following suggestions in (Woodford 2003), the material goods device plays an important role in strengthening the degree of strategic complementarity in price setting decisions.
Cost minimization pins down the firm's real marginal cost. Following (Calvo 1983), we assume that in each period of time, a monopolistic firm can reoptimize its price with probability 1 − α p , irrespective of the time elapsed since it last revised its price. The remaining firms simply rescale their price according to the simple rule

P_t(\varsigma) = \pi_{t-1}^{\gamma_p} \pi^{1-\gamma_p} P_{t-1}(\varsigma),

where π t ≡ P t /P t−1 represents the (gross) inflation rate, π is the steady state inflation rate, and γ p ∈ (0, 1) measures the degree of indexation to the most recently available inflation measure. This is an extension of the inflation indexation mechanism considered in (Woodford 2003): while the latter delivers a hybrid New Phillips curve that is only valid in the neighborhood of a zero-inflation steady state, our formulation enables us to consider strictly positive steady state inflation rates.
Since firm ς is a monopoly supplier, it takes the demand function (4) into account when setting its price. Additionally, it takes into account the fact that this price will presumably hold for more than one period, except for the automatic revisions. Now, let P t (ς) denote the price chosen in period t, and let d t,T (ς) denote the production of good ς in period T if firm ς last reoptimized its price in period t. According to eq. (4), d t,T (ς) obeys the corresponding demand schedule, with the price adjusted for the cumulated automatic revisions. P t (ς) is then selected so as to maximize the expected discounted stream of profits over the price spell. Standard manipulations yield the approximate loglinearized first order condition, in which π̂ t is the logdeviation of π t , ŷ t and ŵ t are the logdeviations of y t e −z t and w t e −z t , respectively, 11 and in which a composite parameter governs the slope of the resulting Phillips curve. Here, F (n), F'(n), and F''(n) denote the values of F and its first and second derivatives, evaluated at the steady state value of n, and β ∈ (0, 1) is the household's subjective discount factor. Letting µ p ≡ θ p /(θ p − 1), notice that it is the term (1 − s m ) rather than (1 − µ p s m ) that appears in eq. (12). This is a direct result of our assumption that there is no monopoly distortion in the deterministic steady state of the model.

Households
The economy is inhabited by differentiated households, indexed on [0, 1]. A typical household υ acts as a monopoly supplier of type-υ labor. It is assumed that at each point in time, only a fraction 1 − α w of the households can set a new wage, which will then remain fixed until the next time the household is allowed to reset it. The remaining households simply revise their wages according to the simple rule

W_t(\upsilon) = g \, \pi_{t-1}^{\gamma_w} \pi^{1-\gamma_w} W_{t-1}(\upsilon),

where γ w ∈ (0, 1) measures the degree of indexation to the most recently available inflation measure. Notice that we let the households index their nominal wage to past inflation as well as to the average growth rate of technical progress. In addition to being economically realistic, this assumption contributes to ensuring the existence of a well-behaved deterministic steady state. Finally, we assume that households are subsidized at rate τ w .
Furthermore, we assume that this rate is such that the monopoly distortion is completely eliminated.
In addition, a typical household must select a sequence of consumption and nominal bond holdings. As such, the problem described above makes the choice of wealth accumulation contingent upon a particular history of wage rate decisions, thus leading to household heterogeneity. For the sake of tractability, we assume that the momentary utility function is separable across consumption and leisure. Combined with the assumption of a complete set of contingent claims markets, this implies that all households make the same choices regarding consumption and differ only by their wage rate and supply of labor. This is directly reflected in our notation.
Household υ's goal in life is to maximize

E_t \sum_{T=t}^{\infty} \beta^{T-t} \left[ U(c_T - b c_{T-1}) - V(h_T(\upsilon)) \right], \quad (14)

where E t is the expectation operator, conditional on information available as of time t, U(·) and V(·) are well-behaved utility functions, and b ∈ (0, 1). The variable c t represents consumption and h t (υ) is household υ's supply of labor. Preferences are characterized by internal habit formation.
The representative agent maximizes (14) subject to the sequence of constraints

c_t + b_t \le w_t(\upsilon) h_t(\upsilon) + \frac{i_{t-1}}{\pi_t} b_{t-1} + \mathrm{div}_t - \xi_t,

where div t denotes profits redistributed by monopolistic firms and w t (υ) ≡ W t (υ) /P t is the real wage rate earned by type-υ labor. Additionally, b t ≡ B t /P t , where B t denotes the nominal bonds acquired in period t and maturing in period t + 1; ξ t denotes lump-sum taxes; i t denotes the gross nominal interest rate.
The first order conditions with respect to c t and b t are the usual marginal utility and Euler conditions. Let us define î t and ĉ t as the logdeviations of i t and c t e −z t , respectively, and λ̂ t as that of λ t e z t . Additionally, let us define b̂ = b/g. We thus obtain the approximate loglinear first order conditions, in which we defined η ≡ b̂/(1 + βb̂ 2 ).
Let us now consider the wage setting decision confronting a household drawn to reoptimize its nominal wage rate in period t, say household υ. In the sequel, it will be convenient to define wage inflation π w t ≡ W t /W t−1 . Since the household is a monopoly supplier, it takes the demand function (6) into account when setting its wage. Additionally, it takes into account the fact that this wage rate will presumably hold for more than one period, except for the automatic revisions. Now, let W t (υ) denote the wage rate chosen in period t, and let h t,T (υ) denote the hours worked in period T if household υ last reoptimized its wage in period t. According to eq. (6), h t,T (υ) obeys the corresponding labor demand schedule, with the nominal wage adjusted for the cumulated automatic revisions.
In the sequel, it will prove convenient to refer to the term in curly brackets as the disutility wedge. Standard manipulations yield the approximate loglinear relation governing wage inflation, in which π̂ w t and ŵ t are the logdeviations of π w t and w t e −z t , respectively, and in which the remaining coefficients are composite functions of the structural parameters.

Monetary Policy and Equilibrium
Let us define the natural rate of output ŷ n t as the level of stochastically detrended output that would have prevailed in the absence of nominal rigidities, and define the output gap x̂ t ≡ ŷ t − ŷ n t . The monetary authority is then assumed to obey a (Taylor 1993)-like rule of the form

\hat{\imath}_t = \rho_i \hat{\imath}_{t-1} + (1 - \rho_i)(a_p \hat{\pi}_t + a_x \hat{x}_t).

This rule incorporates an interest rate smoothing component and the usual feedback terms: monetary authorities react to the deviation of inflation as well as to the deviation of the output gap. 12 A large literature has documented that such simple rules perform well relative to the fully optimal rule.
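A smoothed Taylor-type rule of this kind is one line of code. The sketch below uses illustrative coefficient values (ρ i = 0.8, a p = 1.5, a x = 0.5), not the paper's estimates.

```python
def taylor_rate(i_lag, pi_hat, x_hat, rho_i=0.8, a_p=1.5, a_x=0.5):
    """Smoothed Taylor-type rule:
    i_hat_t = rho_i * i_hat_{t-1} + (1 - rho_i) * (a_p * pi_hat_t + a_x * x_hat_t).
    Coefficient values are illustrative, not estimates from the paper.
    """
    return rho_i * i_lag + (1 - rho_i) * (a_p * pi_hat + a_x * x_hat)

# one point of inflation, zero output gap, starting from steady state:
i0 = taylor_rate(0.0, 1.0, 0.0)
```

With smoothing, only a fraction 1 − ρ i of the desired response is passed through each quarter, which is what generates the gradual interest rate adjustment seen in the data.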
In equilibrium, it must be the case that ĉ t = ŷ t . Combined with eq. (23), the final linear system can then be summarized in matrix form. This system is solved with the AIM package proposed by (Anderson and Moore 1985).

Model Estimation
In this section, we describe the model calibration and the minimum distance estimation technique. We then go on to expound our results.

Structural Parameters Calibration
We partition the model parameters into two groups. The first one collects the parameters which we calibrate prior to estimation. These include parameters that can be given a value based on first order moments, as well as parameters that cannot be separately identified. Let ψ 0 = (β, φ, ω p , s m , θ w , θ p )' denote the vector of calibrated parameters. The calibration is summarized in table 3. The first four parameters can be calibrated to mimic "great ratios", and the last two raise specific problems.
We first set β = 0.9989, as is conventional in the literature. Together with g = 1.0051, this implies a steady state value of the quarterly real interest rate of 0.62%, as in the whole sample 1955(1)-2002(4). Assuming that F is Cobb-Douglas, i.e. y = n 1/φ , we set φ = 3/2, implying a labor share close to 2/3, as in the data. Notice that we implicitly assume that profits are redistributed proportionately to factor incomes, so that 1/φ is indeed the steady state labor share, as in (Chari et al 2000). Given that F is Cobb-Douglas, the definition of ω p implies ω p = φ − 1. Following (Basu 1995), we set s m = 0.5, implying that the share of material goods in gross output is 50%.
Finally, we chose to calibrate θ p and θ w because these parameters cannot be separately identified as long as we want to estimate the probabilities of price and wage fixity, namely α p and α w . The reason is simple. Notice that α p and θ p (resp. α w and θ w ) appear only in eq. (27) (resp. eq. (26)). Fundamentally, the data only allow us to estimate the partial elasticity of inflation (resp. wage inflation) with respect to the real marginal cost (resp. labor disutility wedge), and many combinations of α p and θ p (resp. α w and θ w ) are compatible with a given estimate of this partial elasticity, as explained by (Rotemberg and Woodford 1997) and (Amato and Laubach 2003). 13 Thus, α p and θ p (resp. α w and θ w ) are not separately identified. Here, we chose to estimate α p and α w , which requires that θ p and θ w be calibrated prior to estimation. We set θ p = 10, so that the long-run markup charged by intermediate goods producers amounts to 11%, consistent with the values reported by (Basu and Fernald 1997). Symmetrically, we set θ w = 10.

Structural Parameters Estimation
Recall that we defined the data vector z t = (∆â t , n t , π t , π w t , î t )'. Now, for k ≥ 0, let us define θ k as the vector collecting the dynamic responses of the components of z t+k to a technology shock. Formally, θ k is the first column of C k , where C k is the kth coefficient of C (L). In the sequel, we define

θ = vec(θ_0, θ_1, ..., θ_K),

where the vec (·) operator stacks the columns of a matrix and K is the largest horizon considered. In the vector θ, we replace the response of ∆â t with that of logged output, which we obtain by cumulating the response of ∆â t and adding that of n t to the result. Similarly, for ease of interpretation, we replace the response of π w t with that of the real wage, which we obtain by cumulating the difference between the responses of π w t and π t . We regroup the model's structural coefficients which we seek to estimate in the vector ψ 1 = (η, γ w , γ p , α w , α p , ω w , ρ i , a p , a x , σ )'. These structural coefficients are selected so as to solve

\hat{\psi}_1 = \arg\min_{\psi_1 \in \Psi} \left[ \theta - \theta_m(\psi_0, \psi_1) \right]' V^{-1} \left[ \theta - \theta_m(\psi_0, \psi_1) \right],

where θ m (ψ 0 , ψ 1 ) denotes the theoretical counterpart of θ, Ψ is the set of admissible values for the parameters ψ 1 , and V is a diagonal matrix containing the asymptotic variances of θ along its diagonal. 14 As suggested by (Christiano et al 2005), this choice of weighting matrix ensures that ψ 1 is selected so that the model-based IRFs lie as much as possible within the confidence intervals of the SVAR-based IRFs. The minimization is subject to standard constraints. 15 Letting ψ = (ψ 0 ', ψ 1 ')', it is convenient to define

J(\psi, \theta) = \left[ \theta - \theta_m(\psi) \right]' V^{-1} \left[ \theta - \theta_m(\psi) \right].

To obtain the parameters' standard errors, we proceed as follows. We start by taking a first order Taylor expansion of the first order condition associated with the minimization of J(ψ, θ) in the neighborhood of the true parameter values. Applying standard reasoning, we obtain the asymptotic distribution of √T (ψ̂ 1 − ψ 1 ), whose variance covariance matrix involves Σ θ , the variance covariance matrix of θ, and T, the sample size. In practice, all the partial derivatives are computed numerically at the point estimate.
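The minimum distance step can be sketched as follows. The snippet replaces the DSGE mapping θ m (ψ 0 , ψ 1 ) with a deliberately simple toy mapping (an amplitude and a decay rate generating an AR(1)-shaped IRF), so everything here is illustrative rather than the paper's model; the weighting by the diagonal of V is as described above.

```python
import numpy as np
from scipy.optimize import minimize

def model_irf(psi, horizons=10):
    """Toy stand-in for the DSGE mapping psi -> theoretical IRF (AR(1) decay)."""
    amp, rho = psi
    return amp * rho ** np.arange(horizons)

# "Empirical" IRF and its asymptotic variances (illustrative numbers)
theta = 0.8 * 0.6 ** np.arange(10)
V_diag = np.full(10, 0.01)          # diagonal of the weighting matrix V

def J(psi):
    """Weighted distance between empirical and model-based IRFs."""
    gap = theta - model_irf(psi)
    return gap @ (gap / V_diag)     # gap' V^{-1} gap with diagonal V

res = minimize(J, x0=np.array([0.5, 0.5]), bounds=[(0.0, 2.0), (0.0, 0.99)])
```

Because the toy "data" were generated by the model itself, the minimizer recovers the true (amplitude, decay) pair and drives J to zero; with real IRFs the residual J value feeds the χ² specification test.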
Notice finally that J(ψ, θ) is asymptotically distributed as χ 2 (dim(θ) − dim(ψ 1 )). The estimation of ψ 1 is repeated for each subsample. In the first subsample, the model captures well the protracted declines of inflation, hours, and the nominal interest rate. However, it is less successful at reproducing the initial inflexion of output and real wages. In the second subsample the model underestimates the initial decline in inflation and the initial increase in output. In spite of this, the global specification test does not allow us to reject the model. For the first subsample, we obtain J = 37.17, with a P -value of 99.9%, and for the second, we obtain J = 18.14, with a P -value of 99.9%. We are nonetheless reluctant to emphasize these results because of the large number of degrees of freedom and the well-documented lack of power of such global specification tests.

Estimation Results
During the course of the estimation, we first tried to estimate all the parameters in ψ 1 .
Three parameters were characterized by binding constraints, namely a x = 0 and γ w = 1 in each subsample, and ω w = 3 and ω w = 0 for the first and second subsamples, respectively.
In the latter case, the upper bound corresponds to the value reported by (Prescott 2004) and the lower bound corresponds to the indivisible labor hypothesis, as in (Hansen 1985).
In a second stage, we enforced these equalities and estimated the remaining parameters. 16 This suggests that the degree of wage indexation to past wage inflation is very high and that monetary authorities did not pay particular attention to the dynamics of the output gap. As for ω w , the problems encountered might be a symptom of a lack of identification.
Below, we discuss the remaining parameter estimates.

Pre-Volcker Period
When it comes to the price setting side of the model, we obtain the following results.
First, the probability of no price adjustment is α p = 0.76, implying an average spell of no reoptimization of slightly more than four quarters. Though small compared with other estimates, e.g. (Galí and Gertler 1999), this figure is higher than what the microeconomic evidence reported by (Bils and Klenow 2004) suggests, even when one takes into account the effects of sampling uncertainty. However, it is broadly consistent with the results obtained by (Blinder et al 1998). The degree of price indexation to past inflation is γ p = 0.37, though imprecisely estimated. This implies that in each quarter, fixed prices incorporate roughly 40% of past inflation. The probability of no wage adjustment is α w = 0.77, implying an average spell of no reoptimization of slightly more than five quarters. This value is higher than that reported by (Christiano et al 2005).
When it comes to preference parameters, we obtain standard results. First, with η = 0.4926 and β = 0.9989, we easily deduce that b̂ = 0.8392. In our sample, average quarterly labor productivity growth amounts to 0.51%, that is g = 1.0051, so that b = 0.8434. Turning to monetary policy parameters, our estimate of a p implies that the Taylor principle holds over the pre-Volcker period, in contrast with the view emphasized by (Clarida et al 2000), who find the pre-Volcker period might be characterized by a failure to fulfill the Taylor principle. Our result is more in accordance with (Orphanides 2004), whose real-time analysis suggests that over the pre-Volcker period, the response of monetary policy to inflation was sufficiently aggressive to ensure a determinate equilibrium. 18 We also obtain ρ i = 0.27, though imprecisely estimated. This value suggests that monetary authorities cared about smoothing the nominal interest rate. Another interpretation is that the model does not generate enough endogenous persistence via the feedback effects in eq. (23), so that allowing for extra serial correlation in î t is necessary.
The standard error of technology shocks σ is close to 0.53%. Notice also that ρ = 0.47, suggesting that the model lacks endogenous propagation mechanisms. This implies that the standard error of the growth rate of technical progress is about 0.60, a standard value when compared with other US estimates. We experimented with ρ and constrained this parameter to zero. This resulted in a higher value of σ but did not quantitatively affect the other estimates. In this case, the global specification test remained supportive of our model. Allowing for a positive ρ is however essential to capture the inverted-hump-shaped dynamics of hours at short horizons.
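The 0.60 figure follows from the AR(1) structure: if the growth rate of technical progress has innovation standard error σ and autocorrelation ρ, its unconditional standard deviation is σ/√(1 − ρ²). A quick check with the estimates above:

```python
import math

sigma, rho = 0.53, 0.47  # innovation std (in %) and AR(1) coefficient
std_growth = sigma / math.sqrt(1 - rho**2)
print(round(std_growth, 2))  # 0.6
```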

Volcker-Greenspan Period
Our DSGE model allows us to interpret the possible sources of change over the Volcker-Greenspan period, compared with the previous subsample.
Two parameters seem to be unaffected by the change of subsample, namely α p = 0.74 and η = 0.48. The latter implies b = 0.78. In each case, it is difficult to reject the null hypothesis that the parameter, taken separately, has not changed compared with the previous subsample.
When it comes to the other parameters, we obtain a lower value for the degree of price indexation, with γ p = 0.25. However, this value is not statistically different from what we obtained in the first subsample. Similarly, the degree of interest rate smoothing seems higher, with ρ i = 0.42, but once again, the evidence of structural change is not compelling.
Finally, the autocorrelation of technology shocks seems higher, with ρ = 0.55, but not significantly different from the previous estimate.
The three key parameters that seem to explain most of the observed change in dynamics are α w , a p and σ . We now obtain α w = 0.9, suggesting a much higher degree of nominal wage rigidity. At the same time, recall that the estimation algorithm drove ω w to zero, thus shutting down an important source of strategic complementarity between wage setters.
Thus, it is unclear whether the higher value of α w results in a higher overall degree of nominal wage rigidity. The next section further discusses this point. The evidence of structural change is more apparent when it comes to a p and σ . We now obtain a p = 1.67, suggesting the monetary authorities have been much more reactive to inflation expectations in the Volcker-Greenspan era. This appears to be a largely consensual view of US monetary policy, and confirms results obtained by (Boivin and Giannoni 2005).
Finally, we obtain σ = 0.26, reflecting the fact that most of the impact responses in the Volcker-Greenspan period are much smaller than their pre-Volcker counterparts. The latter finding echoes the view put forth by (Ahmed et al 2004), according to which the "good luck" (i.e. smaller shocks) hypothesis cannot be rejected as a central explanation of the apparent reduction of the volatility of real GDP growth and inflation in the US since 1984. However, our estimate of a p also suggests that monetary policy might have played a significant role as well.
To conclude, we insist that one should interpret the results pertaining to the second subsample with great caution. The IRFs are not estimated with much precision, thus leading to potentially corrupted structural parameter estimates.

Does the SVAR Really Identify Technology Shocks?
In light of a recent set of papers questioning the ability of SVAR models to identify structural shocks, e.g. (Chari et al 2004) and (Dupaigne et al 2005), we now investigate whether a SVAR like ours would correctly recover the technology shocks if the data were generated by our estimated DSGE model. In setting up this robustness analysis, we follow (Altig et al 2004) and, for each subsample, implement the simulation experiment described below.
1. We start by drawing technology shocks from a normal distribution and feed them into our DSGE model. In this first step, we use the estimated values of ψ 1 to simulate paths for (∆â t , n̂ t , π̂ t , π̂ w t , î t )′. Let z (i) m,t , t = 1, . . . , T , denote the ith simulated path of this vector.
2. We draw shocks from the SVAR residuals, eliminate the SVAR-based technology shocks, and compute a sample path for the same vector according to the SVAR parameters. Let z (i) v,t , t = 1, . . . , T , denote the ith simulated path from this second step.
3. We form z (i) t = z (i) m,t + z (i) v,t , and estimate the same SVAR as that described in section 2 on the resulting sample. The responses of z t to a technology shock are then computed and stored.
In step 1, we also discard 200 initial points so as to make sure that the simulation does not depend on initial conditions. Steps 1 to 3 are repeated 1000 times (i = 1, . . . , 1000), thus generating a population of IRFs. These are sorted in ascending order, and we simply keep the 50th and 950th simulated IRFs to form a 90% confidence interval. Notice that implicit in this simulation exercise is the assumption that z (i) m,t and z (i) v,t are orthogonal. This assumption is of course consistent with the identifying constraints in the empirical SVAR.
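The band construction in the experiment above can be sketched as follows; the function standing in for steps 1 to 3 is hypothetical, returning a noisy estimate of a known IRF rather than re-estimating an actual SVAR.

```python
import numpy as np

rng = np.random.default_rng(1)
H, N = 20, 1000                     # IRF horizon and number of replications
true_irf = 0.8 ** np.arange(H)      # the "DSGE model" IRF (illustrative)

def estimated_irf(i):
    # Stand-in for steps 1-3: simulate data, re-estimate the SVAR, and
    # return the SVAR-based IRF; here simply the truth plus sampling noise.
    return true_irf + 0.1 * rng.standard_normal(H)

irfs = np.array([estimated_irf(i) for i in range(N)])
irfs.sort(axis=0)                   # sort the population of IRFs, horizon by horizon
lower, upper = irfs[49], irfs[949]  # 50th and 950th draws: pointwise 90% band
```

The SVAR "correctly identifies" the model's technology shocks when the true IRF lies inside this band and the median simulated IRF has the same sign and shape as the truth.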
The results of these simulation experiments are reported on figures 3 (Pre-Volcker sample) and 4 (Volcker-Greenspan sample).
When it comes to the first subsample, figure 3 clearly shows that the empirical SVAR manages to identify the true (i.e. the DSGE model) technology shocks, in spite of a small upward bias. More precisely, the median responses have the same signs and shapes as the true responses. These conclusions are consistent with simulation results reported by (Erceg et al 2004). Incidentally, this reinforces our confidence in the procedure used to identify technology shocks.
As shown in figure 4, the SVAR has trouble reproducing the true responses to technology shocks for the second subsample. Though the SVAR manages to reproduce the correct signs, it misses the shapes of the responses. This is particularly true when it comes to quarterly wage inflation and the quarterly Fed Funds rate.
What can we conclude from this exercise? As shown by (Chari et al 2004) and (Dupaigne et al 2005), a misspecified SVAR model can, under certain conditions, produce IRFs that are not compatible with the true data generating process. 19 In this case, estimating a DSGE model by means of an MDE procedure applied to impulse responses can yield severe biases.
However, if the data were indeed generated by the DSGE model, then the previous simulations clearly show that, except maybe for the second subsample, a SVAR similar to that estimated in section 2 would correctly identify the "true" technology shocks.

Counterfactual Analysis
Having estimated the structural parameters of our model, we are now in a position to answer the question asked at the beginning of the paper: Would the economy's dynamic response have been different from what the SVAR indicates if US monetary authorities' systematic response to technology shocks had been optimal? To answer this question, we follow the methodology advocated by (Woodford 2003). We start by deriving the appropriate welfare objective and then go on to compute the economy's response to a permanent technology shock under the optimal monetary policy.

Optimal Monetary Policy
Straightforward yet tedious calculations yield the approximate utility-based loss function

L = E 0 Σ t≥0 β t [ λ p (π t − γ p π t−1 )² + λ w (π w t − γ p π t−1 )² + λ x (x t − δ x t−1 )² ] + t.i.p.,   (29)

where t.i.p. stands for "terms independent of policy", and the weights λ p , λ w , λ x , together with δ and κ, are complicated functions of the structural parameters. 20 Notice that this approximate loss function is identical in form to that derived by (Giannoni and Woodford 2004). This result was not guaranteed a priori, since our model differs from theirs due to the presence of permanent technology shocks and material goods.
The monetary authorities' program consists in minimizing the approximate loss function (29), subject to the structural constraints of the model, where ŵ n t and λ̂ n t are stochastic variables beyond the control of monetary authorities, 21 and where we defined the composite parameter κ. Notice that the processes governing ŷ n t , ŵ n t , and λ̂ n t are taken into account in the monetary authorities' problem. Solving the above program results in a system of first order conditions and constraints that we solve, once again, with the AIM algorithm.

Results and Discussion
Given the parameter vector ψ obtained in the previous section, we obtain values for λ p , λ w , λ x , and δ. These are reported in table 5. For ease of interpretation, we actually report λ p , λ w /λ p , and λ x /λ p . These figures suggest that over the first subsample, the correct welfare objective granted a higher weight to (π w t − γ p π t−1 )² than to (π t − γ p π t−1 )². To understand the origin of this result, notice that λ w /λ p is the product of two terms. The second term, which is equal to ξ p /ξ w , is relatively insensitive to our particular calibration choices. This term is akin to the ratio of the partial elasticities of inflation and wage inflation with respect to the real marginal cost and the disutility wedge, respectively. As explained before, these are the quantities that the data truly pinpoint; our calibration simply offers a particular interpretation in terms of degrees of nominal rigidity. The first term, under our calibration which imposes θ w = θ p , depends only on φ > 1. Thus, that λ w /λ p exceeds one simply reflects the fact that the data favor a scenario with ξ p much higher than ξ w . Under our calibration, this in turn reflects that the overall degree of nominal rigidities is higher for wages than for prices.
When it comes to the second subsample, our results stand in contrast with previous estimates derived by (Giannoni and Woodford 2004). In particular, our results suggest that the utility-based loss function now puts a lower relative weight on (π w t − γ p π t−1 )², with λ w /λ p = 0.90. This estimate is in line with the value reported by (Amato and Laubach 2003), who obtained λ w /λ p = 0.89. In this case, we obtain that the overall degree of nominal rigidities is higher for prices than for wages. This might seem to contradict our estimates of α p and α w . However, recall that we imposed ω w = 0, which mechanically shuts down an important source of strategic complementarities between wage setters, thus leading to a higher α w being needed to match a given partial elasticity of wage inflation with respect to the disutility wedge. Notice that (Giannoni and Woodford 2004) obtain much higher values for ω w and θ w , which might explain part of the discrepancy. Notice also that (Amato and Laubach 2003) conclude that over this period the overall degree of nominal rigidity was smaller for wages than for prices.
Having solved the new dynamic system, we can compute the economy's responses to a permanent technology shock under the optimal monetary policy. These responses are reported in figures 1 (Pre-Volcker) and 2 (Volcker-Greenspan). In addition to this visual comparison, we can construct a formal test of the null of no difference by constructing the following Q statistic

Q = [θ̂ − θ o (ψ 0 , ψ 1 )]′ Σ θ −1 [θ̂ − θ o (ψ 0 , ψ 1 )],

where θ o (ψ 0 , ψ 1 ) is θ's theoretical counterpart under the optimal monetary policy. The Q statistic is distributed as a χ² with degrees of freedom equal to dim (θ). In this exercise, we neglect the possibility of sampling uncertainty on the estimated value of ψ 1 , consistently with the spirit of our MDE strategy. We also exclude the dynamic response of î t from θ to conduct this test, because the dynamics of the policy instrument is irrelevant for our purpose 22 ; ultimately we are interested in the responses of output, inflation, and wage inflation. We conduct this test by focussing either on the real wage or on wage inflation.
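In code, the test amounts to a Wald-type distance between the two IRF vectors; the vectors and covariance matrix below are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.stats import chi2

def q_test(theta_hat, theta_opt, Sigma_theta):
    """Q = d' Sigma^-1 d with d = theta_hat - theta_opt, Q ~ chi2(dim(theta))."""
    d = theta_hat - theta_opt
    Q = d @ np.linalg.solve(Sigma_theta, d)
    return Q, chi2.sf(Q, df=len(theta_hat))

rng = np.random.default_rng(2)
Sigma = 0.04 * np.eye(12)                              # placeholder covariance of theta_hat
theta_opt = np.zeros(12)                               # responses under optimal policy
theta_hat = rng.multivariate_normal(theta_opt, Sigma)  # "SVAR-based" responses
Q, p = q_test(theta_hat, theta_opt, Sigma)
# A large p-value means the actual responses are statistically
# indistinguishable from the optimal-policy responses.
```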
We now turn to the implementation of our limited information optimality test, the results of which are reported in table 6.

Pre-Volcker Period
As figure 1 makes clear, though the dynamics of the nominal interest rate under the simple Taylor rule bears little resemblance to the optimal response of î t , it is difficult on the basis of our experiment to reject the null hypothesis that the observed dynamics of output, hours, inflation, and real wages do not differ from their optimal counterparts. In particular, the dynamics of output under the assumed Taylor rule and under the optimal policy are virtually indistinguishable. The negative dynamics of hours is slightly less pronounced under the optimal policy. The same applies for inflation.
As it turns out, the null hypothesis is accepted with a P -value well above 90% when we focus on real wages in addition to output, hours, and inflation. Alternatively, the P -value is 30% when we use wage inflation instead. In either case, we fail to reject the null of no statistical difference between the actual economy's responses and the responses under optimal monetary policy.
This result substantially differs from those of (Galí et al 2003). The latter find no evidence in support of the view that monetary policy was optimal over the pre-Volcker period. The principal reason why we obtain such contrasting results derives from the assumption that prices and wages are both sticky. In an environment where only prices are sticky, the Central Bank can simultaneously stabilize inflation and the output gap completely. Thus, as long as the SVAR-based impulse responses of inflation are statistically different from zero, it is possible that a simple χ² test such as that proposed above would reject the optimality hypothesis. However, as shown by (Erceg et al 2000), as soon as one assumes that wages are also sticky, the Central Bank can no longer simultaneously stabilize inflation, wage inflation, and the output gap. In this case, it is possible that the optimal response of inflation is statistically different from zero. This is what we obtain in practice in our own exercise.
To verify this intuition, we propose the following exercise. In a first step, we reestimate our DSGE model, assuming flexible wages (this amounts to imposing γ w = α w = 0).
During the course of the estimation, we encountered two different problems. As above, the parameter a x is driven toward 0. Additionally, the parameter γ p is driven toward 1.
We accordingly enforce these equalities. More problematic is our finding that a p is driven below one. In this case, we encounter the usual indeterminacy problem. Dealing with the latter is beyond the scope of the present paper, so we simply impose the restriction a p = 1.01. The remaining parameters are estimated, and, while the global specification test remains broadly satisfactory (with a P -value of 99.99%), the overall fit of our model is poor, especially when it comes to hours and wage inflation, as shown in figure 5. 23 Importantly, a Quasi-Maximum-Likelihood (QML) test along the lines of (Newey and West 1987) would reject the restriction γ w = α w = 0.
Second, we recompute the approximate loss function (imposing λ w = 0 and λ p = 1), and rerun our simple χ² test. We obtain a P -value of zero, thus unambiguously rejecting the optimality hypothesis. As is clear from figure 5, the optimal response of inflation is uniformly zero and thus drives the statistic Q to high values. Additionally, the optimal response of the real wage somewhat differs from its SVAR-based counterpart. This is all the more penalizing as the confidence interval of the SVAR-based response of the real wage is relatively narrow at short horizons. This exercise demonstrates that the mere inclusion of sticky wages completely overturns the conclusion that one would have reached based on a simple sticky price model.
It must be emphasized that our counterfactual experiment is not a priori biased toward accepting the null hypothesis. In setting up this exercise, nothing a priori guaranteed that the model with an "ad-hoc" Taylor rule would reproduce the economy's dynamics under the optimal monetary policy. In fact, one may even argue that we a priori hampered the model, in the sense that the "ad-hoc" Taylor rule does not belong to the same parametric class as that of the optimal rule. Thus, one can view our thought experiment as a very conservative (limited information) test of optimality.

The Volcker-Greenspan Period
The model-based and SVAR-based IRFs are reported on figure 2. Once again, notice that in this case, the VAR-based confidence intervals are fairly large. Accordingly, the estimated IRFs do not seem very informative. This might partly explain the problems described above. We then perform our (limited information) test of optimality and compute the Q statistic defined above. We obtain results that still support the optimality hypothesis, with a P -value well above 90%, be it with real wages or wage inflation.
To conclude this exercise, we reestimate the model under the assumption of perfect wage flexibility. Once again, a x is driven toward 0. The parameter γ p is now driven toward 1. We thus enforce these equalities. We still obtain that a p is driven to values close to one, but with a very large standard error. As above, we simply impose the restriction a p = 1.01. The remaining parameters are estimated. The overall fit of our model is good, with a P -value above 90%, as shown in figure 6. 24 Once again, a formal QML test would reject the restriction α w = γ w = 0, though less strongly than in the previous case.
Second, we recompute the approximate loss function (still imposing λ w = 0 and λ p = 1), and rerun our simple χ² test. We now obtain a P -value well above 90%, thus accepting the optimality hypothesis. As is clear from figure 6, the difference between the SVAR-based and the optimal responses is sometimes large, but the SVAR-based responses are not estimated very precisely, so that acknowledging this large sampling uncertainty leads us to accept the null hypothesis.
Of course, this conclusion is consistent with the subsample analysis conducted by (Galí et al 2003). Thus, with or without sticky wages, we fail to reject the null hypothesis that the Fed correctly reacted to permanent technology shocks in the Volcker-Greenspan period.
However, we insist that this exercise should be interpreted with caution, because of the large sampling uncertainty associated with the estimated SVAR. The latter might yield corrupted structural parameter estimates, and substantially reduces the meaningfulness of our simple χ 2 test.

Conclusion
In this paper, we asked the question: would the US economy's dynamic response to permanent technology shocks have been different if the monetary authorities' systematic response to these shocks had been optimal? Our answer stands in contrast with that of (Galí et al 2003) concerning the pre-Volcker period. The main reason for this discrepancy is that we assume wage stickiness in addition to price stickiness. In such an environment, the optimal response of inflation to a permanent technology shock need not be uniformly zero, in contrast with a model with only sticky prices, as in (Galí et al 2003).

Notes

1 See also (Altig et al 2004), (Edge et al 2003), (Ireland 2004).
2 See (Clarida et al 1998, 2000).
3 Output and hours worked are divided by the civilian population over 16. The Fed Funds rate is expressed at a quarterly rate. The data are extracted from the Bureau of Labor Statistics website, except for the Fed Funds rate, which is obtained from the FREDII database.
4 See also (Galí 2005).
5 In each case, only observations from the relevant subsamples are used, especially so for the initial lags.
6 Here, and in the following pictures, the size of the technology shock is normalized to one standard deviation.
7 As recommended by (Christiano and Fitzgerald 2003), we drop two years of data at the beginning and end of the sample before computing these variance ratios.
8 A detailed technical appendix is available from the authors upon request.
9 (Altig et al 2004) and (Galí et al 2003) also consider autocorrelated growth rates of technical progress.
12 We also experimented with Taylor rules featuring the growth rate of output or the deviation of output from its stochastic trend instead of the output gap. The results were not qualitatively altered.
13 See also (Eichenbaum and Fisher 2004) for a related discussion. A similar claim holds when it comes to s m , as argued by (Matheron and Maury 2004).
14 This estimation method relates to that of (Amato and Laubach 2003), (Boivin and Giannoni 2005), (Christiano et al 2005), (Giannoni and Woodford 2004), (Gilchrist and Williams 2000), and (Rotemberg and Woodford 1997, 1999).
15 The constrained minimization is undertaken with the sequential quadratic programming routine provided in the MATLAB optimization package.
16 Accordingly, we subtract 3 from dim(ψ 1 ) to obtain the number of degrees of freedom of the χ² test.
17 We investigated the sensitivity of this result to a higher degree of curvature for the utility function, postulating a utility function with a curvature parameter σ (our baseline specification corresponding to σ = 1). We first set σ = 5, which yielded lower values for η and hence b, at the cost of a deteriorated fit. Second, we estimated σ as an additional free parameter. In this case, σ is driven to values below one (0.45), the fit is marginally improved with respect to the constrained case (σ = 1), and we obtain a very imprecise estimate of σ (standard error of 2.67). Given that this utility function is non-standard, we prefer to stick to the original specification.

J-statistic
Notes: The values in parentheses are the standard errors computed as indicated in the text. Values in brackets are the P -values associated with the J-statistics. A star refers to a parameter which hit a constraint during the course of the first-stage estimation.