Recent Reserve Bank discussion papers (with abstracts) for 2007
Reserve Bank discussion and research papers present the detailed scholarly research of staff economists and visiting scholars. The papers are published throughout the year mainly for academic and professional economists.
Real Business Cycle (RBC) and Dynamic Stochastic General Equilibrium (DSGE) methods have become essential components of the macroeconomist’s toolkit. This literature review stresses recently developed (often Bayesian) techniques for computation and inference, providing a supplement to the Romer (2006) textbook treatment which stresses theoretical issues. Many computational aspects are illustrated with reference to the simple divisible labour RBC model familiar to graduate students from King, Plosser and Rebelo (1988), Christiano and Eichenbaum (1992), Campbell (1994) and Romer (2006). Code and US data to replicate the computations are provided on the Internet, together with a number of appendices providing background details.
It is standard to model the output-inflation trade-off as a linear relationship with a time-invariant slope. We assess empirical evidence for three types of nonlinearity in the short-run Phillips curve. At an empirical level, we aim to discover why large negative output gaps in Japan during the period 1998-2002 did not lead to accelerating deflation, but instead coincided with stable, albeit moderately negative, inflation. We document that this episode is most convincingly interpreted as reflecting a gradual flattening of the Phillips curve. Our analysis sheds light on the determinants of the time-variation in the Phillips curve slope. Our results suggest that, in any economy where trend inflation is substantially lower (or substantially higher) today than in past decades, time-variation in the slope of the short-run Phillips curve has become too important to ignore.
We examine the informational content of New Zealand data releases using a parametric dynamic factor model estimated with unbalanced real-time panels of quarterly data. The data are categorised into 21 different release blocks, allowing us to make 21 different factor model forecasts each quarter. We compare three of these factor model forecasts for real GDP growth, CPI inflation, non-tradable CPI inflation, and tradable CPI inflation with real-time forecasts made by the Reserve Bank of New Zealand each quarter. We find that, at some horizons, the factor model produces forecasts of similar accuracy to the Reserve Bank’s forecasts. Analysing the marginal value of each of the data releases reveals the importance of the business opinion survey data – the Quarterly Survey of Business Opinion and the National Bank’s Business Outlook survey – in determining how factor model predictions, and the uncertainty around those predictions, evolve through each quarter.
This paper uses a structural vector autoregression model to analyse the relationship between migration flows, housing construction and house prices in New Zealand. It shows that a net immigration flow equal to one percent of the population is associated with an approximately 10 percent increase in house prices. The size of this relationship, which has existed since the 1960s, is an order of magnitude larger than would be expected from the average change in the population and house prices in the long term. One explanation is that migration flows occur at times when locals are changing their demand for housing because of revised expectations about future income growth. A second explanation is that migrant flows have a destabilising effect on agents’ expectations about the fundamental value of houses. While the paper cannot satisfactorily distinguish between these two options, the results suggest that monetary policy can still be used to dampen the house price changes that occur at times when migration flows are unusually large.
The paper develops an overlapping generations model incorporating a realistic depiction of the credit constraints facing home buyers to explain why home ownership rates have declined in New Zealand since 1990 despite a significant relaxation of credit constraints. The model focuses attention on the role of property investors in the property market, and suggests changes in credit constraints mainly affect the tenure decisions of individual households, but not the aggregate level of house prices. The model suggests the decline in real interest rates is likely to be the cause of the rise in house prices and the decline in home ownership rates since 1990.
In this paper we use a small open economy model to identify the causal factors that drive New Zealand's current account. The model features nonseparable preferences, habit in consumption, imperfect capital mobility, permanent productivity shocks, fiscal shocks and two foreign shocks to explore features that are important in understanding the dynamics of the current account. The results suggest that permanent technology shocks and world cost of capital shocks account for the bulk of variation in the current account at short horizons; at longer horizons, external valuation shocks (reflecting terms of trade and exchange rate developments) account for most of the variance. Habit in consumption and a debt-sensitive risk premium are features that improve overall model fit as measured by posterior odds ratios. These features, and the contribution of foreign and permanent technology shocks, help to explain why the one shock present value model of the current account fails to appropriately characterise the dynamics of the New Zealand current account, as discussed in Munro and Sethi (2006).
Traditional vector autoregressions derive impulse responses using iterative techniques that may compound specification errors. Local projection techniques are robust to this problem, and Monte Carlo evidence suggests they provide reliable estimates of the true impulse responses. We use local linear projections to investigate the dynamic properties of a model for a small open economy, New Zealand. We compare impulse responses from local projections to those from standard techniques, and consider the implications for monetary policy. We pay careful attention to the dimensionality of the model, and focus on the effects of policy on GDP, interest rates, prices and the exchange rate.
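The local projection approach described above can be illustrated with a minimal sketch (not from the paper, and using simulated rather than New Zealand data): for each horizon h, the response variable shifted h periods ahead is regressed on the shock, and the slope coefficient is the impulse response at that horizon.

```python
import numpy as np

# Hypothetical data: an AR(1) "output" series driven by an observed shock.
rng = np.random.default_rng(1)
T = 500
shock = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + shock[t]

# Local projection: for each horizon h, regress y[t+h] on shock[t]
# (plus a constant); the slope estimates the impulse response at h.
H = 8
irf = []
for h in range(H + 1):
    lhs = y[h:]                                        # shifted h ahead
    X = np.column_stack([np.ones(T - h), shock[:T - h]])
    beta, *_ = np.linalg.lstsq(X, lhs, rcond=None)
    irf.append(beta[1])

# For this data-generating process the true response at horizon h is 0.8**h,
# so the estimates should decay geometrically from roughly one.
print(np.round(irf, 2))
```

Unlike the iterated VAR approach, each horizon here is estimated by its own direct regression, so a misspecified one-step model does not compound across horizons.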
This paper argues that UK WWI fiscal policy followed the ‘English method’ identified by Sprague (1917) and his discussants, and revived by the US to finance the Korean War (see Ohanian 1997). During WWI, UK fiscal policy adopted the ‘McKenna rule’, named for Reginald McKenna, Chancellor of the Exchequer (1915-16). McKenna presented his fiscal rule to Parliament in June 1915. The McKenna rule guided UK fiscal policy for the rest of WWI and the interwar period. We draw on narrative evidence to show that motivation for the McKenna rule came from a desire to treat labour and capital fairly and equitably, to avoid passing WWI costs on to future generations, and to commit to a debt retirement path and higher taxes. However, a permanent income model suggests the McKenna rule adversely affected the UK because a higher debt retirement rate produces a lower consumption-output ratio. Data from 1916-37 support this prediction.
This paper argues that bilateral spatial price models do not estimate bilateral transactions costs when trade with third cities is important. The paper examines trans-Atlantic gold arbitrage during the gold standard era by assembling a database indicating when trans-Atlantic gold shipments occurred. It shows that two-way gold shipments between New York and London frequently occurred prior to 1901. However, in 1901 gold shipments to London ceased and were replaced by triangular arbitrage shipments through Paris. Consequently, New York and London gold price data cannot be used to estimate New York-London transactions costs after 1901, as no trade took place.
This note illustrates the connections between the Hessians of numerical optimization problems, variance-covariance matrices for parameter vectors, and the influence that data mismeasurement may have on parameter estimates. Condition numbers provide a central guide to the sensitivity of common numerical problems to data mismeasurement. Examples are provided that clarify their importance. Two simple prescriptions arise from this analysis. First, data must be of an ‘appropriate’ scale. In some cases this means that the data need similar means and similar variances. Second, in numerical algorithms it is desirable to ascertain the condition number of the Hessian implied by the initial parameter values used for numerical optimisation algorithms. Condition numbers are easy to compute and indicate whether the updates from an initial starting value are likely to be poor.
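The scaling prescription above can be sketched with NumPy (a hedged illustration, not the note's own code): for a least-squares problem the Hessian of the objective is proportional to X'X, and regressors on very different scales inflate its condition number, while rescaling to similar variances restores it.

```python
import numpy as np

# Hypothetical design matrix with two regressors on very different scales,
# e.g. a growth rate near one and a level in the tens of thousands.
rng = np.random.default_rng(0)
x1 = rng.normal(0.0, 1.0, 200)         # variance ~1
x2 = rng.normal(0.0, 1.0, 200) * 1e4   # same shape, scaled by 10,000
X = np.column_stack([x1, x2])

# For linear least squares the Hessian of the objective is 2 * X'X.
hessian = 2.0 * X.T @ X
print(np.linalg.cond(hessian))   # huge: the problem is ill-conditioned

# Rescaling each column to unit variance gives a well-conditioned Hessian.
X_scaled = X / X.std(axis=0)
hessian_scaled = 2.0 * X_scaled.T @ X_scaled
print(np.linalg.cond(hessian_scaled))   # close to one
```

A condition number of 10^k means roughly k digits of accuracy are lost, which is why checking it at the starting values of an optimisation is a cheap diagnostic.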
This article solves a high frequency model of price arbitrage incorporating storage and trade when the amount of trade is limited by transport capacity constraints. In equilibrium there is considerable variation in transport costs, because transport costs rise when the demand to ship goods exceeds the capacity limit. This variation is necessary to attract shipping capacity into the industry. In turn, prices in different locations differ by a time varying amount. Thus while the law of one price holds, it holds because of endogenous variation in transport costs.
This memo characterises the business cycles of the New Zealand economy, à la Stock and Watson (1998). The paper provides a set of stylised facts that New Zealand macroeconomic models should, ideally, be capable of emulating. This paper therefore serves as an important backdrop to macro modelling efforts. We also examine the same data series for the US and Australia, providing an indication of which features of New Zealand’s business cycles may be idiosyncratic.
Computing the optimal trajectory over time of key variables is a standard exercise in decision-making and the analysis of many dynamic systems. In practice however, it is often enough to ensure that these variables evolve within certain bounds. In this paper we study the problem of setting monetary policy in a `good enough' sense, rather than in the optimising sense more common in the literature. Important advantages of our satisficing approach over policy optimisation include greater robustness to model, parameter, and shock uncertainty, and a better characterisation of imprecisely defined monetary policy goals. Also, optimisation may be unsuitable for determining prescriptive policy in that it suggests a unique 'best' solution while many solutions may be satisficing. Our analysis frames the monetary policy problem in the context of viability theory which rigorously captures the notion of satisficing. We estimate a simple closed economy model on New Zealand data and use viability theory to discuss how inflation, output, and interest rate may be maintained within some acceptable bounds. We derive monetary policy rules that achieve such an outcome endogenously.
The qualitative responses that firms give to business survey questions regarding changes in their own output provide a real-time signal of official output changes. The most commonly-used method to produce an aggregate quantitative indicator from business survey responses - the net balance, or diffusion index - has changed little in 40 years. It focuses on the proportion of survey respondents replying "up", "the same" or "down". This paper investigates whether an improved real-time signal of official output data changes can be derived from a recently advanced method on the aggregation of survey data from panel responses. It also considers the ability of survey data to anticipate revisions to official output data. We find, in a New Zealand application, that exploiting the panel dimension to qualitative survey data gives a better in-sample signal about official data than traditional methods. This is achieved by giving a higher weight to firms whose answers have a close link to official data than to those whose experiences correspond only weakly or not at all. Out-of-sample, it is less clear that the choice of quantification method matters, with simpler and more parsimonious methods hard to improve upon. It is clear, nevertheless, that survey data, exploited in some form, help to explain revisions to official data.
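The traditional net balance mentioned above is simple to compute; as a minimal sketch with hypothetical responses (the weighted panel method the paper advances is not reproduced here), it is the percentage of "up" replies minus the percentage of "down" replies:

```python
from collections import Counter

# Hypothetical survey responses: each firm reports whether its own output
# went "up", stayed "the same", or went "down" this quarter.
responses = ["up", "up", "same", "down", "up",
             "same", "down", "up", "same", "up"]

counts = Counter(responses)
n = len(responses)

# Net balance (diffusion index): percent "up" minus percent "down".
# The "same" replies affect the result only through the denominator.
net_balance = 100 * (counts["up"] - counts["down"]) / n
print(net_balance)  # → 30.0
```

Because every firm gets equal weight, the index discards information about which firms' answers track official data closely, which is the gap the panel-based weighting approach is designed to fill.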
We evaluate the performance of an open economy DSGE-VAR model for New Zealand along both forecasting and policy dimensions. We show that forecasts from a DSGE-VAR and a 'vanilla' DSGE model are competitive with, and in some dimensions superior to, the Reserve Bank of New Zealand's official forecasts. We also use the estimated DSGE-VAR structure to identify optimal policy rules that are consistent with the Reserve Bank's Policy Targets Agreement. Optimal policy rules under parameter uncertainty prove to be relatively similar to the certainty case. The optimal policies react aggressively to inflation and contain a large degree of interest rate smoothing, but place a low weight on responding to output or the change in the nominal exchange rate.
Discussion paper correspondence can be directed to:
Reserve Bank of New Zealand
PO Box 2498