No. 22 • September 2005
IFC’s Contribution to the
55th ISI Session, Sydney, 2005
IFC Conference Basel, 2004
Irving Fisher Committee on Central-Bank Statistics
Executive Body: Jan Smets (Chair), Paul Van den Bergh, Almut Steger, Rudi Acx, Radha Binod Barman

COST, QUALITY AND RELEVANCE OF FINANCIAL STATISTICS
Cost, quality and relevance of the BIS international financial statistics (Paul Van den Bergh)
The potential impact of Basel II on central bank data requirements in Jamaica (Myrtle D. Halsall and R. Brian Langrin)
Future challenges in compiling balance of payments and international investment position (Jörgen Ovi and Thomas Elkjar)

ACCOUNTING STANDARDS AND THEIR IMPACT ON FINANCIAL STATISTICS
A comparison of the main features of accounting and statistical standards and review of the latest developments in the field of accounting standards (Paolo Poloni and Patrick Sandars)
From general ledger towards financial statistics (Johan Lammers)
The impact of the introduction of accrual accounting by Australian governments on government finance statistics (Peter Harper)
economy is likely to behave in the future, for the kind of analysis central banks conduct and therefore for the sorts of statistics they need to have.
• New emphases in the mandates of central banks – in particular the explicit focus on financial system stability (as opposed to prudential supervision of individual institutions) – carry implications for data collections and the way we process them.
• The changing data environment, and in particular greater private provision of data, offers central banks opportunities to exploit that information, but also presents some potential pitfalls.
The financial sector and balance sheets
For a long time, data from the ‘real’ side of the economy were of primary interest to macroeconomic policy-makers. This presumably followed the intellectual currents in economics. The development of national income accounting in the 1940s, and the growing optimism about the capacity of macroeconomic policy to deliver consistently high levels of output and employment, emphasised the measurement, forecasting and control of aggregate demand. The various partial indicators of economic activity, culminating in the quarterly estimates of national income and spending, were the raw statistical materials with which generations of economists learned to ply their trade. Of course, central bankers always paid a good deal of attention to financial data like interest rates, lending, credit and money, but even in central banks I suspect that until the mid 1970s most of the prestigious analytical jobs were in the areas dealing with the real economy.

This period was also the heyday of large-scale macroeconometric model building, usually with great detail on the expenditure side of the national accounts and with associated data requirements. It’s worth noting, incidentally, that these models typically failed to capture adequately the inter-linkages between the real and financial sides of the economy. For some time, of course, the financial side was seen as just a passive add-on – many people thought that changes in balance sheets didn’t matter much, and that movements in asset prices were of second-order importance. A common view for many years, in fact, was that monetary policy didn’t matter much.
As the intellectual battle raged over what activist stabilisation policy could, in fact, achieve, the economic and financial upheavals of the 1970s ushered in a period in which financial variables were suddenly seen as much more important – money did matter after all – and discussion focused much more on financial quantities. There was the observed correlation between measures of the money stock and the price level. Irving Fisher’s Equation of Exchange, MV = PQ, made an appearance here, as the quantity theory of money was turned into a policy prescription of beguiling simplicity: if only central banks could control ‘M’, they would in due course stabilise ‘P’.
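Spelled out, the reasoning behind that prescription runs as follows (a standard textbook manipulation, added here for clarity; it is not part of the original address):

```latex
% Equation of exchange (levels):
M V = P Q
% Taking logs and first differences, with lower-case letters denoting growth rates:
m + v = p + q
% If velocity growth v is stable (v \approx 0) and output growth q is determined
% by real factors, then controlling money growth m would pin down inflation:
p \approx m - q
```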
That idea seemed very appealing in the mid 1970s, but as we all know, the policy process turned out to be more complex than that. Today is not the time to explore all that again. It suffices to say that, despite tremendous efforts in developing and analysing a host of measures of money, attempts to impart stability by targeting closely the money stock were much less successful in practice than in theory. Most countries have moved away from that idea towards some sort of implicit or explicit targeting of the ultimate objective, prices, using the short-term nominal interest rate as the instrument.
Yet it would be a mistake to think that this shift signifies that the behaviour of the financial sector has once again come to be viewed as unimportant to the economy. On the contrary, the way in which the financial system responds to financial prices, to regulation (or deregulation) and to the demand for products by the household and business sectors, and the way in which it is constantly innovating, has a major bearing on the path of economic activity. Moreover, the importance of these links is growing.
Opinions vary on whether or not this is a good thing. It has been claimed, for example, that the growth of derivatives markets potentially enhances economic stability, insofar as risks inherent in life can be shifted from those who do not wish to run them to those who do. It has also been claimed that such innovations are highly dangerous – ‘financial weapons of mass destruction’ was one colourful description.3 Either way, an interaction of financial processes with the real economy is in mind; what is at issue is where the risks inherent in economic life are ultimately borne, and whether the people running them understand them and have been paid an appropriate price to do so. This is an area where the statistical collections find it hard to keep up, particularly with the proliferation of financial activity which crosses national borders or occurs off-balance sheet.
3 Available at http://www.berkshirehathaway.com/letters/2002pdf.pdf (Chairman’s letter, p 15).
designed jointly, drawing on the Bank’s existing knowledge about household debt, and the research firm’s expertise in questionnaire design. The main field work was undertaken in January and February this year and the Australian public were generally very co-operative.
Indeed, Reserve Bank staff took a number of calls, emails and letters from people taking an active interest in the survey (though also, it must be said, a number of calls telling us to mind our own business!). The results will be published later this year.
An earlier example of using customised survey data to address a specific issue was the survey of hedging practices of Australian enterprises in late 2001. This was conducted by the ABS with major input and funding from the RBA. It was motivated by the fact that while Australia had very substantial foreign liabilities, the foreign currency exposures reported by the financial sector were very small (as would be expected given that such exposures carry capital requirements). Clearly these entities engaged in substantial hedging, but we knew little about the other sectors of the economy. Hence we approached the ABS to carry out a survey to fill in the missing pieces. What we found was that even though net liabilities to foreigners were (and still are) substantial, the Australian community as a whole had, at end 2001, a modest net foreign currency asset position. The difference is of course due to the fact that foreign demand for Australian dollar-denominated assets was substantial, which has remained true in the period since. Hence while absorbing substantial resources from abroad, Australian entities were not, by and large, accumulating large foreign currency risks. This was a very important fact to know, and I think it has had a significant impact on the views various observers, including ratings agencies, have formed about the country’s external accounts. Work is currently under way in the ABS for an update of this survey, with substantial funding support from the RBA.
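The distinction at work here – large net foreign liabilities alongside a modest net foreign currency asset position – can be made concrete with some balance-sheet arithmetic. All figures below are invented purely for illustration; they are not the survey's results:

```python
# Illustrative only: hypothetical figures (A$ billion) showing how large net
# foreign liabilities can coexist with a net foreign *currency* asset position
# when most external debt is denominated in, or hedged into, local currency.

# Gross positions vis-a-vis non-residents
foreign_liabilities = 600.0   # what residents owe non-residents
foreign_assets = 250.0        # what residents hold abroad

net_foreign_liabilities = foreign_liabilities - foreign_assets

# Currency denomination: suppose most liabilities are in AUD (or hedged into
# AUD), while most foreign assets are denominated in foreign currency.
fc_liabilities_unhedged = 80.0   # foreign-currency debt left unhedged
fc_assets = 200.0                # foreign-currency assets, incl. equity abroad

net_fc_position = fc_assets - fc_liabilities_unhedged  # positive: net FC asset

print(f"Net foreign liabilities: {net_foreign_liabilities:.0f}")
print(f"Net foreign currency position: {net_fc_position:+.0f}")
```

The point of the arithmetic is simply that the currency composition of the two sides of the national balance sheet, not the overall net position, determines exchange-rate exposure.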
These are just two examples of the use of one-off surveys. In due course, regular statistical collections may well adapt to provide more information on some of these questions, but that takes time. Hence, I think there could well be more of this sort of approach by central banks in future: use of customised survey information to address specific questions which arise because of fast-moving structural change in the economy.
An implication of this for central bank statisticians could be, I suppose, that a somewhat different set of skills might be required. Time series expertise – I can recall in the past reading, or trying to read, lengthy papers on the X-11 seasonal adjustment technique as applied to monetary data – might be relatively less in demand, and knowledge of how to design, implement and interpret surveys yielding a cross-section or panel data set more in demand. Central banks might of course need to contract out for that expertise – and may well use official agencies for that purpose, though there is ample competition from private firms.
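For readers unfamiliar with it, the core of X-11-style seasonal adjustment is the ratio-to-moving-average idea. The sketch below is a drastically simplified illustration of that idea only; the real X-11 adds outlier treatment, asymmetric end filters and much more:

```python
# Simplified ratio-to-moving-average seasonal adjustment for a monthly series,
# illustrating the idea at the core of X-11 (not the X-11 procedure itself).

def centered_ma12(x):
    """Centered 2x12 moving average as the trend estimate; None where undefined."""
    out = [None] * len(x)
    for t in range(6, len(x) - 6):
        # average of the two adjacent 12-month means centered on t
        out[t] = (sum(x[t - 6:t + 6]) + sum(x[t - 5:t + 7])) / 24.0
    return out

def seasonal_factors(x, trend):
    """Average the series-to-trend ratios by calendar month, normalised to mean 1."""
    ratios = [[] for _ in range(12)]
    for t, tr in enumerate(trend):
        if tr is not None:
            ratios[t % 12].append(x[t] / tr)
    fac = [sum(r) / len(r) for r in ratios]
    mean = sum(fac) / 12.0
    return [f / mean for f in fac]

def seasonally_adjust(x):
    """Divide each observation by its month's estimated seasonal factor."""
    fac = seasonal_factors(x, centered_ma12(x))
    return [x[t] / fac[t % 12] for t in range(len(x))]
```

Applied to a series built as a trend times a fixed monthly pattern, the adjusted output recovers the trend to a close approximation.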
Not unrelated to the growing size and complexity of the financial sector of the economy is the rise in emphasis on financial system stability as a ‘charter item’ for central banks. Financial stability as an objective has, of course, been around for as long as central banking. The lender of last resort function – to liquefy the system in times of crisis – was in fact a major part of the raison d’être of the modern central bank. But we have seen in the past decade or so a clearer focus on identifying potential threats to system stability and working to reduce them. This has been reflected in the structure of some central banks, as for example in the ‘stability wing’ of the Bank of England, and the creation of a System Stability function in the RBA. It has also been reflected in the advent of regular publications about stability issues by central banks, in our case the Financial Stability Review now published twice each year.
In this audience it is worth asking: what is the data set needed for this task?
Thus far, in our own experience and, as best I can tell, that of some other central banks, the data used in the work on system stability overlap to some extent with those used by the macroeconomists in their monetary policy work. In our case, aggregates for credit, household sector debt-servicing burdens, risk spreads and so on are commonly used for both types of work. That is because the central question of late has been whether the extent of additional household leverage amounts to a risk to financial stability. It turns out that this depends largely on whether it constitutes a risk to macroeconomic stability first. That is, our assessment is that high household debt is unlikely, of itself, to lead directly to distress for lenders, or to a growth slump.
The risk, rather, is that some other contractionary shock might be amplified by high levels of debt, with potential flow-on effects for the economy and, indirectly, for financial firms’ profitability.
Thus far, then, the data sets used by the macro policy people and those by the financial stability people have been similar. As our work on system stability issues continues to develop,
Part of the art of policy-making is developing a sense of how to distinguish noise and signal from this mass of ‘information’. Before placing too much weight on an indicator, some knowledge of how it is put together is obviously important. To this end, it is often worthwhile for people in the policy analysis process to develop a good dialogue with the compilers of these data.
On occasion, well-trained people in the bureaucracy have been able to suggest methodological improvements to privately-compiled surveys.
No survey of economic conditions should have much weight attached to it until we have seen its performance over a period of time long enough for some business cycle fluctuations to be observed. I grant that, in Australia, a very long expansion means that this test is getting a bit demanding. But even within an expansion there are fluctuations in the pace of growth and a good business survey should pick these up. Most surveys will be found, in my experience, to have given some false signals as well as some genuine ones. This issue of type I versus type II errors can be critical in judging the state of affairs at key points in the business cycle, using survey data.
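The type I versus type II trade-off mentioned here can be made concrete with a toy tally. In this context a type I error is a false alarm (the survey signals a slowdown that never comes) and a type II error is a missed slowdown; the signal history below is invented purely for illustration:

```python
# Illustrative only (made-up data): scoring a business survey's "slowdown"
# signals against realised outcomes, quarter by quarter.

# One entry per quarter: (survey signalled slowdown?, slowdown actually occurred?)
history = [
    (True, True), (False, False), (True, False), (False, False),
    (True, True), (False, True), (True, False), (False, False),
]

genuine_signals = sum(1 for s, a in history if s and a)
false_alarms    = sum(1 for s, a in history if s and not a)   # type I errors
missed          = sum(1 for s, a in history if not s and a)   # type II errors

print(f"genuine: {genuine_signals}, false alarms: {false_alarms}, missed: {missed}")
```

A survey that rarely misses downturns but cries wolf often, and one that is quiet but occasionally blindsided, call for different weights at different points in the cycle.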
It is in the area of financial prices where the proliferation of private data is perhaps most marked. The vast bulk of data on pricing of financial instruments is privately compiled, a result of the size of private financial markets and their continuous nature. Where financial instruments are traded on exchanges, their prices are easily observed, and there are relatively few challenges associated with compiling pricing data. However, with the increasing shift towards over-the-counter (OTC) and non-standard products, this task is more difficult and it becomes necessary to rely more on financial institutions’ proprietary data. There is no real alternative to this, but of course we need to take care to be satisfied as to the accuracy and impartiality of the data and it is incumbent on private providers of data to be prepared to provide some assurance here. As central banks increasingly use such data sets to infer market attitudes to risk and expectations about the future (a process which incidentally requires increasingly sophisticated analytical skills), all these issues seem likely to grow in importance over the years ahead. Many challenges will surely come our way.
Central banks are heavy consumers of information, and hence of statistics, and always will be.