Before the advent of Web and Mobile applications, people used the following to make payments for retail expenses.
Credit and Debit Cards
After the 1st generation of online commerce, payments, and banking websites, mobile solutions are leapfrogging web apps, particularly in developing countries, to help people at the bottom of the wealth pyramid who may not have computers but do have smartphones.
1st Gen: e-commerce, e-payments, e-banking
Online commerce sites such as Amazon, eBay
Online payments such as PayPal
Online Banking at various banks' websites, such as Wells Fargo
Magnetic Card readers and EMV Chip card readers
Networks for small-value retail payments operate around the globe. UnionPay in China and RuPay in India now compete directly with well-established providers such as MasterCard and Visa. A main motive is the financial inclusion of unbanked people.
Debit Cards /Credit Cards (Small Value Retail Payment Systems) Networks
USA and Other Countries (Excluding China and India)
There are several solutions worth mentioning which do not yet fit in any broad categories.
Chat/SMS based payment solutions are very popular in China and are now being integrated into applications such as Facebook.
Chat based Payments
Tencent Wechat (China)
Vodafone M-PESA (SMS based)
There are newer proximity-based payment solutions using two technologies – BLE and NFC. Several solutions now exist based on each of these technologies.
Proximity Payments (No Contact)
A. Payment solutions powered by iBeacon technology (Bluetooth Low Energy)
B. NFC powered Payment Solutions
Visa payWave
Quick Tap (UK)
Sure Tap (Canada)
Touch2Pay (New Zealand)
T-Money (South Korea)
Cep-T Cüzdan (Turkey)
m-POS apps that use card-reading hardware to read customers' credit/debit cards at businesses are a popular service. Square leads the pack.
Mobile Card Readers (m-POS)
There are now several companies that offer money transfer services to accounts across the globe.
Overseas Money Transfer
Azimo Money Transfer
Mobile Payment applications have mushroomed in India. India is on the leading edge in providing a real-time mobile payment system available 24/7. Some of the service providers have been licensed to start Payment Banks dedicated to payment operations, as opposed to Deposit Banks.
Vodafone M-Pesa
Many Banks have introduced applications that give users access to their accounts on a smartphone. Some of them are listed below.
ICICI Bank imobile
State Bank of India Anywhere
Bank of Baroda mPassbook
Indian Bank Indpay
Kotak Mahindra Bank
In 2016, ICICI Bank introduced the first NFC based mobile payment app in India.
NFC based m-payment solutions
ICICI Bank Pockets
Several retailers have introduced Mobile apps for payments. Some of them are listed below.
Specialized Retail Payment Apps
Walmart Pay (USA)
Dunkin' Donuts (USA)
Taco Bell (USA)
CVS Pay (USA)
Kohl's Pay (USA)
Amazon Payments (USA)
Here is a list of Online Payment Solutions.
On-line Payment Solutions
China has a large population of smartphone users who regularly use m-payment apps such as Alipay. Many others are trying to gain a foothold in this market through partnerships with China UnionPay.
China UnionPay (CUP) Cloud Quick Pass
Ping An Pay
NFC based m-payment
Apple Pay (CUP)
Samsung Pay (CUP)
Key Sources of Research:
Top 10 Trends in Payments in 2016
Cashless Payment System in India- A Roadmap
Ashish Das and Rakhi Agarwal
Fast Retail Payment Systems
Stephanie Bolt, David Emery and Paul Harrigan
Report of the Key Advisory Group on the Payment Systems in India (KAG on PSI)
31st May, 2012
NEFT, RTGS, UPI: What should you use to transfer money?
This is an important topic. Uneven development, which orthodox economic and development theories struggle to explain, has led researchers to look for alternative explanations.
How to properly integrate Global – Regional – National – Local perspectives?
How valuable is the relational (network) perspective?
What is the role of power relations among Actors?
How does the Institutional, Cultural, and Social embeddedness of Actors impact development and the economy?
How do the actions and interactions of Actors affect the local economic environment?
From Toward a relational economic geography
During the 1990s, a controversial debate has emerged in economic geography and other social sciences, such as economics and sociology, focusing on the question of what research program, key focus and methodology a novel economic geography should embody (Perrons, 2001). This was, partially, a reaction to the work of Krugman (1991), Fujita et al. (2001), and others who claimed to have developed a new economic geography. This self-proclaimed new economic geography offers an interesting economic perspective on the conventional problems of spatial distribution and equilibrium, based on an analysis of increasing returns, transportation costs, and other traded interdependencies (Martin and Sunley, 1996; Bathelt, 2001). Yet it fails to develop a comprehensive research program as a basis for economic geography because ‘. . . the new economic geography ignores almost as much of the reality they study as old trade theory did’ (Krugman, 2000, p. 50).1 In following Martin and Sunley’s (1996) suggestion, this approach is better classified as geographical economics. While this literature brings economic geography closer to the core ideas of neoclassical economics, Amin and Thrift (2000) have recently suggested another fundamentally different direction for economic geography, capitalizing on concepts and theories from other social sciences. Amin and Thrift (2000, p. 4) provocatively claim that economic geography is no longer able to ‘fire the imagination’ of researchers. Therefore, they ask for a critical reflection and renewal of this field’s basic goals, concepts, and methods. The reactions to their contribution have stimulated a debate, parts of which have been published in a special issue of Antipode in 2001. This debate has unfortunately been dominated by discipline-political arguments, opinions, and claims. 
In essence, it focuses on the question of whether economic geography should be closely associated with economics or lean towards the social, political, and cultural sciences. In particular, Thrift (2000) has identified a growing interest in the cultural dimension of economic relations, as well as in economic issues of cultural studies. While Amin and Thrift (2000) propose a cultural turn away from neoclassical economics, their critics emphasize existing linkages with and the importance of economic theories as a foundation of economic geography (Martin and Sunley, 2001; Rodriguez-Pose, 2001). We agree with Martin and Sunley (2001) that this debate is partly based on false dualisms, such as economics vs. sociology and quantitative vs. qualitative methodology. In our view, this discussion is unclear because it mixes normative accounts of the discipline’s policy implications with epistemological and methodological arguments. The debate is also somewhat misdirected for it tries to separate those economic and social aspects that are inseparable. The decisive question cannot be whether economic geography should be economized or culturalized. Rather, the economic and the social are fundamentally intertwined. They are dimensions of the same empirical reality which should be studied in a dialogue of perspectives rather than in mutual exclusion and reductionist prioritization (Stark, 2000).
The second transition is characterized by a reformulation of the core concepts of economic geography. In the following sections, discontinuities between relational economic geography and regional science will be identified according to five dimensions of the research design. These dimensions include the conception of space, object of knowledge, conception of action, epistemological perspective, and research goal. From this, we develop a relational framework for analysis which systematically focuses on economic actors and their action and interaction. The basic propositions of this framework will be developed in the remainder of this section (Table 1).
4.1. Conception of space
A relational view of economic geography is based on a relationship between space and economy which is contrary to that of regional science.10 Specifically, regional science views space as a container which confines and determines economic action. It treats space as a separate entity which can be described and theorized independently from economic action. In contrast, a relational approach assumes that economic action transforms the localized material and institutional conditions of future economic action. Similar to Storper and Walker (1989), this approach emphasizes that the economic actors themselves produce their own regional environments. The way in which spatial categories and regional artifacts have an impact on economic action can only be understood if the particular economic and social context of that action is analysed (Bahrenberg, 1987). Spatial structures and processes have, however, been socially and economically underconceptualized in regional science. We contend that space can neither be used as an explanatory factor for economic action nor be treated as a separate research object in isolation from economic and social structures and relations. Consequently, as space is not an object of causal power to explain social or economic action it cannot be theorized (Sayer, 1985; Saunders, 1989; Hard, 1993).11 Of course, economic processes also have material outcomes (e.g. infrastructure) which are localized in certain places and territories and exist over longer time periods. Such structures clearly have an impact on economic action and interaction in these localities. Nonetheless, economic actors and their action and interaction should be at the core of a theoretical framework of economic geography and not space and spatial categories. Spatial scientists, such as Bunge (1973), treat spatiality as the object of knowledge in economic geography. They aim to detect those spatial laws which govern human action without looking at the actors themselves. 
Instead of treating space as a container, we suggest a conception of space as perspective (Glückler, 1999). In other words, we use space as a basis for asking particular questions about economic phenomena but space is not our primary object of knowledge. It is this conception that we refer to as the geographical lens. As part of this, economic exchange becomes the focus of analysis and not space. Similarly, we do not seek to identify spatial laws but, instead, look for explanations of localized economic processes and their consequences.12 It is particularly through the application of a distinct perspective to the study of an object of knowledge that discipline-specific research problems can be formulated. The spatial perspective or geographical lens leads economic geographers to pose research questions about an economic phenomenon, different from those typically asked by economists or sociologists. We also suggest that the perspective applied helps mobilize a particular terminology and, over time, a set of tacit knowledge which entails an understanding of what it is that is being analysed and how this subject matter can be described and evaluated adequately.
From Rethinking relational economic geography
Since the mid-1990s, the softening of sub-disciplinary boundaries within human geography and the more general call for a ‘relational thinking’ in human geography (Massey et al . 1999; see also Allen et al. 1997; Sack 1997; Lee and Wills 1997) have stimulated the consolidation of what might be termed a ‘relational economic geography’. 1 In this ‘relational turn’, economic geographers tend to place their analytical focus on the complex nexus of relations among actors and structures that effect dynamic changes in the spatial organization of economic activities (see Amin 1998; Dicken and Malmberg 2001; Ettlinger 2001; Bathelt and Glückler 2003; Boggs and Rantisi 2003). This relational economic geography is concerned primarily with the ways in which socio-spatial relations of actors are intertwined with broader structures and processes of economic change at various geographical scales. Despite the claims of novelty among most economic geographers who have taken on such a relational thinking in their geographical analysis, it remains unclear whether this ‘relational turn’ represents merely a modest reworking of earlier work in economic geography that might not be explicitly relational in its conceptualization and analysis. After all, heated debates on the spatial divisions of labour, locality studies and flexible specialization dominated the heyday of economic geography during much of the 1980s and the early 1990s (Scott 2000). With hindsight, these debates have legitimized the analytical concern of economic geography with the social relations of production and the relations between the spatial and the social (Harvey 1982; Thrift 1983; Massey 1984; Smith 1984; Gregory and Urry 1985; Lee 1989). 
By sidestepping the pitfalls of an earlier brand of quantitative economic geography concerned with spatial geometries and locational analysis, the substantive foci on regions, localities and production processes in these debates have no doubt foregrounded the recent ‘relational turn’ in economic geography. While many recent geographic writings have addressed aspects tangential to the core theoretical categories deployed in a relational economic geography (e.g. Barnett 1998; Thrift 2000; Barnes 2001; Storper 2001), there is surprisingly a lack of systematic evaluation and integration of our knowledge of this growing field. In view of limited space, this paper develops a sympathetic critique and rethinking of the ‘relational turn’ in order to clarify the distinctive contributions of a relational economic geography and to rework some of its conceptual tools. In the next section, I critically examine the nature and emergence of the ‘relational turn’ in economic geography, by revisiting relational thought that existed as an undercurrent before the 1990s and situating the recent ‘relational turn’ in this earlier work in economic geography. Whilst the recent ‘relational turn’ has some of its intellectual antecedents in the earlier debates of the 1980s (particularly the social relations of production framework), its substantive content has been broadened to include social actors and their network relations at different spatial scales. Focusing on recent economic-geographical writings on regional development, embedded networks and geographical scales, I note that much of this large body of recent work is relational only in the thematic sense that relations among actors and structures are an important theme in contemporary economic-geographical enquiry. In particular, the causal nature of relationality and power relations are under-theorized and underspecified.
If relational thinking in economic geography is to have a greater impact, we need to rework and deepen its theoretical constructs to go beyond simply a ‘thematic turn’ (Jessop 2001, 1214). The paper moves on to rework some of the most important theoretical insights in the ‘relational turn’ – relationality, power and actors. Dynamic and heterogeneous relations among actors and structures are conceptualized as causal mechanisms of socio-spatial change in economic landscapes. Here, I explore the notion of ‘relational geometries’ constituted through relationality and power . The concept of relational geometries refers to the spatial configurations of heterogeneous relations among actors and structures through which power and identities are played out and become efficacious. These relational geometries are neither actors (e.g. individuals and firms) nor structures (e.g. class, patriarchy and the state), but configurations of relations between and among them – connecting actors and structures through horizontal and vertical power relations. Relational geometries are also not networks per se because the latter refer mainly to horizontal and, mostly, static ties among actors only. Actors in these relational geometries are not static ‘things’ fixed in time and space. They are dynamic and evolving in such relational ways that their differential practices unleash multiple forms of emergent power in relational geometries. Building on the concept of different and emergent forms of causal power as positions in relational geometries and as practice through social action, this relational perspective allows us to avoid the two polarized frameworks in contemporary economic geography – actor networks and institutional structures. 
This effort to rework relational economic geography thus parallels the recently reinvigorated ‘relational sociology’ that ‘sees relations between terms or units as preeminently dynamic in nature, as unfolding, ongoing processes rather than as static ties among inert substances’ (Emirbayer 1997, 289). To substantiate the relevance of this reworking of conceptual categories, I show how relationality and multiple forms of power can offer vital insights into regional development that go beyond existing relational frameworks in economic geography.
From Geographies of circulation and exchange: Constructions of markets
In the preceding sections we have discussed three heterodox alternatives to the orthodox free market logic.
For socioeconomists, markets are embedded in social structures and are a far cry from the virtual market model celebrated by orthodox economists. It is social relations that underwrite real markets, guaranteeing their functioning in the face of uncertainties. Work undertaken in this spirit puts emphasis on social relations and institutions, and analyses how non-economic institutions either enable or constrain efficient market exchange.
Political economists insist that, neoliberal claims to the contrary notwithstanding, capitalism cannot exist without “market imperfections”. In these accounts, the market model is nothing else than a fictitious ideological device to hide from view the underlying dynamics of capitalism. Accordingly, political economic scholars regard it as their task to remove the veil and to lay open the contradictory reality of concrete markets under capitalism.
Cultural economists apply the cultural theoretical concept of performativity towards the market. Rather than reproducing the classical distinction between the abstract market model and real-life markets, protagonists point to the role that the practice of economists widely understood plays in the self-realization of economic thought. It is argued that the model of the perfect market realizes itself in the world in the assembly of far-reaching socio-technical arrangements. Here, markets take on ambivalent form as relational effects of socio-technical networks engaging in the twin processes of framing and overflowing. The latter process includes the proliferation of new social relations, groups and communities which may articulate economic and non-economic alternatives.
In the discipline of economic geography heterodox approaches have managed to break the hegemony of the neoclassical orthodoxy. Unfortunately, the arguments in heterodox debates on the market and on alternative economic geographies more generally are very often taken from entrenched positions, authors apparently finding it very difficult to understand the train of thought followed by the “opposing” camp. While this is true for all positions introduced in this progress report, cultural economy has arguably had a particularly difficult time. With our representation of the performativity approach we hope to have been able to clarify some of the misunderstandings. The strength of the heterodox project lies precisely in the co-existence of competing positions, each challenging the still omnipresent logic of the perfect market in different ways. This is what a vibrant heterodox project should aspire to: A healthy competition of plurivalent and opposing ideas, a competition, however, which at the same time does not prevent conversation across different approaches and is pluralistic enough to gain from the application of different perspectives (see Barnes 2006).
From Advancing evolutionary economic geography by engaged pluralism
Please see my related post on Relational Sociology.
Trade Finance is the lubricant of Global Trade. The concentration of banks providing Trade Finance is very high, and so are the risks if a bank fails or withdraws credit due to regulations.
How many Banks provide Trade Finance?
What happens when Banks withdraw credit due to Financial Crisis?
What other alternatives are there for Trade Finance? GTLP?
What is the role of increased regulations on Trade Finance? BASEL III
From Trade finance around the world
Decline in Trade Finance as a cause of Global Trade Collapse
Concentration of Banks providing Trade Finance
De-risking by EU Banks to EMEs due to BASEL III requirements
Backlash against Trade
From DE-RISKING BY BANKS IN EMERGING MARKETS – EFFECTS AND RESPONSES FOR TRADE / IFC EMCOMPASS
Emerging evidence suggests that de-risking is a reality. Increased capital requirements, coupled with rising Know-Your-Customer, Anti-Money-Laundering, and Combating-the-Financing-of-Terrorism compliance costs have resulted in the exit of several global banks from cross-border relationships with many emerging market clients and markets, particularly in the correspondent banking business. A subset of this business, trade finance, is also at risk, with potential consequences for segments of emerging market trade. The emerging market trade finance gap was significant before the crisis and has since likely expanded. Those involved in addressing the de-risking challenge must focus on compliance consistency and effective adaptation of technological innovations.
From ADB 2016 Trade Finance Gaps, Growth, and Jobs Survey
The estimated global trade finance gap is $1.6 trillion.
$692 billion of the gap is in developing Asia (including India and the People’s Republic of China).
56% of SME trade finance proposals are rejected, while large corporates face rejection rates of 34% and multinational corporations are rejected only 10% of the time.
Firms report that 25% more trade finance would enable them to hire 20% more people.
Women-owned firms face higher than average rejection rates.
70% of surveyed firms are unfamiliar with digital finance; uptake rates are highest in peer-to-peer lending.
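As a quick sanity check on these figures, the regional share of the gap can be computed directly (our own arithmetic, not a figure from the survey):

```python
# Cross-check of the ADB survey figures quoted above: what share of the
# global trade finance gap sits in developing Asia?
global_gap = 1.6e12   # estimated global trade finance gap, USD
asia_gap = 692e9      # portion of the gap in developing Asia, USD

asia_share = asia_gap / global_gap
print(f"Developing Asia's share of the global gap: {asia_share:.0%}")  # 43%
```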
From ADDRESSING THE GLOBAL SHORTAGE OF TRADE FINANCE
The International Chamber of Commerce (ICC) 2016 Global Survey on Trade Finance reveals that 61 percent of respondents cited a global shortage of trade finance—a figure that is particularly concerning as we continue to observe a period of prolonged sluggishness when it comes to global trade growth. But hope is not lost. Doina Buruiana, Project Manager at ICC Banking Commission, explains the various ways that the trade-finance gap can be filled.
For the fifth consecutive year, trade growth has been reported at below 3 percent and has not recovered to pre-crisis levels—with a global trade-finance shortage estimated to have reached US$1.6 trillion in 2016, according to the Asian Development Bank (ADB). Such figures certainly make for grim reading. And what’s more, the findings from the International Chamber of Commerce’s (ICC) 2016 Global Survey on Trade Finance—an annual report reflecting the issues and trends on the trade-finance landscape—are also providing cause for concern. Sixty-one percent of respondents—national, regional and global banks providing trade finance—reported a global shortage of trade finance.
There are various reasons for this. Ninety percent cited the cost or complexity of compliance requirements relating to anti-money laundering (AML), know your customer (KYC) and sanctions as a chief barrier to the provision of trade finance. Furthermore, 77 percent of respondents to the Global Survey cited Basel III regulatory requirements as a significant impediment to trade finance. Many global banks are withdrawing from several emerging-market regions dependent on trade and trade finance, partly due to pressures to favour domestic clients following some banks’ bailouts by taxpayers.
And the fallout can be severe. A shortage of trade finance impacts the growth of businesses worldwide. In particular, small to medium-sized enterprises (SMEs) are being affected by the shortage of bank liquidity. According to the Global Survey, 58 percent of rejected trade-finance proposals were SME applications, despite the sector submitting 44 percent of all trade-finance proposals.
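Taken together, those two percentages imply that SME proposals are rejected disproportionately often. The relative rejection risk can be derived without knowing the overall rejection rate (a sketch of our own arithmetic; only the two input percentages come from the Global Survey):

```python
# Disproportionality implied by the Global Survey figures.
sme_share_of_rejections = 0.58  # SMEs' share of rejected proposals
sme_share_of_proposals = 0.44   # SMEs' share of all proposals submitted

# Relative risk of rejection for an SME vs. a non-SME proposal. By Bayes'
# rule the overall rejection rate cancels out, so it need not be known.
relative_risk = (
    (sme_share_of_rejections / sme_share_of_proposals)
    / ((1 - sme_share_of_rejections) / (1 - sme_share_of_proposals))
)
print(f"An SME proposal is ~{relative_risk:.1f}x as likely to be rejected")  # ~1.8x
```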
Yet hope is not lost. There are various ways in which the industry can adapt to not only bridge the gap in unmet demand for finance and help revive global growth, but also to evolve the industry, to drive healthy competition and to remove the focus from being global-bank dependent.
Backlash against trade
Improving understanding and attitudes toward trade, and awareness around trade finance, would be a good place to start. Across the world, many have attacked trade and globalisation for threatening jobs and benefitting only big businesses—sentiments that have been evident across the European Union (EU) during Transatlantic Trade and Investment Partnership (TTIP) negotiations, and also during the recent US presidential election campaigns.
Indeed, we’ve seen a clear rise in protectionist and populist policies—a recent World Trade Organization (WTO) report cited that between mid-October 2015 and mid-May 2016, G20 economies had introduced new protectionist trade measures at the fastest pace since 2008. To address this, we need to first make the case for trade itself in order to highlight the importance of trade finance. It is therefore crucial that businesses and trade-finance industry stakeholders reinvigorate the narrative around global trade, relaying its significance to the public and ensuring that trade is on the agenda of policymakers worldwide.
Understanding trade finance
Next, enhancing awareness around trade finance should also remain a top priority. While there has already been significant progress in the dialogue between trade-finance practitioners and regulators, and a noticeable shift towards a more suitable risk-aligned treatment of trade finance, it is crucial that we continue to emphasise the low risk nature of trade-finance instruments.
Indeed, ICC’s 2015 Trade Register report highlights the low risk nature of trade-finance products—with favourable credit and default-risk experience. For instance, the Trade Register shows that there is a low default rate across all short-term trade-finance products, with the average expected loss for short-term trade finance lower than typical corporate exposures. In particular, traditional documentary trade-finance products such as letters of credit (LC) are low risk. Remarkably, the transaction default rate for export LCs between 2008 and 2014 was 0.01 percent. Medium- to long-term products also fare well, with a low loss nature due to the export credit agency’s (ECA) guarantee—normally with investment-grade ratings and backed by high-income Organisation for Economic Co-operation and Development (OECD) governments.
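To see what that default rate means in money terms, a back-of-the-envelope expected-loss calculation helps. The 0.01 percent default rate is the Trade Register's; the loss-given-default and exposure values below are hypothetical assumptions chosen purely for illustration:

```python
# Illustrative expected-loss arithmetic for an export letter of credit.
default_rate = 0.0001       # 0.01% transaction default rate (Trade Register, 2008-2014)
loss_given_default = 0.40   # hypothetical: 40% of exposure lost when a default occurs
exposure = 1_000_000        # hypothetical: a $1m letter of credit

# Standard expected-loss formula: EL = PD x LGD x EAD
expected_loss = default_rate * loss_given_default * exposure
print(f"Expected loss on the $1m LC: ${expected_loss:,.0f}")
```

On these assumptions the expected loss is on the order of tens of dollars per million of exposure, which is what makes the "low risk" characterisation above concrete.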
The need for increased awareness around trade finance extends well beyond traditional trade finance and also includes newer techniques and instruments under the supply-chain finance umbrella. We also need to raise industry understanding around compliance measures—differentiating between client KYC and non-client KYC, for instance, in order to ease processes. In addition, enhanced awareness and understanding in relatively unsettled areas in trade finance, such as trade-based money laundering, would help direct compliance measures. Despite common belief, for instance, only a small proportion of trade-based money laundering actually occurs in trade-finance transactions.
Yet while progress has certainly been made with regulation and compliance proposals, the Global Survey suggests that the costs associated with such measures are still, and will perhaps continue to be, prohibitive. As such, if we want to close the trade-finance gap, we need to move slightly away from a global bank-dominated financial landscape and embrace collaboration.
Financial-technology firms (fintechs) are increasingly shaping the future of trade finance, and make an obvious banking partner, with both parties bringing strengths and expertise to such arrangements. Indeed, many fintechs are looking to partner with—rather than compete with—banks due to balance-sheet requirements, the regulatory framework to navigate, and the industry expertise required to bring new concepts to fruition. Certainly, partnerships between the two players could drive additional efficiencies and the capacity of banks to conduct business—perhaps eventually reducing the trade-finance shortage.
Fintechs aren’t the only players that could potentially collaborate with banks—or even fill the trade-finance gap independently. The Global Survey found that export credit agencies (ECAs) are increasingly supporting export finance, with alternative liquidity flowing into the ECA space. Thirty-seven percent of respondents reported that they had successfully concluded business with institutional investors in ECA finance, up from 30 percent in the previous survey in 2015, reflective of the growing role of alternative investors.
The Global Survey also highlighted the important role of multilateral development banks (MDBs), with 75 percent of respondents agreeing that MDBs (and ECAs) help reduce trade-finance gaps. In particular, MDBs provide financial assistance to emerging markets for investment projects and policy-based loans. This can prove crucial for enabling access to trade finance in general, and for SMEs.
The ADB’s Trade Finance Program (TFP), for instance, fills market gaps for trade finance by providing guarantees and loans through more than 200 banks. The TFP has supported more than 12,000 transactions across Asia, valued at over US$23.1 billion—of which more than 7,700 involved SMEs. What’s more, the TFP focuses on markets in which the private sector has less capacity to provide trade finance, and where there are large trade-finance gaps.
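Those headline figures can be turned into rough per-transaction averages (our own arithmetic; the reported counts are "more than" figures, so treat the results as indicative only):

```python
# Indicative averages from the TFP figures quoted above.
total_value_usd = 23.1e9    # value of supported trade, USD
transactions = 12_000       # "more than 12,000" transactions
sme_transactions = 7_700    # "more than 7,700" involved SMEs

avg_transaction = total_value_usd / transactions
sme_share = sme_transactions / transactions
print(f"Average transaction: ~${avg_transaction / 1e6:.1f}m")   # ~$1.9m
print(f"SME share of transactions: ~{sme_share:.0%}")           # ~64%
```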
However, the Global Survey also indicated that MDB and ECA support varies by region—with respondents deeming it most effective in advanced Asia, Russia and sub-Saharan Africa, and less effective in Commonwealth of Independent States (CIS) countries, India and Central America and the Caribbean. Clearly, an increase in the envelope and effectiveness of MDB trade-finance provision in these regions will help further reduce the gap. In order to counter geographical disparities, the next step for MDBs is to consider any structural limitations in existing trade-finance programmes—or contextual difficulties in particular markets.
Finally, non-bank capital provides another useful source of trade finance, particularly from private-sector sources of finance—such as specialist financiers or alternative-finance providers. Since the financial crisis, these players have played an increasingly crucial role in meeting unmet demand, and have experienced considerable growth. What’s more, specialist financing is growing increasingly popular among companies in emerging markets, in which trade-finance demand is most acute.
Revamping trade finance
Of course, one way to possibly boost the provision of trade finance is to make it more efficient and attractive. Certainly, the digitisation of trade finance holds huge potential. Automating trade finance can make overall processes more effective and reliable, increasing capacity for banks, corporates and other stakeholders along the supply chain. For instance, eDocs (paperless documents) streamline processes, with the ability for multiple parties to access, review and collaborate at any one time. The resulting operational improvements in turn reduce errors, maintain data integrity and accelerate the completion of agreements.
Despite the clear benefits, the Global Survey shows that there has been a slow uptake of digitisation. In fact, one-fifth of respondents reported that there is no evident digitisation at all, two-thirds saw very little impact of technology on trade finance, and just over 7 percent saw digitisation as being widespread. The slow uptake is likely due to the challenges of digitising trade—including the considerable scale and complexity of the task at hand, for instance. Banks should play a key role in advocating the benefits of digitisation and help their corporate clients adapt to new systems.
We cannot let the trade-finance gap incapacitate trade. Clearly, there are steps that the trade-finance industry can take to help meet unmet demand. Looking ahead, improving attitudes and raising understanding, encouraging collaboration and making progress towards innovation in the industry will support the growth of businesses of all sizes—and the economy—worldwide.
From Global Trade Liquidity Program / IFC
The Global Trade Liquidity Program (GTLP) is a unique, coordinated global initiative that brings together governments, development finance institutions (DFIs), and private sector banks to support trade in developing markets and address the shortage of trade finance resulting from the global financial crisis.
With targeted commitments of $4 billion from public sector sources, the program has supported nearly $20 billion of trade since its inception. It raises funds from international finance and development institutions, governments, and banks, and it works through global and regional banks to extend trade finance to importers and exporters in developing countries. IFC’s commitment to the program is $1 billion.
GTLP began its operations in May 2009, channeling much-needed funds to back trade in developing countries. Phase 2 was launched in January 2010 with an unfunded solution, based on the existing GTLP platform, to support trade finance directed at the food and agribusiness sectors. The program was extended in January 2012 to continue to stabilize and foster trade and commodity finance to emerging markets.
Since its launch, GTLP has been acknowledged in the financial industry as an innovative structure to help infuse much needed liquidity into the trade finance market, thereby catalyzing global trade growth. The solution also represents a win-win proposition: for the banks it provides an opportunity to continue supporting clients through these difficult times; for IFC and its partners, it affords the ability to channel liquidity and credit into markets to help revitalize trade flows by leveraging on the banks’ vast networks across emerging markets in Asia, Africa, Middle East, Europe, and Latin America.
The program is already benefiting thousands of importers and exporters and small- and medium-sized enterprises.
From ADB Trade Finance Program
ADB’s Trade Finance Program (TFP) fills market gaps for trade finance by providing guarantees and loans to banks to support trade.
Backed by its AAA credit rating, ADB’s TFP works with over 200 partner banks to provide companies with the financial support they need to engage in import and export activities in Asia’s most challenging markets. With dedicated trade finance specialists and a response time of 24 hours, the TFP has established itself as a key player in the international trade community, providing fast, reliable, and responsive trade finance support to fill market gaps.
A substantial portion of TFP’s portfolio supports small and medium-sized enterprises (SMEs), and many transactions occur either intra-regionally or between ADB’s developing member countries. The program supports a wide range of transactions, from commodities and capital goods to medical supplies and consumer goods.
The TFP continues to grow, supporting billions of dollars of trade throughout the region, which in turn helps create sustainable jobs and economic growth in Asia’s developing countries.
Understanding Global Value Chains – G20/OECD/WB Initiative
There is a lot of opacity in the understanding of GVCs. Efforts have been underway for the last few years to develop better analytical and statistical tools for understanding international trade and global value chains.
Globalization in trade and finance, encouraged by international organizations such as the IMF, WB, OECD, WTO, UNCTAD, UNIDO, and others, has changed the landscape of trade.
There is still a long way to go to make better sense of the issues and concerns for policy makers.
The OECD, WB, and WTO, along with G20 trade ministers, have been leading these efforts since 2012.
From Global Value Chains
Introduction to GVCs
International production, trade and investments are increasingly organised within so-called global value chains (GVCs) where the different stages of the production process are located across different countries. Globalisation motivates companies to restructure their operations internationally through outsourcing and offshoring of activities.
Firms try to optimise their production processes by locating the various stages across different sites. The past decades have witnessed a strong trend towards the international dispersion of value chain activities such as design, production, marketing, distribution, etc.
This emergence of GVCs challenges conventional wisdom on how we look at economic globalisation and in particular, the policies that we develop around it.
Trade in Value Added
The goods and services we buy are composed of inputs from various countries around the world. However, the flows of goods and services within these global production chains are not always reflected in conventional measures of international trade. The joint OECD – WTO Trade in Value-Added (TiVA) initiative addresses this issue by considering the value added by each country in the production of goods and services that are consumed worldwide. TiVA indicators are designed to better inform policy makers by providing new insights into the commercial relations between nations.
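The TiVA logic can be sketched with a toy inter-country input-output table. This is an illustrative example with made-up coefficients, not the actual OECD-WTO methodology or data:

```python
import numpy as np

# Toy 2-country inter-country input-output (ICIO) example with purely
# illustrative numbers. A[i, j] is the intermediate input from country i
# needed per unit of gross output in country j.
A = np.array([[0.10, 0.25],
              [0.05, 0.15]])

# Final demand met by each country's output (e.g. final-goods exports).
f = np.array([100.0, 80.0])

# Gross output needed to satisfy final demand: x = (I - A)^-1 f
x = np.linalg.solve(np.eye(2) - A, f)

# Value-added coefficients: the share of each unit of gross output that
# is value added (1 minus the total intermediate-input share).
v = 1.0 - A.sum(axis=0)

# Value added generated in each country by this final demand.
va = v * x

print("gross output:", x)
print("value added :", va)
# Summing value added across countries recovers final demand exactly.
print("sum of VA =", va.sum(), " sum of final demand =", f.sum())
```

The point of the decomposition is that value added across countries sums exactly to final demand, whereas summing gross trade flows would double-count intermediates.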
GVCs and Trade Policy
Global value chains (GVCs) have become a dominant feature of world trade, encompassing developing, emerging, and developed economies. The whole process of producing goods, from raw materials to finished products, is increasingly carried out wherever the necessary skills and materials are available at competitive cost and quality. Similarly, trade in services is essential for the efficient functioning of GVCs, not only because services link activities across countries but also because they help companies to increase the value of their products. This fragmentation highlights the importance of an ambitious complementary policy agenda to leverage engagement in GVCs into more inclusive growth and employment. The OECD is currently undertaking comprehensive statistical and analytical work that aims to shed light on the scale, nature and consequences of international production sharing.
From Global Value Chains/Global Production Networks: Organizing the Global Economy
The key organizational feature of the global economy?
“Global Value Chains are defined by fragmented supply chains, with internationally dispersed tasks and activities coordinated by a lead firm (a TNC)” (UNCTAD, 2013, p.125; original italics).
Data gathering exercises: UNCTAD, OECD, WTO, JETRO…
Now firmly on the agenda among leading international economic organizations
The international division of labour: imperial/colonial systems and exchanges of raw materials and finished goods
The new international division of labour (NIDL): establishment of overseas production bases of core country TNCs
The global division of labour: much more complex global networks lying behind the production of different goods and services
“About 60% of global trade, which today amounts to more than $20 trillion, consists of trade in intermediate goods and services that are incorporated at various stages in the production process of goods and services for final consumption” (UNCTAD, 2013, p. 122)
Not new, but since 2000 trade and FDI have increased exponentially, and ahead of GDP growth, highlighting a growth in TNC-coordinated global value chains.
Double counting – approx. 25-30% of value of world trade, e.g. the iPhone example. Not just trade from China to US, but incorporates high value components from Japan, South Korea etc.
Beyond national economies and basic trade data, and beyond TNCs and FDI, to more complex organizational structures involving intra-firm trade, arm’s length trade and non-equity modes e.g. subcontracting
From GLOBAL VALUE CHAIN ANALYSIS: A PRIMER
From Global Capitalism and Commodity Chains: Looking Back, Going Forward
From Global Value Chains/Global Production Networks: Organizing the Global Economy
Global Commodities Chains (GCCs)
Global Production Networks (GPNs)
Global Value Chains (GVCs)
Trans National Corporation (TNC)
Multi National Corporation (MNC)
Multi National Enterprises (MNE)
UNIDO (United Nations Industrial Development Organization)
OECD (Organization for Economic Cooperation and Development)
WTO (World Trade Organization)
WB (World Bank)
UNESCAP (United Nations Economic and Social Commission for Asia and the Pacific)
UNCTAD (United Nations Conference on Trade and Development)
ILO (International Labor Organization)
G20 (Group of 20 Nations)
TIVA (Trade in Value Added)
Neil M Coe
Henry Wai-chung Yeung
Key Sources of Research:
Measuring Trade in Value Added: An OECD-WTO joint initiative
The Collapse of Global Trade during the Global Financial Crisis of 2008-2009
There are three broad categories of global trade.
Trade in Commodities
Trade in Manufactured Goods
Trade in Services
During the financial crisis, trade in commodities declined in value, driven largely by the sharp fall in commodity prices.
Trade in services was largely unaffected.
Trade in manufactured goods declined sharply, for a variety of reasons that are not yet entirely clear.
Potential Causes for decline
Fall in Aggregate Demand of goods
Constrained Trade Finance
Increase in Trade Barriers
Impact of Global Value Chains
From Global Value Chains in a Postcrisis World: A Development Perspective
The global economic crisis of 2008–09 has revealed the interdependence of the world economy. The financial crisis originated in the United States, but the resulting economic downturn quickly spread to the rest of the world. Trade, along with finance, was one of the main vectors of transmission of the crisis. In 2009, there was a massive contraction in global trade—minus 13 percent. The contraction was largely a reflection of a drop in demand, especially for durable goods. The fact that the shock was transmitted very rapidly reflects the increasing reliance by businesses on so-called global value chains (GVCs)—the process of ever-finer specialization and geographic fragmentation of production, with the more labor-intensive parts of the production process transferred to developing countries. In a world where GVCs are the prevalent business model for multinational corporations, a reduction in demand for final products by global buyers implies that demand shocks are immediately transmitted “upstream” to subcontractors in developing countries.
From Resilient to the crisis? Global supply chains and trade flows
According to the most recent IMF estimates (IMF 2009), the ongoing recovery will drive a wedge between output and trade. Output is supposed to shrink by ‘only’ 1.1% at the end of 2009 (-3.4% in advanced economies), but world trade is forecast to still experience a drop of -11.9%. While other estimates put the latter figure at –9% (WTO, World Bank), it is indisputable that during 2009 official figures recording trade flows will fall much more than GDP.
Apart from its magnitude, the fall in trade in 2009 has also been quite homogeneous across all countries (more than 90% of OECD countries have exhibited simultaneously a decline in exports and imports exceeding 10%, as noted by Araujo and Olivera Martins 2009). This fall has also been very fast, with trade virtually grinding to a halt in the last month of 2008. These facts led Baldwin and Evenett (2009) to qualify the drop in trade during the crisis as “severe, sudden and synchronised”.
A number of transmission mechanisms have recently been proposed to account for these three attributes of the contraction of trade flows, many of which impinge upon the role that global supply chains might have played in exacerbating the drop in global demand.
The basic argument is that in a world characterised increasingly by vertical specialisation, goods are produced sequentially in stages across different countries – so-called international supply chains. The constituent parts and components of a final good cross borders several times before the final product reaches the consumer; at each border crossing, the full value of the partially assembled good is recorded as trade. As a result, for a given reduction in world income, trade should decline “not only by the value of the finished product, but also by the value of all the intermediate trade flows that went into creating it”.
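A toy numerical chain makes this accounting concrete. The stage values below are hypothetical:

```python
# Hypothetical three-stage supply chain: each stage adds value and ships
# the partially assembled good across a border, where its FULL value is
# recorded as trade.
value_added_per_stage = [40.0, 30.0, 30.0]  # stages ship in this order

cumulative = 0.0
gross_trade = 0.0
for va in value_added_per_stage:
    cumulative += va           # value of the good after this stage
    gross_trade += cumulative  # full value recorded at the border

final_good_value = cumulative
print("final good value:", final_good_value)  # 100.0
print("gross trade recorded:", gross_trade)   # 40 + 70 + 100 = 210.0

# A 10% drop in final demand scales every shipment by 10%, so gross
# trade and final demand fall by the same percentage; vertical
# specialisation inflates the LEVEL of recorded trade, not by itself
# the elasticity of trade to demand.
drop = 0.10
print("trade after demand drop:", gross_trade * (1 - drop))  # 189.0
```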
This implies that the extensive presence of supply chains does not automatically explain why world trade overshot the world GDP drop; other explanatory factors are needed. These may include:
The collapse in internal demand and production, affecting current and future level of (tradable) inventories worldwide;
Fiscal stimulus plans with a relatively stronger support of non-tradable sectors, like construction and infrastructures (Bénassy-Quéré et al. 2009);
The rise of ‘murky’ protectionism; and
The problems of trade finance with financial spreads still well-above ‘normal’ (i.e. pre-crisis) market rates (Auboin, 2009).
Do the above arguments mean that global supply chains are totally neutral as a transmission mechanism of the crisis from GDP to trade? Of course not. In all likelihood, however, the channels are much more complex than originally thought, and entail important compositional effects.
For the sake of argument, let us take the following story based on the idea that a relatively large part of the overreaction of trade has been caused by the sudden drying up of liquidity in trade finance. Auboin (2009) notes that, in the second part of 2008, spreads on short-term trade credit facilities suddenly soared to between 300 to 600 basis points above LIBOR, compared to 10 to 20 basis points in normal times, leading to a virtual freeze of important trade deals throughout the globe, with supply chain operations being disrupted by lack of financing, especially for developing country suppliers.
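To put those spreads in perspective, here is a back-of-envelope sketch of the financing cost of a single 90-day facility. The deal size and the benchmark LIBOR level are hypothetical assumptions; only the spread ranges come from Auboin (2009):

```python
# Rough interest cost of a 90-day trade credit facility at LIBOR + spread.
deal_value = 1_000_000.0   # USD, hypothetical deal size
libor = 0.03               # assumed 3% annualized benchmark rate
tenor_years = 90 / 360     # money-market day-count convention

def financing_cost(spread_bps: float) -> float:
    """Interest cost of the facility over its tenor."""
    rate = libor + spread_bps / 10_000
    return deal_value * rate * tenor_years

normal = financing_cost(15)    # ~10-20 bps in normal times
crisis = financing_cost(450)   # ~300-600 bps in late 2008

print(f"normal times: ${normal:,.0f}")  # $7,875
print(f"crisis:       ${crisis:,.0f}")  # $18,750
```

Even at the midpoint of the crisis range, the financing cost of the same deal roughly doubles, which helps explain why marginal trade deals were frozen rather than repriced.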
Under this assumption we would have a scenario in which the liquidity channel has led trade to overshoot the fall in demand, with the effect being larger within supply chains, as the trade financing of these operations is typically managed by large international financial institutions, particularly hit by the crisis.
In this scenario, we would still obtain a severe, sudden and synchronised drop in trade flows, with the effects correlated with (but not caused by) the behaviour of global supply chains.
Moreover, under the same scenario, we would also observe that, during the crisis, trade falls more along the intensive margin (i.e. value per trade) than the extensive margin (i.e. number of traders). The reason is that, if the overreaction of trade was caused relatively more by liquidity constraints than by a disruption of supply chains, the above effects would lead to a reduction in the volume of trade, but not necessarily to a similar reduction in the number of traders worldwide.
This is exactly what Bricongne et al. (2009) find in a paper analysing the behaviour of French exporters during the crisis. Relying on monthly data for individual French exporters observed until April 2009, the authors find that the drop in French exports is mainly due to the intensive margin of large exporters, with small and large firms evenly affected once sectoral and geographical specialisation are controlled for. Interestingly, they also find that firms (small and large) in sectors more dependent on external finance are the most affected by the crisis.
While any conclusion must wait for more data to become available, there are good reasons to believe that the rise of global supply chains has not necessarily been the main cause of the recent “severe, sudden and synchronised” fall in global trade flows. Based on the available evidence, one may even be tempted to conclude that, under certain circumstances, international networks of production may also display some degree of ‘resilience’ to adverse shocks like the current crisis: supply-chain-related trade flows may react later (rather than sooner) to an adverse shock. Their fall may be smaller and, eventually, their recovery may happen faster relative to overall trade flows.
The observed resilience of supply chains may arise from some intrinsic attribute of production chains, as argued above. Alternatively, it may be the outcome of the political economy. Fearing that a collapse of supply chains would set off a sudden process of de-globalisation and implosion of international trade, governments may intervene in favour of supply chains. For example, the massive bail-outs of large financial institutions have helped their best customers, among them the big players within supply chains. Finally, of course, this indirect support of supply chains may have also been an unintended consequence of financial bailouts implemented for very different reasons.
From UNCTAD Global Value Chains: Investment and Trade for Development
BLS (Bureau of Labor Statistics)
UNCTAD (United Nations Conference on Trade and Development)
NIPAs (National Income and Product Accounts)
OECD (Organization for Economic Cooperation and Development)
EBRD (European Bank for Reconstruction and Development)
WTO (World Trade Organization)
GATT (General Agreement on Tariffs and Trade)
ILO (International Labor Organization)
ADB (Asian Development Bank)
UNIDO (United Nations Industrial Development Organization)
BEA (Bureau of Economic Analysis)
Global Supply Chains
Global Value Chains (GVCs)
Inter Industry Input Output Tables
Inter Country Input Output Tables
Multi National Companies (MNCs)
Regional Economic Integration
Trade in Goods and Services
Trade in Value Added (TIVA)
World Input Output Database (WIOD)
OECD-WTO TIVA Database
UNCTAD-EORA GVC Database
Global Trade Analysis Project (GTAP) Database
Institute of Developing Economies (IDE-JETRO) Asian IO Tables
World Input Output Network (WION)
Global Multi Regional Input Output (GMRIO) Framework
EXIOBASE/EXIOPOL: EXIOBASE is a global, detailed multi-regional environmentally extended supply-use / input-output (MR EE SUT/IOT) database.
Key Sources of Research:
The Global Trade Slowdown: Cyclical or Structural?
Cristina Constantinescu, Aaditya Mattoo, and Michele Ruta
The future of global trade: Where are we heading and should we be concerned?
Gaaitzen de Vries
Oscillations and Amplifications in Demand-Supply Network Chains
From Modeling and Measuring the Bullwhip Effect
Demand variability and uncertainty is a driver of supply chain inventory. Managing supply chains can be a challenge when demand variability and uncertainty is high. For a company in a supply chain consisting of multiple stages, each of which is run by a separate organization (or company), the variability of demand faced by this company can be much higher than the variability faced by downstream stages (where “downstream stages” refers to the stages closer to the final consumption of the product). The bullwhip effect refers to the phenomenon where demand variability amplifies as one moves upstream in a supply chain (Lee et al, 1997a, or LPW). LPW described this as a form of demand information distortion. Lee et al (1997b) further discussed the managerial and practical aspects of the bullwhip effect, giving more industry examples. The bullwhip effect phenomenon is closely related to studies in systems dynamics (Forrester, 1961; Sterman, 1989; Senge, 1990). Sterman (1989) observed a systematic pattern of demand variation amplification in the Beer Game, and attributed it to behavioral causes (i.e., misperceptions of feedback). Macroeconomists have also studied the phenomenon (Holt et al, 1968; Blinder, 1981; Blanchard, 1983).
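The amplification LPW describe can be reproduced with a minimal simulation. The order-up-to policy with an exponential-smoothing forecast below is a standard textbook setup; the lead time, smoothing constant, and demand process are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(42)

def simulate_stage(demand, lead_time=2, alpha=0.3):
    """One supply-chain stage: order-up-to (base-stock) policy with an
    exponentially smoothed demand forecast. Returns the orders the
    stage places on its own upstream supplier."""
    forecast = demand[0]
    base_stock_prev = forecast * (lead_time + 1)
    orders = []
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast
        base_stock = forecast * (lead_time + 1)
        # Order = demand seen + adjustment of the base-stock level.
        orders.append(max(0.0, d + (base_stock - base_stock_prev)))
        base_stock_prev = base_stock
    return orders

# Consumer demand: i.i.d. noise around a level of 100 units/period.
consumer = [100 + random.gauss(0, 5) for _ in range(500)]

# Propagate orders upstream: retailer -> wholesaler -> factory.
flows = [consumer]
for _ in range(3):
    flows.append(simulate_stage(flows[-1]))

for name, f in zip(["consumer", "retailer", "wholesaler", "factory"], flows):
    print(f"{name:10s} std dev of orders: {statistics.stdev(f):6.2f}")
# The standard deviation grows at each upstream stage: the bullwhip effect.
```

Each stage's forecast chases the noise in the orders it receives, so order variability grows as one moves away from final consumption, exactly the information distortion LPW formalized.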
From Operational and Behavioral Causes of Supply Chain Instability
Supply chain instability is often described as the bullwhip effect, the tendency for variability to increase at each level of a supply chain as one moves from customer sales to production (Lee et al. 1997, Chen et al. 2000). While amplification from stage to stage is important, supply chain instability is a richer and more subtle phenomenon. The economy, and the networks of supply chains embedded within it, is a complex dynamic system and generates multiple modes of behavior. These include business cycles (oscillation), amplification of orders and production from consumption to raw materials (the bullwhip), and phase lag (shifts in the timing of the cycles from consumption to materials). High product returns and spoilage are common in industries from consumer electronics to hybrid seed corn (Gonçalves 2003). Many firms experience pronounced hockey-stick patterns in which orders and output rise sharply just prior to the end of a month or quarter as the sales force and managers rush to hit revenue goals. Boom and bust dynamics in supply chains are often worsened by phantom orders—orders customers place in response to perceived shortages in an attempt to gain a greater share of a shrinking pie (T. Mitchell 1923, Sterman 2000, ch. 18.3, Gonçalves 2002, Gonçalves and Sterman 2005).
What are the causes of supply chain instability? Why does supply chain instability persist, despite the lean revolution and tremendous innovations in technology? What can be done to stabilize supply chains and improve their efficiency?
Here I describe the origins of supply chain instability from a complex systems perspective. The dynamics of supply chain networks arise endogenously from their structure. That structure includes both operational and behavioral elements.
From Operational and Behavioral Causes of Supply Chain Instability
Oscillation, Amplification, and Phase Lag
Exhibit 1 shows industrial production in the US. The data exhibit several modes of behavior. First, the long-run growth rate of manufacturing output is about 3.4%/year. Second, as seen in the bottom panel, production fluctuates significantly around the growth trend. The dominant periodicity is the business cycle, a cycle of prosperity and recession of about 3–5 years in duration, but exhibiting considerable variability.
The amplitude of business cycle fluctuations in materials production is significantly greater than that in consumer goods production (exhibit 2). The peaks and troughs of the cycle in materials production also tend to lag behind those in production of consumer goods. Typically, the amplitude of fluctuations increases as they propagate from the customer to the supplier, with each upstream stage tending to lag behind its customer. These three features, oscillation, amplification, and phase lag, are pervasive in supply chains.
From Booms, Busts, and Beer: Understanding the Dynamics of Supply Chains
A central question in operations management is whether the oscillations, amplification and phase lag observed in supply chains arise as the result of operational or behavioral causes.
Operational theories assume that decision makers are rational agents who make optimal decisions given their local incentives and information. Supply chain instability must then result from the interaction of rational actors with the physical and institutional structure of the system.
Physical structure includes the network linking customers and suppliers and the placement of inventories and buffers within it, along with capacity constraints and time delays in production, order fulfillment, transportation, and so on.
Institutional structure includes the degree of horizontal and vertical coordination and competition among firms, the availability of information to decision makers in each organization, and the incentives faced by each decision maker.
Behavioral explanations also capture the physical and institutional structure of supply chains, but view decision makers as boundedly rational actors with imperfect mental models, actors who use heuristics to make ordering, production, capacity acquisition, pricing and other decisions (Morecroft 1985, Sterman 2000, Boudreau et al. 2003, Gino & Pisano 2008, Bendoly et al. 2010, Croson et al. 2013).
Amplifications and Phase Lag
Amplification and phase lag arise from the presence of basic physical structures including stocks of inventory and delays in adjusting production or deliveries to changes in incoming orders.
Oscillations, however, are not inevitable. They arise from boundedly rational, behavioral decision processes.
The difference matters: if supply chain instability arises from operational factors and rational behavior, then policies must be directed at changing the physical and institutional structure of the system, including incentives.
If, however, instability arises from bounded rationality and emotional arousal, such policies may not be sufficient.
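A minimal sketch of this behavioral argument, in the spirit of Sterman's stock-management experiments: a decision maker who fully credits the supply line of unfilled orders (beta = 1) adjusts smoothly, while one who ignores it (beta = 0) generates oscillation. All parameter values are illustrative assumptions:

```python
def simulate(beta, periods=60, delay=3, alpha=0.5):
    """Stock-management heuristic: order = expected demand + a fraction
    alpha of the inventory gap + beta * alpha of the supply-line gap.
    beta = 1 fully accounts for orders already in the pipeline;
    beta = 0 ignores them (bounded rationality)."""
    target = 400.0
    demand = 100.0          # demand is known to the decision maker
    stock = target
    pipeline = [demand] * delay   # orders in transit (acquisition delay)
    stocks = []
    for t in range(periods):
        if t == 5:
            demand = 120.0        # one-time step increase in demand
        stock += pipeline.pop(0) - demand
        supply_line = sum(pipeline)
        desired_sl = demand * (delay - 1)
        order = max(0.0, demand
                    + alpha * (target - stock)
                    + beta * alpha * (desired_sl - supply_line))
        pipeline.append(order)
        stocks.append(stock)
    return stocks

ignore_sl = simulate(beta=0.0)   # supply line ignored
full_sl = simulate(beta=1.0)     # supply line fully credited

print("beta=0: min stock %.1f, max stock %.1f" % (min(ignore_sl), max(ignore_sl)))
print("beta=1: min stock %.1f, max stock %.1f" % (min(full_sl), max(full_sl)))
# beta=0 undershoots and then overshoots the 400-unit target (oscillation);
# beta=1 returns smoothly to target without overshooting.
```

The physical structure (the three-period delay) is identical in both runs; only the weight placed on the supply line differs, which is why the difference between operational and behavioral explanations matters for policy.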
Jay W Forrester
Hau L Lee
Negative Feedback Loop
Positive Feedback Loop
Supply Chain Networks
Beer Distribution Game
Operational and Institutional Structures
Key Sources of Research:
Behavioral Causes of Demand Amplification in Supply Chains: “Satisficing” Policies with Limited Information Cues
Reducing the Impact of Demand Process Variability within a Multi-Echelon Supply Chain
Francisco Campuzano Bolarín, Lorenzo Ros McDonnell, Juan Martín García
The impact of order variance amplification/dampening on supply chain performance
Robert N. Boute, Stephen M. Disney, Marc R. Lambrecht and Benny Van Houdt
Coping with Uncertainty: Reducing “Bullwhip” Behaviour in Global Supply Chains
Rachel Mason-Jones and Denis R. Towill
Bullwhip in Supply Chains ~ Past, Present and Future
Steve Geary, Stephen M. Disney, and Denis R. Towill
Shrinking the Supply Chain Uncertainty Circle
The Bullwhip Effect in Supply Chain: Reflections after a Decade
Gürdal Ertek, Emre Eryılmaz
Information distortion in a supply chain: The bullwhip effect
Hau L. Lee; V. Padmanabhan; Seungjin Whang
Management Science, Apr 1997, 43(4)
The Supply Chain Complexity Triangle: Uncertainty Generation in the Supply Chain
Financial Stability and Systemically Important Countries (IMF FSAP)
IMF – FSAP
Assess financial stability and development
From The Financial Sector Assessment Program (FSAP)
The goal of FSAP assessments is twofold: to gauge the stability and soundness of the financial sector, and to assess its potential contribution to growth and development.
To assess the stability of the financial sector, FSAP teams examine the resilience of the banking and other nonbank financial sectors; conduct stress tests and analyze systemic risks, including linkages among banks and nonbanks and domestic and cross-border spillovers; examine microprudential and macroprudential frameworks; review the quality of bank and nonbank supervision and financial market infrastructure oversight against accepted international standards; and evaluate the ability of central banks, regulators and supervisors, policymakers, and backstops and financial safety nets to respond effectively in case of systemic stress. While FSAPs do not evaluate the health of individual financial institutions and cannot predict or prevent financial crises, they identify the main vulnerabilities that could trigger one.
To assess the development aspects of the financial sector, FSAPs examine development needs in terms of institutions, markets, infrastructure, and inclusiveness; the quality of the legal framework and of the payments and settlements system; obstacles to the competitiveness and efficiency of the sector; topics relating to financial inclusion and retail payments; and the sector's contribution to economic growth and development. Issues related to the development of domestic capital markets are particularly important in developing and low-income countries. While focusing on development issues, FSAPs also keep financial stability dimensions in view.
Since 1999 the IMF has monitored countries’ financial sectors on a voluntary basis through the Financial Sector Assessment Program. In developing and emerging market countries, the World Bank participates in these assessments, focusing on long-term financial development issues.
In the context of these financial sector assessments, the IMF examines three key components in all countries:
• the soundness of banks and other financial institutions, including stress tests;
• the quality of financial market oversight in banking and, if appropriate, insurance and securities; and
• the ability of supervisors, policymakers, and financial safety nets to respond effectively in case of a crisis.
One size does not fit all in these assessments. The IMF tailors its focus in each of these areas to a country's individual circumstances, taking into account the potential sources of vulnerability for the country in question.
The objective is to assess countries’ crisis prevention and management frameworks, with the goal of supporting both national and global financial stability.
Based on the 2010 IMF-FSAP decision, these are among the jurisdictions identified as having systemically important financial sectors:
Hong Kong SAR
the United Kingdom
the United States
In 2014, four new countries were added to the list based on expanded criteria for financial stability:
Expanded criteria emphasize connections between financial sectors, institutions
More emphasis on how problems in one country affect others
From Press Release: IMF Expanding Surveillance to Require Mandatory Financial Stability Assessments of Countries with Systemically Important Financial Sectors
September 27, 2010
Press Release No. 10/357
September 27, 2010
The Executive Board of the International Monetary Fund (IMF) has approved making financial stability assessments under the Financial Sector Assessment Program (FSAP) a regular and mandatory part of the Fund’s surveillance for members with systemically important financial sectors. While participation in the FSAP program has been voluntary for all Fund members, the Executive Board’s decision will make financial stability assessments mandatory for members with systemically important financial sectors under Article IV of the Fund’s Articles of Agreement.
The decision adopted on September 21, 2010 to raise the profile of financial stability assessments under the FSAP for members with systemically important financial sectors is a recognition of the central role of financial systems in the domestic economy of its members, as well as in the overall stability of the global economy. It is a major step toward enhancing the Fund’s economic surveillance to take into account the lessons from the recent crisis, which originated in financial imbalances in large and globally interconnected countries.
The FSAP provides the framework for comprehensive and in-depth assessments of a country’s financial sector, and was established in 1999, in the aftermath of the Asian crisis. FSAP assessments are conducted by joint IMF-World Bank teams in developing and emerging market countries, and by the Fund alone in advanced economies. FSAPs have two components, which may also be conducted in separate modules: a financial stability assessment, which is the responsibility of the IMF and, in developing and emerging market countries, a financial development assessment, the responsibility of the World Bank.
These mandatory financial stability assessments will comprise three elements: 1) an evaluation of the source, probability, and potential impact of the main risks to macro-financial stability in the near term, based on an analysis of the structure and soundness of the financial system and its interlinkages with the rest of the economy; 2) an assessment of each country's financial stability policy framework, involving an evaluation of the effectiveness of financial sector supervision against international standards; and 3) an assessment of the authorities' capacity to manage and resolve a financial crisis should the risks materialize, looking at the country's liquidity management framework, financial safety nets, crisis preparedness and crisis resolution frameworks.
“The FSAP program has been a key tool for analyzing the strengths and weaknesses of the financial systems of IMF member countries. This is why more than three-quarters of the Fund’s members have volunteered for these assessments, some more than once. However, the recent crisis has made clear the need for mandatory and regular assessments of financial stability for countries with large and interconnected financial systems. The Board’s decision represents an important part of the international community’s response to the recent crisis and will buttress our ability to exercise surveillance over a key aspect of the global economic machinery – the financial system,” said John Lipsky, First Deputy Managing Director of the IMF.
A total of 25 jurisdictions were identified as having systemically important financial sectors, based on a methodology that combines the size and interconnectedness of each country’s financial sector.
They are, in alphabetical order: Australia, Austria, Belgium, Brazil, Canada, China, France, Germany, Hong Kong SAR, India, Ireland, Italy, Japan, Luxembourg, Mexico, the Netherlands, Russia, Singapore, South Korea, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States.
This group of countries covers almost 90 percent of the global financial system and 80 percent of global economic activity. It includes 15 of the Group of 20 member countries, and a majority of members of the Financial Stability Board, which has been working with the IMF on monitoring compliance with international banking regulations and standards. Each country on this list will have a mandatory financial stability assessment every five years. Countries may undergo more frequent assessments, if appropriate, on a voluntary basis. The methodology and list of jurisdictions will be reviewed periodically to make sure they continue to capture the countries with the most systemically important financial sectors that need to be covered by regular, in-depth, mandatory financial stability assessments.
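The "size and interconnectedness" methodology above can be illustrated with a toy composite score. This is a minimal sketch, not the IMF's actual formula: the country names, scores, and the equal weighting are all hypothetical.

```python
# Sketch of a size-plus-interconnectedness ranking in the spirit of the
# methodology described above. All figures and weights are hypothetical.

# (jurisdiction, financial sector size score, interconnectedness score),
# both normalized to [0, 1] for illustration.
jurisdictions = [
    ("Country A", 0.95, 0.90),
    ("Country B", 0.40, 0.85),
    ("Country C", 0.70, 0.30),
    ("Country D", 0.10, 0.15),
]

def systemic_score(size, interconnectedness, w_size=0.5):
    """Weighted composite of size and interconnectedness (weights assumed)."""
    return w_size * size + (1.0 - w_size) * interconnectedness

# Rank jurisdictions by composite score, highest first
ranked = sorted(jurisdictions,
                key=lambda j: systemic_score(j[1], j[2]),
                reverse=True)
for name, size, inter in ranked:
    print(f"{name}: {systemic_score(size, inter):.3f}")
```

A real exercise would use measured data (sector assets relative to GDP, cross-border claims) and periodically revisit both the inputs and the cutoff, as the text notes.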
“Going forward, regular stability assessments of systemically important financial sectors should contribute to a deeper public understanding of the risks to economic stability arising from the financial sector. Financial instability can have a major impact on economic activity and job creation,” Mr. Lipsky said. “At the same time, we are committed—with the World Bank—to ensuring that this new mandate does not crowd out FSAP assessments in other countries.”
From IMF Survey: IMF Broadens Financial Surveillance
New methods for new risks
In the wake of the crisis, the IMF has strengthened the framework for surveillance of countries’ financial systems.
In 2010, the IMF made financial sector assessments mandatory for the countries with the most systemically important financial sectors in the global system, initially 25 and now 29.
The IMF is also focusing on how problems in one country can affect others, and on the connections between financial institutions. The IMF, among others, has developed what are known as network models to try to understand how events in one financial institution, market, or country will affect others.
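One common building block of such network models is a default cascade over a matrix of interbank exposures: one institution fails, its creditors take losses, and further failures follow if losses exceed capital. The sketch below is a deliberately simplified illustration of that idea; the banks, balance-sheet numbers, and full loss-given-default are all invented.

```python
# Minimal default-cascade sketch of the "network model" idea described above.
# exposures[i][j] = amount bank i is owed by bank j (hypothetical figures).

exposures = {
    "A": {"B": 30, "C": 10},
    "B": {"C": 20},
    "C": {"A": 5},
}
capital = {"A": 25, "B": 15, "C": 12}

def cascade(initial_failure, exposures, capital, loss_given_default=1.0):
    """Return the set of failed banks after losses propagate through the network."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for bank, claims in exposures.items():
            if bank in failed:
                continue
            # Losses on claims against banks that have already failed
            loss = sum(amount * loss_given_default
                       for debtor, amount in claims.items() if debtor in failed)
            if loss >= capital[bank]:
                failed.add(bank)
                changed = True
    return failed

# With these numbers, the failure of C wipes out B, which in turn wipes out A.
print(cascade("C", exposures, capital))
```

Even this tiny example shows why interconnectedness matters independently of size: the smallest bank's failure can bring down the whole system if exposures are concentrated.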
Given the growing reach of global banks, the IMF also closely examines cross-border supervisory cooperation arrangements. In countries where foreign-owned banks are systemically important, it is essential that the host country supervisor has enough tools and good communications with the parent banks’ regulators.
Last December, the IMF Board reviewed the methodology that determines whether a country’s financial sector is systemically important. In light of the experience since the crisis, it agreed to place even more emphasis on the connections between financial sectors and institutions, expand the coverage of cross-border linkages to cover not only banking but also equity and debt exposures, and capture the potential for pure price contagion. Based on these revamped criteria, the IMF added four countries to the original 25.
From Systemic Risk: From measurement to the New Financial Stability Agenda
Broad coverage of possible cross-border transmission channels for shocks:
Balance Sheets, Financial Interconnectedness, and Financial Stability – G20 Data Gaps Initiative
From G-20 Data Gaps Initiative II: Meeting the Policy Challenge
In 2009, the G-20 Finance Ministers and Central Bank Governors (FMCBG) endorsed 20 recommendations to address data gaps revealed by the global financial crisis. The initiative, aimed at supporting enhanced policy analysis, is led by the Financial Stability Board (FSB) and the International Monetary Fund (IMF). The Inter-Agency Group on Economic and Financial Statistics (IAG) plays the global facilitator role, coordinating and monitoring the implementation of the DGI recommendations.
The financial crisis, which started in 2007 with problems in the U.S. subprime market, spread to the rest of the world, becoming the most severe global crisis since the Great Depression. One difference between the global financial crisis and earlier post-war crises was that the crisis struck at the heart of the global financial system before spreading throughout the global economy. This required global efforts for recovery. As one element of the global response, in October 2009, the G-20 Finance Ministers and Central Bank Governors (FMCBG) endorsed the DGI, led by the Financial Stability Board (FSB) Secretariat and IMF staff. The DGI was launched as an overarching initiative of 20 recommendations to address information gaps revealed by the global financial crisis.
Following the global financial crisis, in 2008, the G-20 leaders, at their meeting in Washington, committed to implement a fundamental reform of the global financial system to strengthen financial markets and regulatory regimes so as to avoid future crises. As part of the reform agenda, the FSB was established in April 2009 as the successor to the Financial Stability Forum (FSF) and began working as the central locus of coordination to take forward the financial reform program developed by the relevant bodies. The obligations of FSB members include agreeing to undergo periodic peer reviews, using among other inputs IMF/World Bank Financial Sector Assessment Program (FSAP) reports. The G-20 leaders noted the importance of global efforts in implementing the global regulatory reform so as to protect against adverse cross-border, regional, and global developments affecting international financial stability.
The components of the G-20 regulatory reform agenda complement each other, with the ultimate goal of strengthening the international financial system. The DGI has been an important element of this agenda, as the regulatory reform agenda items mostly require better data. The collection of data on Global Systemically Important Banks’ (G-SIBs) exposures and funding dependencies is among the steps toward addressing the “too-big-to-fail” issue by reducing the probability and impact of G-SIBs failing. The FSB work on developing standards and processes for global data collection and aggregation on securities financing transactions aims to improve transparency in securitization, toward the main goal of reducing risks related to the shadow banking system. Over-the-counter (OTC) derivatives markets, including Credit Default Swaps (CDS), were brought under greater scrutiny toward the main goal of making derivatives markets safer following the global crisis. The DGI supported this goal by improving information on CDS markets. A number of other G-20 initiatives have strong links with the DGI project, including the FSB work on strengthening the oversight and regulation of the shadow banking system, and the work on global legal entity identifiers (LEI), which contributes to the robustness of data frameworks with a more micro focus. The evolving global regulatory reforms, particularly the implementation of Basel III, were also taken into consideration in the development of the DGI.
The importance of closing the data gaps hampering the surveillance of financial systems was also highlighted as part of the IMF’s 2014 Triennial Surveillance Review (TSR). The 2014 TSR emphasized that, due to growing interconnectedness across borders, financial market shocks will continue to have significant spillovers via both capital flows and shifts in risk positions. Also, new dimensions of interconnectedness will continue to emerge, such as through the potential short-run adverse spillovers generated by the financial regulatory reforms. To this end, the TSR recommended improving information on balance sheets and enriching flow-of-funds data. The IMF has overhauled its surveillance to make it more risk-based. To this end, the IMF Managing Director’s Action Plan for Strengthening Surveillance following the 2014 TSR underlined that the IMF will revive and adapt the Balance Sheet Approach (BSA) to facilitate a more in-depth analysis of the impact of shocks and their transmission across sectors, and possibly initiate the global flow of funds to better reflect global interconnections (Box 1). This work requires data from the DGI, as it will help support the IMF’s macro-financial work, including in key exercises and reports (i.e., the Early Warning Exercise, FSAP, and GFSR).
Global Flow of Funds
Through the use of internationally agreed statistical standards, data on cross-border financial exposures (IBS, CPIS, and the Coordinated Direct Investment Survey (CDIS)) can be linked with the domestic sectoral accounts data to build up a comprehensive picture of financial interconnections domestically and across borders, with a link back to the real economy through the sectoral accounts. This work is known as the “Global Flow of Funds (GFF).” The GFF project is mainly aimed at constructing a matrix that identifies interlinkages among domestic sectors and with counterpart countries (and possibly counterpart country sectors) to build up a picture of bilateral financial exposures and support analysis of potential sources of contagion. The concept of the GFF was first outlined in the Second Progress Report on the G-20 Data Gaps Initiative and initiated in 2013 as part of a broader IMF initiative aimed at strengthening the analysis of interconnectedness across borders, global liquidity flows, and global financial interdependencies. In the longer term, the GFF matrix is intended to support regular monitoring of bilateral cross-border financial positions through a framework that highlights risks to national and international financial stability. IMF staff is working toward developing a GFF matrix, starting with the largest global economies.
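The GFF matrix described above is essentially a "who-to-whom" table: rows are creditor sectors (or countries), columns are debtors, and each cell is a bilateral position. A toy version, with invented sector names and amounts, can be sketched as follows.

```python
# Toy who-to-whom matrix in the spirit of the GFF. Sectors and amounts
# are illustrative only, not real data.

sectors = ["Households", "Banks", "Government", "Rest of world"]

# positions[i][j] = claims of sector i on sector j (billions, hypothetical)
positions = [
    [0,   300, 120,  40],   # Households
    [250,   0, 180,  90],   # Banks
    [10,   60,   0,  20],   # Government
    [30,  140,  70,   0],   # Rest of world
]

def total_claims(matrix, i):
    """Total financial assets of sector i (row sum)."""
    return sum(matrix[i])

def total_liabilities(matrix, j):
    """Total financial liabilities of sector j (column sum)."""
    return sum(row[j] for row in matrix)

for i, name in enumerate(sectors):
    print(f"{name}: assets={total_claims(positions, i)}, "
          f"liabilities={total_liabilities(positions, i)}")
```

The analytical payoff of the bilateral detail is that a shock to one cell (say, a write-down of bank claims on the rest of the world) can be traced to specific counterparty sectors, rather than only to aggregate totals.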
How Does the DGI Address the Surveillance Agenda?
As noted above, in the wake of the 2014 TSR the IMF Managing Director published an Action Plan for Strengthening Surveillance. Among the actions to be taken was that “The Fund will revive and adapt the balance sheet approach to facilitate a more in-depth analysis of the impact of shocks and their transmission across sectors.” This responded to a call from outside experts David Li and Paul Tucker in their external study for the 2014 TSR on risks and spillovers.
Although the 2007/08 crisis emerged in the financial sector, the sector’s intermediary role meant that its problems also affected the other sectors of the economy. To this end, analysis of balance sheet exposures is essential given the increasingly interconnected global economy. As the 2014 TSR points out, the use of balance sheets to identify sources of vulnerability and the transmission of shocks could have helped detect risks associated with European banks’ reliance on U.S. wholesale funding to finance structured products. In June 2015, the IMF set out the way forward in a paper for the IMF Executive Board on Balance Sheet Analysis in Surveillance. Sectoral accounts and balance sheet data, including from-whom-to-whom data, are essential in providing the context for an assessment of the links between the real economy and financial sectors. The sectoral balance sheets of the SNA are seen as the overarching framework for balance sheet analysis, as the IMF Executive Board paper makes clear. Further, the paper sets out a data framework for such analysis. Putting the sectoral balance sheets of the SNA in a policy context, the IMF has developed the BSA, which compiles all the main balance sheets in an economy using aggregate data by sector. The BSA is based on the same conceptual principles as the sectoral accounts, providing information on a from-whom-to-whom basis with an additional focus on vulnerabilities arising from maturity and currency mismatches as well as the capital structure of economic sectors.
While few economies currently compile from-whom-to-whom balance sheet data, BSA data can be compiled from the IMF’s Standardized Report Forms, the IIP, and government balance sheet data—a more limited set of data than is needed to compile the sectoral accounts. The DGI-2 recommendations address key data gaps that act as a constraint on a full-fledged balance sheet analysis. The DGI recommends addressing such gaps by improving G-20 economies’ dissemination of sectoral accounts and balance sheets, building on the 2008 SNA, including for the non-financial corporate and household sectors (Annex 1, Recommendation II.8). Given the multifaceted character of the datasets, implementation of this recommendation is challenging and progress has been slow. However, all G-20 economies agree on the importance of having such information and have plans in place to make it happen.
The crisis emphasized the fact that it is not possible to isolate the problems in a single financial system, as shocks propagate rapidly across financial systems. Indeed, since 2010 the IMF has been identifying jurisdictions with systemically important financial sectors based on a set of relevant and transparent criteria, including size and interconnectedness. Within this identification framework, cross-border interconnectedness is considered an important complementary measure to the size of the economy: it captures the systemic risk that can arise through direct and indirect interlinkages among financial sectors in the global financial system (i.e., the risk that failure or malfunction of a national financial system may have severe repercussions on other countries or on overall systemic stability). The 2014 TSR summed up the issue succinctly in its Executive Summary: “Risks and spillovers remain first-order issues for the world economy and should be central to Fund surveillance. Recent reforms have made surveillance more risk-based, helping to better capture global interconnections. Experience so far also points to the need to build a deeper understanding of how risks map across countries, and how spillovers can quickly spread across sectors to expose domestic vulnerabilities.” Four existing datasets that include key information on cross-country financial linkages are the IIP, BIS IBS, IMF CPIS, and IMF CDIS. Together these datasets provide a comprehensive picture of cross-border financial interconnections. This picture is especially relevant for policy makers as financial connections strengthen across borders and domestic conditions are affected by financial developments in other economies to which they are closely linked financially. DGI-2 focuses on improving the availability and cross-country comparability of these datasets (Annex 1, Recommendations II.10, 11, 12, and 13).
The well-known IIP is a key data source for understanding the linkages between the domestic economy and the rest of the world, providing information on both external assets and liabilities of the economy with a detailed instrument breakdown. However, the crisis revealed the need for currency and more detailed sector breakdowns, particularly for the other financial corporations (OFCs) sector. Consequently, as part of the DGI, the IIP was enhanced to support these policy needs. Significant progress has also been made in ensuring regular reporting of the IIP, along with an increase in the frequency of reporting from annual to quarterly. By end-2015, virtually all G-20 economies reported quarterly IIP data. The IBS have been a key source of data for many decades, providing information on aggregate assets and liabilities of internationally active banking systems at a quarterly frequency. The CPIS data, while on an annual frequency, provided significant insights into portfolio investment assets. That said, both datasets had limitations in terms of country coverage and granularity. The CPIS also needed to be improved in terms of frequency and timeliness. To this end, the DGI supported enhancements in these datasets.
G-20 Data Gaps Initiative (DGI)
Financial Stability Board (FSB)
The Inter-Agency Group on Economic and Financial Statistics (IAG)
Finance Ministers and Central Bank Governors (FMCBG)
Financial Stability Forum (FSF)
Global Systemically Important Banks (G-SIBs)
Credit Default Swap (CDS)
Global legal entity identifiers (LEI)
IMF Triennial Surveillance Review (TSR)
IMF Balance Sheet Approach (BSA)
IMF Global Flow of Funds (GFF)
IMF IIP (International Investment Positions)
BIS IBS (International Banking Statistics)
IMF CPIS (Coordinated Portfolio Investment Survey)
IMF CDIS (Coordinated Direct Investment Survey)
IMF GFSR (Global Financial Stability Report)
Other Related Terms:
Global Systemically Important Financial Institutions (G-SIFIs)
Global Systemically Important Insurers (G-SIIs)
Systemically Important Financial Market Utilities (G-FMUs)
Nonbank Financial Companies (G-SINFC)
Financial Stability Oversight Council (FSOC)
The IAG members are
BIS (Bank for International Settlements)
G20 (Group of 20 Nations)
IMF (International Monetary Fund)
OECD (Organisation for Economic Co-operation and Development)
ECB (European Central Bank)
Eurostat (European Statistics/Directorate-General of the European Commission)
UN (United Nations)
From G-20 Data Gaps Initiative II: Meeting the Policy Challenge
Progress of DGI (DGI-1 and DGI-2)
From G-20 Data Gaps Initiative II: Meeting the Policy Challenge
The first phase of the DGI was successfully concluded in September 2015 and the second phase of the initiative (DGI-2) was endorsed by the G-20 FMCBG. The key objective of the DGI-2 is to implement the regular collection and dissemination of comparable, timely, integrated, high quality, and standardized statistics for policy use. DGI-2 encompasses 20 new or revised recommendations, focused on datasets that support: (i) monitoring of risk in the financial sector; and (ii) analysis of vulnerabilities, interconnections and spillovers, not least cross-border.
Following the significant progress in closing some of the information gaps identified during the global financial crisis of 2007/08, the G-20 FMCBG endorsed, in September 2015, the closing of DGI-1. During the six-year implementation of DGI-1, significant achievements were made, particularly regarding the development of conceptual frameworks, as well as enhancements in some statistical collection and reporting. Regarding the latter, more work is needed for the implementation of some recommendations, especially in seven high-priority areas across G-20 economies, notably in government finance statistics and sectoral accounts and balance sheets.
In September 2015, the G-20 FMCBG also endorsed the launch of the second phase of the DGI. The main objective of DGI-2 is to implement the regular collection and dissemination of reliable and timely statistics for policy use. Its twenty recommendations are clustered under three main headings: (1) monitoring risk in the financial sector, (2) vulnerabilities, interconnections and spillovers, and (3) data sharing and communication of official statistics. The DGI-2 maintains the continuity with the DGI-1 recommendations while setting more specific objectives with the intention for the G-20 economies to compile and disseminate minimum common datasets for these recommendations. The DGI-2 also includes new recommendations to reflect the evolving users’ needs. Furthermore, the DGI-2 aims at strengthening the synergies with other relevant global initiatives.
The DGI-2 facilitates closing data gaps that are policy-relevant. By achieving its main objective, the DGI-2 will be instrumental in closing gaps in policy-relevant data. Most of the datasets covered by the DGI-2 are particularly relevant for meeting the emerging macro- financial policy needs, including the analysis of international positions, global liquidity, foreign currency exposures, and capital flows volatility.
The DGI-2 introduces action plans that set out specific “targets” for the implementation of its twenty recommendations over the five-year horizon of the initiative. The action plans acknowledge that countries may be at different stages of statistical development and take into account national priorities and resource constraints. The DGI-2 intends to bring the G-20 economies to higher common statistical standards through a coordinated effort; however, flexibility will be allowed in terms of intermediate steps to achieve the targets based on national priorities, resource constraints, emerging data needs, and other considerations.
Key Sources of Research:
Second Phase of the G-20 Data Gaps Initiative (DGI-2) Second Progress Report
Prepared by the Staff of the IMF and the FSB Secretariat September 2017
Integrated Macroeconomic Accounts, NIPAs, and Financial Accounts
Integrated Macroeconomic Accounts are based on:
National Income and Product Accounts – maintained by Bureau of Economic Analysis
Flow of funds – Financial Accounts – maintained by Federal Reserve
In addition, there are Input-Output Accounts.
Another important development was the double-entry bookkeeping method in accounting.
From Financial Accounts of the United States – Z.1
Financial Accounts of the United States
The Statistical Release Z.1, “Financial Accounts of the United States,” is organized into the following sections:
Matrices summarizing flows and levels across sectors, tables on debt growth, net national wealth, gross domestic product (GDP), national income, saving, and so on.
Flows of financial assets and liabilities, by sector and by financial instrument
Levels of financial assets and liabilities, by sector and by financial instrument
Balance sheets, including nonfinancial assets, and changes in net worth for households and nonprofit organizations, nonfinancial corporate businesses, and nonfinancial noncorporate businesses
Supplementary tables providing equity detail of the household and nonprofit organization sector and data on nonprofit organizations
Integrated Macroeconomic Accounts
The Integrated Macroeconomic Accounts (IMA) relate production, income, saving, and capital formation from the national income and product accounts (NIPA) to changes in net worth from the “Financial Accounts” on a sector-by-sector basis. The IMA are published jointly by the Federal Reserve Board and the Bureau of Economic Analysis and are based on international guidelines and terminology as defined in the System of National Accounts (SNA2008).
From The Integrated Macroeconomic Accounts of the United States
The integrated macroeconomic accounts (IMAs) were developed as part of an interagency effort to further harmonize the Bureau of Economic Analysis’s (BEA) national income and product accounts (NIPAs) and the Federal Reserve Board’s (FRB) flow of funds accounts (FFAs). Work began on these accounts in 2002 and the first version of the tables was presented at the NBER Conference on Research in Income and Wealth, Architecture for the National Accounts, in April 2004, with annual data for 1985 to 2002. A February 2007 Survey of Current Business article officially introduced the regular production of the IMAs. The System of National Accounts, 1993 (SNA) was used as the organizing framework in an effort to bring these accounts into closer accordance with international guidelines. While the resultant set of IMAs is largely in accordance with the SNA, there remain differences, particularly with respect to the way sectors are defined. The IMAs use a consistent set of sectors throughout the entire sequence of accounts, and these sectors are primarily based on definitions used in either the NIPAs or FFAs. In the SNA, institutions are grouped into five mutually exclusive sectors: 1) nonfinancial corporations, 2) financial corporations, 3) general government, 4) nonprofit institutions serving households (NPISH), and 5) households. In the IMAs, estimates are calculated and presented for the following seven mutually exclusive sectors: 1) households and NPISH, 2) nonfinancial noncorporate business, 3) nonfinancial corporate business, 4) financial business, 5) federal government, 6) state and local government, and 7) rest of world.
The primary difference is in the treatment of noncorporate business (which includes entities such as sole proprietorships, general partnerships, limited liability partnerships, and government enterprises): with the exception of government enterprises, these entities are classified as either nonfinancial noncorporate business or financial business in the IMAs. Government enterprise activities are reflected in the government sectors within the IMAs. In contrast, in the SNA, limited liability companies, limited partnerships, and government enterprises are classified as “quasi-corporations” in the financial or nonfinancial corporate sectors, and general partnerships and sole proprietorships are classified in the household sector.
Most of the IMA series are derived from published NIPA and FFA data. Current account and capital account statistics are based on NIPA data, while the financial account statistics are based on FFA data. The other changes in volume, revaluation, and balance sheet accounts reflect a combination of both BEA and FFA data. BEA provides FRB with unpublished data, where necessary, and the FRB is responsible for compiling the full set of integrated accounts. The IMAs are updated quarterly about 75 days after the end of the quarter and are published by both BEA and FRB. The BEA-supplied statistics for the most recent quarter typically reflect revisions to the previous quarter, with more substantial historical revisions stemming from the NIPA annual revision introduced with the September IMA release. The FRB-supplied statistics are not constrained by a set revision cycle and thus are open to historical revisions with each quarterly update.
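The sector-by-sector linkage the IMAs provide, from NIPA-based saving through to the change in balance-sheet net worth, rests on a simple accounting identity. The sketch below illustrates it with hypothetical component values (a simplified form; the full accounts also break out capital transfers and other detail).

```python
# Simplified IMA-style identity linking flow accounts to the balance sheet:
#
#   change in net worth = net saving
#                       + other changes in volume
#                       + revaluations (holding gains/losses)
#
# All figures below are hypothetical, for illustration only.

net_saving = 120.0            # from the current and capital accounts (NIPA side)
other_volume_changes = -5.0   # e.g., disaster losses, reclassifications
revaluations = 40.0           # holding gains/losses on assets (financial-accounts side)

change_in_net_worth = net_saving + other_volume_changes + revaluations
print(change_in_net_worth)

# Consistency check against the balance sheet account:
net_worth_start = 1000.0
net_worth_end = net_worth_start + change_in_net_worth
print(net_worth_end)
```

The practical value of the integration is exactly this check: if the flow accounts and the balance sheets are compiled consistently, the identity holds for every sector in every period, and any residual flags a measurement gap.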
Bureau of Economic Analysis (BEA) Publishes
GDP and Personal Income
GDP by Industry
International Transactions, Services, and IIP
Direct Investments and MNEs
GDP and Personal Income
GDP and Personal Income data includes
Gross domestic product (GDP)
Gross domestic income (GDI)
Government receipts and expenditures
Personal income and disposable personal income
Personal consumption expenditures (PCE), or consumer spending
From A Guide to the National Income and Product Accounts of the United States
The estimation of national income was initiated during the early 1930s, when the lack of comprehensive economic data frustrated the efforts of Presidents Hoover and Roosevelt to design policies to combat the Great Depression. In response to this need, the Department of Commerce commissioned Simon Kuznets of the National Bureau of Economic Research (NBER) to develop estimates of national income. Professor Kuznets headed a small group within the Bureau of Foreign and Domestic Commerce’s Division of Economic Research and coordinated the work of researchers at the NBER in New York and his staff at Commerce. The estimates were presented in a report to the Senate in 1934 (National Income, 1929–32).
The entry of the United States into World War II led to increased demand for data that could be used for wartime planning. Early in 1942, annual estimates of gross national product (GNP) were introduced to complement the estimates of national income. In addition, estimates were developed to detail how income was generated, received, and spent by various sectors of the economy.
The U.S. national income and product statistics were first presented as part of a complete and consistent accounting system in the July 1947 supplement to the SURVEY OF CURRENT BUSINESS. The supplement contained 48 tables covering the years 1929–46. All estimates were presented in current dollars; no adjustments were yet made for changes in purchasing power. Quarterly estimates were available for only a few of the aggregates (national income, GNP, and personal income, and their major components). Monthly estimates were presented for personal income and its major components.
In 1951, annual estimates of real GNP and of implicit price deflators were introduced as supplementary tables. Real GNP was calculated by holding fixed the prices of a particular base year; that is, GNP was calculated in “constant dollars.” In 1954, these inflation-adjusted estimates were formally integrated into the standard NIPA tables.
Another revision, published in 1958, introduced changes in the accounting system and added new information to the accounts. Five summary accounts were adopted as a concise, general presentation of output, income, outlays, foreign transactions, saving, and investment. Quarterly estimates of real GNP were introduced. Government sector tables provided a new breakdown of expenditures by type and function for the Federal Government and for state and local governments. The foreign transactions tables were expanded in detail and integrated with the balance of payments accounts. Regional estimates were introduced, as were estimates of the net stock of fixed assets in manufacturing.
In the 1965 comprehensive revision, for the first time, the components of GNP were benchmarked to the detailed estimates contained in the 1958 input-output table, which provided a better understanding of the structural relationships within the economy.
During the 1960s and 1970s, the estimates of capital stock were expanded to cover all business- and government-owned fixed assets and consumer durable goods. In 1976, in order to provide a more consistent valuation, the estimates of consumption of fixed capital (CFC) were shifted to a current-cost basis. Previously, the estimates were on a book-value basis—that is, valued at historical cost—reflecting a mixture of prices for the various years in which the assets were acquired.
In 1985, BEA introduced quality-adjusted price indexes for computers and peripheral equipment that were developed with the assistance and advice of researchers from the IBM Corporation. The indexes, which were based on a statistical technique known as “hedonic” regression, adjusted for the rapid improvements in speed and capacity of computer equipment. These hedonic price indexes provide improved measures of price change for computers and peripheral equipment during periods when quality characteristics change rapidly and when prices decline as new products are introduced.
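The core of a hedonic regression is regressing price on quality characteristics, so that improvements in the characteristics are credited as quantity growth rather than inflation. The sketch below is a bare-bones single-characteristic version, not BEA's actual specification; the speed/price observations are invented.

```python
# Bare-bones hedonic regression sketch: regress log price on log speed,
# so the estimated coefficient is the price elasticity with respect to
# that quality characteristic. Data points are hypothetical.
import math

# (speed in MHz, observed price in dollars) -- invented sample
observations = [(10, 5000), (20, 5500), (40, 6100), (80, 6700)]

x = [math.log(s) for s, _ in observations]
y = [math.log(p) for _, p in observations]

# Ordinary least squares for y = a + b*x, closed form
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar  # intercept (log price of a "unit-speed" machine)

# A quality-adjusted index would use b to strip the value of speed
# improvements out of observed price changes.
print(f"price elasticity w.r.t. speed: {b:.3f}")
```

A production hedonic index would use many characteristics (speed, memory, storage) and a large cross-section of models, but the mechanics are the same multivariate regression.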
In 1991, BEA changed its featured measure of U.S. production from GNP to GDP. GDP covers the goods and services produced by labor and property located in the United States and is thus consistent with key economic indicators of employment, productivity, and industry output. The change also facilitated comparisons of economic activity in the United States with that in other countries.
In 1993, the System of National Accounts 1993 (SNA 1993) was adopted by the international community in order to facilitate international comparisons of national economic statistics and to serve as a guide for countries as they develop their economic accounting systems. BEA actively participated in preparing SNA 1993 and announced its plan to move toward consistency with SNA 1993. Since then, the major improvements in the NIPAs have been designed, at least in part, to incorporate the SNA’s concepts and definitions wherever feasible.
In 1996, BEA introduced several major improvements to the NIPAs. BEA began estimating the changes in real GDP and its components by chaining together year-by-year quantity changes that were calculated using the Fisher index formula, rather than estimating real GDP on the basis of prices of a single, arbitrary base year. Government expenditures for equipment and structures were recognized as fixed investment, thereby providing a more complete measure of investment through the consistent treatment of fixed assets whether purchased by the public or the private sector. The method for calculating CFC was changed to reflect the results of studies on the prices of used equipment and structures in resale markets that found that depreciation generally tends to follow a geometric pattern.
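The Fisher chaining described above can be illustrated with a toy two-good economy (all prices and quantities are invented). Each year-to-year link is the geometric mean of the Laspeyres and Paasche quantity indexes, and the links are multiplied together instead of fixing a single base year:

```python
import math

def fisher_quantity_link(p0, q0, p1, q1):
    """Fisher ideal quantity index between two adjacent periods."""
    laspeyres = (sum(p * q for p, q in zip(p0, q1)) /
                 sum(p * q for p, q in zip(p0, q0)))
    paasche = (sum(p * q for p, q in zip(p1, q1)) /
               sum(p * q for p, q in zip(p1, q0)))
    return math.sqrt(laspeyres * paasche)

# Hypothetical prices and quantities for two goods over three years
prices = [[1.0, 2.0], [1.1, 1.8], [1.2, 1.5]]
quantities = [[100, 50], [102, 60], [105, 75]]

# Chain the year-by-year links into a cumulative quantity index
index = [1.0]
for t in range(1, len(prices)):
    link = fisher_quantity_link(prices[t - 1], quantities[t - 1],
                                prices[t], quantities[t])
    index.append(index[-1] * link)
print(index)
```

Because each link uses the prices of the two adjacent years, the index keeps the weights up to date as relative prices shift (here, good 2 gets cheaper as its quantity grows), which is exactly the substitution problem a fixed base year mishandles.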
The 1999 comprehensive revision of the NIPAs further improved the definitions underlying the accounts and the statistical underpinnings of the current-dollar estimates, quantities, and prices in the accounts. For example, business and government expenditures for software were recognized as fixed investment. Government employee retirement plans were reclassified so that they would be treated similarly to private pension plans. A new method was introduced for calculating the real value of unpriced bank services by incorporating measures of banking activity. The consumer price indexes that were used for deflating personal consumption expenditures (PCE) were revised back to 1978 to reflect the use of a geometric mean formula.
The most recent comprehensive revision of the NIPAs, which was released beginning in 2003, further improved and updated the accounts.
In 2004, BEA participated in a Conference on Research in Income and Wealth on “A New Architecture for the U.S. National Accounts.” The purpose of the conference was to initiate the development of a comprehensive and fully integrated set of U.S. national accounts. Conference participants identified short-term and long-term initiatives to more fully integrate the existing sets of accounts, to uncover gaps and inconsistencies, and to expand and integrate systems of nonmarket accounts with the core system. As part of this exercise, participants identified initiatives to integrate BEA’s existing set of accounts with other U.S. economic accounts, including the productivity accounts prepared by the Bureau of Labor Statistics and the flow of funds accounts prepared by the Federal Reserve Board.
Simon Kuznets and Richard Stone: National Income Accounts
From Taking the Pulse of the Economy: Measuring GDP
National income and product accounts—best known by one of their principal aggregates, gross domestic product (GDP)—are produced by virtually every nation in the world. Simon Kuznets and Richard Stone, both later to become Nobel Prize winners, led the creation of the national accounts for the United States and the United Kingdom, respectively.
From THE NATIONAL ACCOUNTS AS A TOOL FOR ANALYSIS AND POLICY; PAST, PRESENT AND FUTURE
The development of national accounting systems occurred simultaneously in Britain, the Netherlands and the Scandinavian countries. This development was closely linked with three other major innovations in economic theory in the 1930s: input-output analysis, econometric modelling of the whole economy and the Keynesian revolution.
From VALUE AND INCOME IN THE NATIONAL ACCOUNTS AND ECONOMIC THEORY
National accounting and economic theory have a long joint history, both in persons and in concepts. Some important cases in point are:
King and Petty are not only the founding fathers of national accounting, but should also be remembered for their contributions to economic theory. King’s law of demand can be regarded as the first statistical demand curve. Petty is known for his work on the velocity of money. He also acknowledged the importance of the concept of human capital, by making an estimate of its value in England.
Kuznets’ work on economic growth and historical time series has been important to both national accounting and economic theory. The same applies to Leontief’s pathbreaking work on input-output analysis.
Hicks and Frisch are generally known for their contributions to economic theory and econometrics. However, they also made important contributions to national accounting.
The reverse situation holds for Stone: his role in the development of international guidelines on national accounting is his most outstanding contribution but he should also be remembered as one of the pioneers of econometrics.
The Keynesian revolution was important to both economic theory and national accounting. It stimulated the development of the national accounts all over the world. The drastic increase in the availability of national accounts figures reinforced the Keynesian revolution in economic theory (and policy). Furthermore, during the Second World War, Keynes, at that time a high-ranking official in the U.K., asked Stone and Meade to develop and estimate a system of national accounts for improving the planning of the war budget.
Since the Second World War, the role and nature of national accounting has drastically changed. National accounting has become well-established and institutionalized.
From Three centuries of macro-economic statistics
Simon Kuznets (see e.g. Kuznets, 1941) reconstructed national income and product accounts for the USA, first back to 1919 and eventually back to 1869. Such impressive measurement exercises were the input for investigating business cycles and long term economic growth. For example, what was the role of the various industries, what was the role of technology and innovations, what is the relationship between economic growth and inequality (the U-shaped Kuznets-curve) or between economic growth and urbanization, traffic congestion and pollution. These examples also illustrate that Kuznets was very well aware of the major differences between economic growth and welfare. According to Kuznets “As a general formula, the desirability of as high and sustained a growth rate as is compatible with the costs that society is willing to bear is valid, but in using it to judge economic problems and policies, distinctions must be kept in mind between quantity and quality of growth, between its costs and return, and between the short and the long run”.
Wassily Leontief and Input Output Accounts
From THE NATIONAL ACCOUNTS AS A TOOL FOR ANALYSIS AND POLICY; PAST, PRESENT AND FUTURE
In 1936, Leontief published an article, which started input-output analysis (Leontief, 1936). For this major innovation, Leontief was later awarded the Nobel Prize. Input-output analysis started not fully out of the blue. Precursors can be found amongst other things in Quesnay’s zigzag diagram (“Tableau Economique”) and some of the equations relating input and output by Walras (see Stone, 1984). The crucial innovation contained in Leontief’s article was that it formulated for the first time a “model connecting inputs and output, which made it possible to calculate indirect as well as direct inputs and thus to carry out the many, now familiar, analyses which depend on being able to do this” (see Stone, 1984).
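Leontief's innovation of computing indirect as well as direct inputs can be shown in a few lines. The coefficients below are hypothetical; solving x = (I − A)⁻¹f gives the gross output each industry must produce to satisfy a given final demand, including all the inputs needed to produce the inputs:

```python
import numpy as np

# Hypothetical two-industry economy. A[i][j] is the input from
# industry i needed per unit of industry j's output (direct requirements).
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
final_demand = np.array([100.0, 50.0])

# Total requirements solve x = A x + f, i.e. x = (I - A)^{-1} f
leontief_inverse = np.linalg.inv(np.eye(2) - A)
gross_output = leontief_inverse @ final_demand
print(gross_output)
```

Note that gross output exceeds final demand in both industries: the gap is exactly the chain of indirect requirements that, before Leontief's model, could not be calculated from the direct coefficients alone.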
Commodity-flow accounting can be regarded as the statistical counterpart of input-output analysis, as commodity-flow accounts are a type of input-output table. Commodity-flow accounting started in Sweden (Lindahl), Denmark (Kampmann) and the United States (Kuznets). In Sweden, the results of a ten-year project under the direction of Lindahl were published in 1937. In this monumental and well-thought out study, the inputs of industries were calculated by some form of commodity-flow analysis, i.e. as the sum of inputs received by that industry from other industries as computed from the production statistics of the other industries (adjusted for imports and exports). The Swedish study inspired work in Denmark by Kampmann on input-output tables, that included also estimates on national aggregates in constant prices. More information on these developments in Scandinavia can be found in Aukrust (1994, pp. 26-31).
In the forties and fifties, input-output analysis was developed more fully and many of its applications were proved to be successful analytical tools. In the international guidelines of 1968, input-output tables were explicitly linked to national accounting (see sections 3.2 and 3.3).
From Taking the Pulse of the Economy: Measuring GDP
In the 1950s, the development of input–output accounts by Leontief and others provided a conceptual framework for estimating the size of the economy by an income measure, by an expenditure measure, and also by a third method—a value-added measure. In 1964, the Bureau of Economic Analysis published its first input–output account that was directly tied to the national accounts (Goldman, Marimont, and Vaccara, 1964). The input–output table calculates GDP by three interlocking methods. First, it estimates each industry’s gross output and subtracts intermediate inputs from other industries to derive each industry’s residual value-added, which can be summed in what is sometimes called the “production approach” to estimate GDP. A second approach to estimating GDP, the “income approach” measures the income earned by the different factors of production. The third approach, the “final expenditures approach,” shows what is happening across different types of spending such as consumption, investment, and exports less imports. Table 1 shows the main categories within these three methods of measuring GDP for 2005.
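The interlocking of the production and final-expenditures approaches can be verified on a hypothetical two-industry example (all figures are invented): intermediate sales cancel out, so summing value added gives the same total as summing final sales.

```python
# Hypothetical two-industry economy: farming sells 80 of grain to
# baking; baking sells 10 back to farming; the rest is final output.
gross_output = {"farming": 100.0, "baking": 180.0}
intermediate_inputs = {"farming": 10.0, "baking": 80.0}  # purchases
intermediate_sales = {"farming": 80.0, "baking": 10.0}   # mirror of above

# Production approach: value added = gross output - intermediate inputs
gdp_production = sum(gross_output[i] - intermediate_inputs[i]
                     for i in gross_output)

# Final expenditures approach: output not sold as intermediate is final
gdp_expenditure = sum(gross_output[i] - intermediate_sales[i]
                      for i in gross_output)

print(gdp_production, gdp_expenditure)  # both 190.0
```

Total gross output is 280, but GDP is only 190 under either approach, because the 90 of intermediate transactions is netted out rather than double-counted.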
Please see my earlier post on Input Output Accounts
From The Origins of Financial Accounts in the United States and Italy: Copeland, Baffi and the Institutions
Copeland’s work ties in with various lines of analysis. The first connection is with the developments in national accounts that followed Keynes’s General Theory. Keynes invented not only a discipline, but also the words to describe it, setting the national accounts on a new basis. Their construction was not an easy one. Blanchard (2000) described macroeconomics before the Second World War as ‘an age of confusion’. During the interwar years progress in national accounts can be attributed to Colin Clark and Simon Kuznets. According to Patinkin (1982), the work of Clark and Kuznets was a statistical revolution that anticipated the Keynesian one. Later on, in 1942, Hicks published the first edition of The Social Framework. A major effort of organisation produced the United Nations’ System of National Accounts (SNA) of 1947, strongly based on Richard Stone’s paper (1945) on ‘Definition and Measurement of the National Income and Related Totals’. Copeland had already studied the national accounts before the Second World War, publishing papers in the NBER series Studies in Income and Wealth. His essays of 1935, ‘National Wealth and Income – An Interpretation’, and 1937, ‘Concepts of National Income’, were cited by Stone in the preparatory work for the SNA. Copeland’s view was that the estimation of national income could benefit from the use of a double-entry bookkeeping system, i.e. of the approach he would apply for his future moneyflows. During the Second World War, statistical work on national accounts received a tremendous impulse (see Carson 1975). After the focus on the ‘inflation gap’, i.e. the amount by which the real GDP exceeds potential GDP, it became natural to ask questions concerning the spending capacity (and therefore the financial situation) of the different sectors of the economy: households, firms, government, financial intermediaries, the rest of the world.
In other words, when the concepts of national income and other non-financial accounts had been codified, it was a consequent, yet complex, step to move on to the notion of financial accounts.
From Credit Aggregates from the Flow of Funds Accounts
The FOFA are based on research by Morris A. Copeland (1952), who had been studying financial flows when the NIPA became available in the early 1930s. With his training in accounting and with the NIPA in mind, Copeland began to calculate financial flow measures for the banking sector, and then, over a decade later, he compiled aggregate data for all sectors. In 1944, the National Bureau of Economic Research invited Copeland to develop a more complete system to account for financial flows. Copeland accepted the invitation, and in 1952, the Bureau published the results: U.S. financial flows and related balances for 1936 through 1942.
The Board of Governors of the Federal Reserve System continued the project and presented the result of its efforts in late 1955 in Flow of Funds in the United States, 1939–1953. The data, however, were on an annual basis and available only with a substantial time lag. In 1959, the Federal Reserve published a revised presentation with quarterly data. Since then the Federal Reserve has published regularly quarterly FOFA data.
Concepts of Level and Flow in the SNA and the Financial Accounts
From Financial Accounts
The level of an asset or liability (also referred to as the stock or outstanding) measures the value of the asset or liability in existence at a point in time. In the “Financial Accounts,” the levels are reported as of the end of each calendar quarter. In the SNA2008, the change in the level from one period to the next is called the “economic flow,” and can be decomposed into three broad elements: transactions, which measure the exchange of assets; revaluations, which measure changes in market value of untraded assets; and other changes in volume, which measure discontinuities or breaks in time series due to disaster losses or a change in source data or definition.
In the “Financial Accounts,” “flows” refer to the exchange of assets, corresponding to the SNA definition of transactions, that is, “flow tables” in the “Financial Accounts” are equivalent to “transaction tables” in the SNA terminology. In practice, other volume changes are relatively rare, and revaluations occur mainly for series carried at market value (such as corporate equities, real estate, and some debt securities), so for many series the change in the level is equal to the flow.
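The SNA decomposition of the change in levels described above can be written out with hypothetical quarterly figures for a holding carried at market value:

```python
# Sketch of the SNA identity: change in level = transactions
# + revaluations + other changes in volume (all figures invented).
level_prev = 1000.0        # level at end of previous quarter
transactions = 40.0        # net purchases during the quarter (the "flow")
revaluations = -25.0       # change in market value of existing holdings
other_volume_changes = 0.0 # e.g. disaster losses or definitional breaks

level = level_prev + transactions + revaluations + other_volume_changes
print(level)  # 1015.0

# For a series with no revaluations or breaks, the change in the level
# equals the flow, as the text notes for many book-value series.
assert level - level_prev == transactions + revaluations + other_volume_changes
```

Here the level rises by only 15 even though net purchases were 40, because a price decline on existing holdings offsets part of the transaction flow.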
Please see my earlier post on Morris Copeland and Flow of Funds Accounts
A Brief History of Macro-Economic Modeling, Forecasting, and Policy Analysis
From A History of Macroeconomics from Keynes to Lucas and Beyond
From Modern Macroeconomic Models as Tools for Economic Policy
I believe that during the last financial crisis, macroeconomists (and I include myself among them) failed the country, and indeed the world. In September 2008, central bankers were in desperate need of a playbook that offered a systematic plan of attack to deal with fast- evolving circumstances. Macroeconomics should have been able to provide that playbook. It could not. Of course, from a longer view, macroeconomists let policymakers down much earlier, because they did not provide policymakers with rules to avoid the circumstances that led to the global financial meltdown.
Because of this failure, macroeconomics and its practitioners have received a great deal of pointed criticism both during and after the crisis. Some of this criticism has come from policymakers and the media, but much has come from other economists. Of course, macroeconomists have responded with considerable vigor, but the overall debate inevitably leads the general public to wonder: What is the value and applicability of macroeconomics as currently practiced?
There have been several criticisms of Mainstream Economic Modeling from economists such as
Willem H Buiter
Lavoie and Godley
Issues with Neoclassical Models
No role of Money, Credit and Finance
Lack of Interaction between Real and Financial sectors
Lack of Aggregate Demand
Rational Expectations and others.
Orthodox and Heterodox Modeling
Input Output Equations Models – Inter Industry Analysis
CGE and DSGE Models
VAR (Vector Autoregression) Models
Stock-Flow Consistent Models
System Dynamics models
VAR after Lucas Critique
DSGE (Dynamic Stochastic General Equilibrium Models)
DSGE – VAR
From HISTORY OF MACROECONOMETRIC MODELLING: LESSONS FROM PAST EXPERIENCE
The origin of macroeconometric modelling dates back to after World War II when Marschak organised a special team at the Cowles Commission by inviting luminaries such as Tjalling Koopmans, Kenneth Arrow, Trygve Haavelmo, T.W. Anderson, Lawrence Klein, G. Debreu, Leonid Hurwicz, Harry Markowitz, and Franco Modigliani (Diebold, 1998).
An interesting feature of macro modelling in this group was that there were three divisions to undertake the modelling procedures: first, economic theory or model specification; second, statistical inference (including model estimation, diagnostic tests and applications); and third, model construction which was dealing with data preparation and computations. The use of a team approach in macroeconometric modelling has been regarded as both cause and effect of large scale macroeconometric modelling (Intriligator, Bodkin and Hsiao, 1996).
Klein joined this team and conducted his first attempt in the mid-1940s to build a MEM for the US economy. See Klein (1983), Bodkin, Klein and Marwah (1991) and Intriligator, Bodkin and Hsiao (1996) for discussions of the MEMs which have been constructed for developed countries such as
the Klein interwar model,
the Klein-Goldberger model,
the Wharton model,
the DRI (Data Resources, Inc.) model,
the CANDIDE model,
the Brookings model, etc.
History of Early Models
A. Klein Interwar Model
Developed in the late 1940s.
B. Klein-Goldberger Model
Developed at the University of Michigan in the 1950s. Annual forecasts.
C. BEA Model
Developed by L. Klein. Quarterly. Operational in 1961. Transferred to BEA. Eventually became the BEA model.
D. Wharton Model
WHAR – III, with Anticipations
WHAR – MARK III
WEFA, Project LINK
Wharton models were operated continuously until 2001. DRI and WEFA merged to form Global Insight, Inc.
E. DRI Model
Built in 1969 at Data Resources, Inc. by Eckstein, Fromm, and Duesenberry.
F. Brookings Model
Developed by L. Klein and J.S. Duesenberry. Quarterly.
G. MPS Model
H. The Hickman – Coen Model
Developed by Hickman and Coen for long-term forecasting.
I. FAIR model
Developed by Ray Fair at Princeton. Now at Yale. Available for free.
J. The St. Louis Model
Developed by the Federal Reserve Bank of St. Louis.
K. Michigan MQEM Model
Quarterly. DHL III
L. The Liu-HWA Model
Developed in 1970s. Monthly.
M. WEFA -DRI/ Global Insight Model
Developed after the merger of WEFA and DRI in 2001.
N. Michigan MQEM/RSQE Model
Developed and extended in the 1990s. Replaced by Hymans's RSQE model.
O. Current Quarterly Model
A collaboration between L. Klein and Global Insight. Klein died in 2013.
P. CANDIDE Model
Model developed for Canada
From Economic Theory, Model Size, and Model Purpose
From HISTORY OF MACROECONOMETRIC MODELLING: LESSONS FROM PAST EXPERIENCE
A Macro Econometric Model (MEM) is a set of behavioural equations, as well as institutional and definitional relationships representing the main behaviours of economic agents and the operations of an economy. The equations, or behavioural relations, can be empirically validated to capture the structure of a macroeconomy, and can then be used to simulate the effects of policy changes.
Macroeconometric modelling is multi-dimensional and both a science and an art. Bautista (1988) and Capros, Karadeloglou and Mentzas (1990) have classified macroeconomic models into broad groups: MEMs and CGE (computable general equilibrium) models.
Further, according to Challen and Hagger (1983, pp.2-22) there are five varieties of MEMs in the literature:
the KK (Keynes- Klein) model,
the PB (Phillips-Bergstrom) model,
the WJ (Walras-Johansen) model,
the WL (Walras-Leontief) model,
the MS (Muth-Sargent) model.
The KK model is mainly used by model builders in developing countries to explain the Keynesian demand-oriented model of macroeconomic fluctuations. They deal with the problems of short-run instability of output and employment using mainly stabilisation policies. The basic Keynesian model has been criticised as it does not consider the supply side and the incorporation of production relations. Furthermore, this modelling approach does not adequately capture the role of the money market, relative prices and expectations. As a response to the shortcomings associated with the KK model, the St Louis model was constructed by the monetarist critics (Anderson and Carlson, 1970) in order to highlight the undeniable impacts of money on the real variables in the economy.
The second type of MEM, the PB, emerged in the literature when Phillips (1954, 1957) used both the Keynesian and the Neoclassical theories within a dynamic and continuous time model to analyse stabilisation policy. Although the PB model is also a demand-oriented model, differential or difference equations are used to estimate its stochastic structural parameters. In essence, the steady state and asymptotic properties of models are thus examined in a continuous time framework. One should note that this modelling method in practice becomes onerous to implement especially for large scale models.
The third type of MEM, the WJ, can be referred to as a multi-sector model in which the economy is disaggregated into various interdependent markets, each reaching an equilibrium state by the profit maximising behaviour of producers and utility maximising actions of consumers in competitive markets. Similar to an input-output (IO) approach, different sectors in the WJ model are linked together via their purchases and sales from, and to, each other. However, it is different from an IO model as it is highly non-linear and uses logarithmic differentiation.
The fourth type of MEMs, known as the WL model, has been widely considered as the more relevant MEM for developing countries (Challen and Hagger, 1983). The WL model incorporates an IO table into the Walrasian general equilibrium system, enabling analysts to obtain the sectoral output, value added or employment given the values of the sectoral or aggregate final demand components.
Finally, the foundations of the MS model are based on the evolution of the theory of rational expectations. The MS model is similar to the KK model in that they both are dynamic, non-linear, stochastic and discrete. But in this model the formation of expectations is no longer a function of previous values of dependent variables. The forward looking expectation variables can be obtained only through solving the complete model. The New Classical School demonstrated the role of the supply side and expectations in a MEM with the aim of highlighting the inadequacy of demand management policies. To this end, Sargent (1976) formulated forward-looking variants of this model which suggest no trade-off between inflation and unemployment in the short term, which is in sharp contrast to both the Keynesian and Monetarist modelling perspectives.
It is noteworthy that the subsequent advances in the WJ and WL models led to the formulation of CGE modelling, which is categorised here as the second type of macroeconomic model. The Neoclassical CGE models are based on the optimising behaviour of economic agents. The main objectives of CGE models are to conduct policy analysis on resource economics, international trade, efficient sectoral production and income distribution (Capros, Karadeloglou and Mentzas, 1990).
The 1960s witnessed the flowering of the large scale macroeconometric modelling. This decade saw the construction of the Brookings model, in which an input-output table was incorporated into the model. Adopting the team approach in modelling procedure in the 1970s, the majority of model builders aimed at the commercialisation of the comprehensive macro models, such as DRI, Wharton and Chase, by providing information to private enterprises. Modellers designed their models on the basis of quarterly or monthly data with the goal of keeping the models up-to-date, for commercial gain. As a consequence of taking such measures, model-builders became commercially successful (Fair, 1987). It is believed that in this era, the full-grown models “would contribute substantively to enlarging our understanding of economic processes and to solving real- world economic problems” (Sowey and Hargreaves, 1991: 600).
During the last three decades, MEMs have been internationalised via Project LINK, which was first operated at the University of Pennsylvania. In 1987, according to Bodkin (1988b), Project LINK consisted of 79 MEMs of individual countries or aggregations. In Project LINK the world is treated as a closed system of approximately 20,000 equations which “allow trade, capital flows, and possible exchange rate and other repercussions to influence systematically the individual national economies” (Bodkin, 1988b: 222).
From STRUCTURAL ECONOMETRIC MODELLING: METHODOLOGY AND TOOLS WITH APPLICATIONS UNDER EVIEWS
Since an early date in the twentieth century, economists have tried to produce mathematical tools which, applied to a given practical problem, formalized a given economic theory to produce a reliable numerical picture. The most natural application is of course to forecast the future, and indeed this goal was present from the first. But one can also consider learning the consequences of an unforeseen event, or measuring the efficiency of a change in the present policy, or even improving the understanding of a set of mechanisms too complex to be grasped by the human mind.
In the last decades, three kinds of tools of this type have emerged, which share the present modelling market.
The “VAR” models. They try to give the most reliable image of the near future, using a complex estimated structure of lagged elements, based essentially on statistical quality, although economic theory can be introduced, mostly through constraints on the specifications. The main use of this tool is to produce short term assessments.
The Computable General Equilibrium models. They use a detailed structure with a priori formulations and calibrated coefficients to solve a generally local problem, through the application of one or several optimizing behaviors. The issues typically addressed are optimizing resource allocations, or describing the consequences of trade agreements. The mechanisms described contain generally little dynamics.
This is no longer true for the Dynamic Stochastic General Equilibrium models, which dominate the current field. They include dynamic behaviors and take into account the uncertainty in economic evolutions. Compared to the traditional models (see later) they formalize explicitly the optimizing equilibria, based on the aggregated behavior of individual agents. This means that they allow agents to adapt their behavior to changes in the rules governing the behaviors of others, including the State, in principle escaping the Lucas critique. As the model does not rely on traditional estimated equations, calibration is required for most parameters.
The “structural” models. They start from a given economic framework, defining the behaviors of the individual agents according to some globally consistent economic theory. They use the available data to associate to these behaviors reliable formulas, which are linked by identities guaranteeing the consistency of the whole set. These models can be placed halfway between the two above categories: they do rely on statistics, and also on theory. To accept a formula, it must respect both types of criteria.
The use of this last kind of model, which occupied the whole field at the beginning, is now restricted to policy analysis and medium-term forecasting. For the latter, they show huge advantages: the full theoretical formulations provide a clear and understandable picture, including the measurement of individual influences. They also allow the introduction of stability constraints leading to identified long-term equilibria, and the separation of this equilibrium from the dynamic fluctuations which lead to it.
Compared to CGEs and DSGEs, optimization behaviors are present (as we shall see later) and introduced in the estimated equations. But they are frozen there, in a state associated with a period, and the behavior of other agents at the time. If these conditions do not change, the statistical validation is an important advantage. But sensitivity to shocks is flawed, in a way which is difficult to measure.
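As a toy illustration of the VAR approach described above, a two-variable VAR(1) can be estimated by ordinary least squares on simulated data. The lag matrix, sample size, and shock variance are arbitrary assumptions; a real application would add lag selection, inference, and identification:

```python
import numpy as np

# Simulate a stationary two-variable VAR(1): y_t = A y_{t-1} + e_t
rng = np.random.default_rng(1)
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.4]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.1, 2)

# Estimate each equation by OLS on a constant and one lag of each variable
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # B is 3 x 2
A_hat = B[1:].T                                  # estimated lag matrix

# One-step-ahead forecast from the last observation
forecast = B[0] + A_hat @ y[-1]
print(np.round(A_hat, 2))
```

The "complex estimated structure of lagged elements" is here just the matrix A_hat; adding more lags or imposing theoretical restrictions on its entries gives the constrained variants the text mentions.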
From Macroeconomic Modeling in the Policy Process: A Review of Tools Used at the Federal Reserve Board and Their Relation to Ongoing Research
USA Central Bank Models
A. FRB Models (Neo Classical)
MPS (MIT-PENN-FRB)
FRB/US (since 1996)
B. FRB/NY DSGE Model
C. FRB/Chicago DSGE Model
D. FRB/Philadelphia DSGE Model – PRISM
Newer Central Bank Models
From Macroeconomic Models for Monetary Policies: A Critical Review from a Finance Perspective
There has been a remarkable evolution of macroeconomic models used for monetary policy at major central banks around the world, in aspects such as model formulation, solution methods, estimation approaches, and importantly, communication of results between central banks. Central banks have developed many different classes and variants of macroeconomic models in the hopes of producing a reliable and comprehensive analysis of monetary policy. Early types of models included quantitative macroeconomic models, reduced-form statistical models, structural vector autoregressive models, and large-scale macroeconometric models, a hybrid form combining the long-run structural relationships implied by a partial equilibrium treatment of theory (e.g., the decision rule for aggregate consumption) and reduced-form short-run relationships employing error-correcting equations.
Over the past 20 years in particular, there have been significant advances in the specification and estimation of New Keynesian Dynamic Stochastic General Equilibrium (New Keynesian DSGE) models. Significant progress has been made to advance policymaking models from the older static and qualitative New Keynesian style of modeling to the New Keynesian DSGE paradigm. The New Keynesian DSGE model is designed to capture real world data within a tightly structured and self-consistent macroeconomic model. The New Keynesian DSGE model has explicit theoretical foundations, allowing it to circumvent the Sims Critique (see Sims, 1980) and the Lucas Critique (see Lucas, 1976), and therefore it can provide more reliable monetary policy analysis than earlier models. A consensus baseline New Keynesian DSGE model has emerged, one that is heavily influenced by estimated impulse response functions based on Structural Vector Autoregression (SVAR) models. In particular, a baseline New Keynesian DSGE model has recently been shown by Christiano et al. (2005) to successfully account for the effects of a monetary policy shock with nominal and real rigidities. Similarly, Smets and Wouters (2003, 2007) show that a baseline New Keynesian DSGE model can track and forecast time series as well as, if not better than, a Bayesian vector autoregressive (BVAR) model. New Keynesian DSGE models have been developed at many central banks, becoming a crucial part of many of their core models. Sbordone et al. (2010) have emphasized that an advantage of New Keynesian DSGE models is that they share core assumptions about the behavior of agents, making them scalable to relevant details to address the policy question at hand. For example, Smets and Wouters (2007) introduced wage stickiness and investment frictions into their model, Gertler et al. (2008) incorporated labor market search and wage bargaining, and Bernanke et al. (1999), Chari et al. (1995) and Christiano et al. (2008) studied the interaction between the financial sector and macroeconomic activity.
The devastating aftermath of the financial crisis and the Great Recession has prompted a rethink of monetary policy and central banking. Central bank monetary policy models face new challenges. Many macroeconomists (and in fact, many of the world’s leading thinkers) have called for a new generation of DSGE models. The first and foremost critique of the current state of the art of New Keynesian DSGE models is that these models lack an appropriate financial sector with a realistic interbank market, and as a result, the models fail to fully account for an important source of aggregate fluctuations, such as systemic risk. Second, the linkage between the endogenous risk premium and macroeconomic activity is crucial for policymakers to understand the transmission mechanism of monetary policy, especially in financially stressed periods. In models that lack a coherent endogenous risk premium, policy experiments become unreliable in stressed periods, and the model cannot provide a consistent framework for conducting experimental stress tests regarding financial stability or macroprudential policy. Third, heterogeneity among the players in the economy is essential to our understanding of inefficient allocations and flows between agents. These inefficiencies have an extremely important effect on the equilibrium state of the economy. Without reasonable heterogeneity among agents in models, there is no way to infer the distributional effects of monetary policy.
Finally, and perhaps most importantly in terms of government policy, a new generation of models is in strong demand to provide policymakers with a unified and coherent framework for both conventional and unconventional monetary policies. For example, at the onset of the financial crisis, the zero lower bound went from a remote possibility to reality with frightening speed. This led central banks to quickly develop unconventional measures to provide stimulus, including credit easing, quantitative easing and extraordinary forward guidance. These unconventional measures demanded a proper platform on which to be analyzed. Furthermore, these unconventional monetary policies have blurred the boundary between monetary policy and fiscal policy. Through these policies, central banks gave preference to some debtors over others (e.g. industrial companies, mortgage banks, governments), and some sectors over others (e.g. export versus domestic). In turn, the distributional effects of monetary policy were much stronger than in normal times. As a result, these measures are sometimes referred to as quasi-fiscal policy. As Sims emphasized, a reliable monetary policy experiment cannot ignore the effect of ongoing fiscal policy. In order to implement unconventional measures during the crisis, central banks put much more risk onto government balance sheets than ever before, which had the potential to lead to substantial losses. Thus the government balance sheet in these models should be forward-looking, and its risk characteristics are crucial to the success of the model.
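For readers unfamiliar with the baseline structure referred to above, the core of a New Keynesian DSGE model is often summarized, after log-linearization, by three equations. The following is a textbook sketch in conventional notation, not the exact system of any paper cited here:

\begin{align}
x_t &= \mathbb{E}_t x_{t+1} - \frac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r_t^n\right) && \text{(dynamic IS curve)} \\
\pi_t &= \beta \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t && \text{(New Keynesian Phillips curve)} \\
i_t &= \phi_\pi \pi_t + \phi_x x_t + \varepsilon_t && \text{(monetary policy rule)}
\end{align}

Here $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the nominal interest rate, $r_t^n$ the natural real rate, and $\varepsilon_t$ a monetary policy shock. Richer models such as Christiano et al. (2005) or Smets and Wouters (2007) add further nominal and real rigidities (habit formation, sticky wages, investment adjustment costs) on top of this skeleton.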
Other Central Bank Models
From Macro-Econometric System Modelling (p. 75)
A fourth generation of models arose in the early 2000s. Representatives are ToTEM (Bank of Canada, Murchison and Rennison, 2006), MAS (the Modelling and Simulation model of the Central Bank of Chile, Medina and Soto, 2005), GEM (the Global Economic Model of the IMF, Laxton and Pesenti, 2003), BEQM (the Bank of England Quarterly Model, Harrison et al, 2004), NEMO (the Norwegian Economic Model at Norges Bank, Brubakk et al, 2006), NAWM (the New Area-Wide Model at the European Central Bank, Christoffel et al, 2008), RAMSES at the Sveriges Riksbank (Adolfson et al, 2007), AINO at the Bank of Finland (Kuismanen et al, 2003), SIGMA at the U.S. Federal Reserve (Erceg et al, 2006), and KITT (Kiwi Inflation Targeting Technology) at the Reserve Bank of New Zealand (Beneš et al, 2009).
From Macroeconomic Models for Monetary Policies: A Critical Review from a Finance Perspective
the Bank of Canada (QPM, ToTEM),
the Bank of England (MTMM, BEQM),
the Central Bank of Chile (MAS),
the Central Reserve Bank of Peru (MEGA-D),
the European Central Bank (NAWM, CMR),
the Norges Bank (NEMO),
the Sveriges Riksbank (RAMSES),
the US Federal Reserve (SIGMA, EDO),
the Central Bank of Brazil,
the Central Bank of Spain,
the Reserve Bank of New Zealand,
the Bank of Finland,
and IMF (GEM, GFM and GIMF).
In particular, the Bank of Canada, the Bank of England, the Central Bank of Chile, the European Central Bank, the Norges Bank, the Sveriges Riksbank, and the U.S. Federal Reserve have incorporated New Keynesian DSGE models into their core models.
Other Institutions Models
USA CBO (Congressional Budget Office)
USA OMB (Office of Management and Budget)
USA Department of Energy – EIA Models
USA Bureau of Economic Analysis (BEA) Model
University of Michigan RSQE Model
FAIR US and MC Model at Yale University
Other Governmental Agencies Models
Private Sector Forecasting Models
The Conference Board
IHS Inc./Global Insight
Old Non-Governmental Models
DRI (Data Resources Inc.)
WEFA (Wharton Econometric Forecasting Associates)
Chase Econometrics
These eventually all merged into a single entity, IHS Inc.
In 1987 Wharton Econometric Forecasting Associates (WEFA) merged with Chase Econometrics, a competitor to both DRI and WEFA, and in 2001 DRI merged with WEFA to form Global Insight. In 2008 Global Insight was bought by IHS Inc., which thus inherited 50 years of experience and more than 200 full-time economists, country risk analysts, and consultants.
The following book is a good resource for lists of models used in various countries:
Macroeconometric Models by Władysław Welfe
System Dynamics Models
Stock Flow Consistent Models
Flow of Funds Models
Agent-based Computational Models
Network Economics Approaches
From Can Disequilibrium Macroeconomic Models Be Used to Anticipate Financial Instability? A Case Study
Two other approaches to modeling the macroeconomy are flow-of-funds models and stock-flow consistent (SFC) models, and a fourth is agent-based models. All of them trace unfolding processes rather than equilibrium snapshots, and are therefore evolutionary. SFC models also differ from DSGE models in that they aim to be financially complete (though obviously stylized) representations of the economy.
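As an illustration of what "stock-flow consistent" means in practice, the following is a minimal sketch in the spirit of the simplest SFC model (the "SIM" model of Godley and Lavoie). All parameter values and variable names are illustrative assumptions, not taken from the case study quoted above: the economy's only asset is government money, households save out of disposable income, and every flow in the model has a matching source and destination, so the stock of money held by households must equal the cumulative deficit of the government.

```python
# Minimal stock-flow consistent (SFC) sketch in the spirit of Godley and
# Lavoie's simplest "SIM" model: a pure service economy with households,
# firms, and a government that pays for spending G by issuing money.
# Parameter values below are illustrative assumptions, not calibrated.
theta = 0.2    # tax rate
alpha1 = 0.6   # propensity to consume out of disposable income
alpha2 = 0.4   # propensity to consume out of accumulated wealth
G = 20.0       # government spending per period

def simulate(periods=100):
    H = 0.0      # household wealth = stock of government money
    debt = 0.0   # cumulative government deficits
    path = []
    for _ in range(periods):
        # Solve the within-period system by fixed-point iteration.
        C = 0.0
        for _ in range(200):
            Y = C + G                                   # income identity
            C = alpha1 * (1 - theta) * Y + alpha2 * H   # consumption function
        Y = C + G          # recompute the identities at the converged C
        T = theta * Y      # taxes
        YD = Y - T         # disposable income
        H += YD - C        # household saving accumulates as money
        debt += G - T      # government deficit this period
        path.append((Y, H))
    return path, debt

path, debt = simulate()
Y_final, H_final = path[-1]
# Stock-flow consistency: the money held by households equals the debt
# issued by the government, and income converges to the steady state
# Y* = G / theta implied by the model's accounting.
print(round(Y_final, 2), round(H_final, 2), round(debt, 2))
```

The consistency check at the end is the point of the exercise: because every flow is booked on both sides, household wealth and government debt are forced to agree, period by period, rather than being tracked as unrelated series.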