In a world of low rates, what else can the RBA and central banks do?

From The Conversation.

The world still needs the central banks to bail us out of trouble but the impact of monetary policy is complicated in a world of zero or near-zero interest-rate policy (ZIRP) and negative interest-rate policy (NIRP).

Money presents us with three alternatives: we can spend it, save it or invest it. Most households and governments do the first; financial institutions take the third option; and virtually no one saves. Except Asia, obviously.

In 2008, spending and investment froze during the global financial crisis (GFC). This forced central banks and governments to ultimately adopt unorthodox and largely unprecedented strategies. Two tools were available to governments: fiscal stimulus and looser monetary policy. Most governments adopted a mix of both.

However, there are political and financial limits to fiscal policy, particularly as governments grew increasingly overextended during the GFC. Consequently, since 2008, monetary policy has largely displaced fiscal policy as the means of generating economic stimulus. Except in Sydney, at the Reserve Bank of Australia (RBA).

ZIRP it. ZIRP it good

The Bank of Japan (BoJ) was the first to adopt ZIRP, as it sought to deal with the aftershocks of the Heisei recession of the early 1990s. This was referred to as Japan’s “lost decade”, as it experienced stagnant growth, a condition still bedevilling the country today, despite the best efforts of Abenomics.

As the global financial crisis emerged throughout 2007–08, the US Federal Reserve, the European Central Bank (ECB) and the Bank of England sank hundreds of billions of their respective currencies into their foundering financial sectors. The People’s Bank of China injected massive liquidity into Chinese markets.

In Australia, the RBA slashed interest rates, with deep successive cuts in 2008–09. Looser monetary policy was matched by the Rudd government’s significant fiscal expansion to prevent the collapse of consumer spending.

The reason behind this fiscal pump priming, combined with the dramatic monetary measures, was clear: in late 2008, credit markets froze. Admittedly, there is much debate about how long and to what extent this occurred. However, the fear of contagion was so palpable that the interbank lending market experienced systemic dysfunction and, at the very least, credit rationing took place.

The problem for central banks is that they have relatively few monetary tools available to them. The traditional lever to prevent overheating is to exert monetary discipline by raising interest rates, thus increasing the cost of credit.

Conversely, under the crisis conditions of the GFC, the central banks slashed interest rates to encourage consumption. However, the US Federal Reserve, the Bank of Japan, the Bank of England and the European Central Bank reached their lower limits faster than the RBA, which never adopted ZIRP.

But that may be about to change. The RBA’s cash rate is at a historic low of 1.75%, and the bank may cut further as the Australian economy plateaus, a slowdown compounded by the uncertainty wrought by Brexit.

The new normal

Make no mistake: ZIRP and even perhaps NIRP are the new normal. Just ask Janet Yellen. When the Federal Reserve chair increased US interest rates by 0.25% in December 2015, the markets reacted savagely. It was the first Federal Reserve (Fed) rate rise since 2006.

US Federal Reserve chair Janet Yellen. JIM LO SCALZO/AAP

Fourteen months earlier, Yellen had tapered off the US’s third quantitative easing program (QE3), ending it on schedule in October 2014. Between 2008 and 2014, the Fed had purchased over US$4.5 trillion in government bonds and mortgage-backed securities in three rounds of QE, plus a fourth program, Operation Twist (2011–12).

The outcome was an avalanche of “free” money. Why “free”? Because, in the long run, the real cost of capital for commercial banks was zero, or less than zero.

The Fed was effectively printing money (although it’s more complex than that). The effects were clear: the US central bank was reflating the American economy, and by extension the global economy, by injecting massive amounts of liquidity into the system in an attempt to ameliorate the worst effects of the 2008–09 financial crisis.

US Fed moves this year

No one on the markets was surprised by the central bank’s December 2015 rate rise. The clear objective was to return some semblance of normality to global interest rates.

The problem is it didn’t work. The tapering-off of QE in late 2014 meant that the last sugar hits of stimulus were wearing off in 2015.

The Yellen rate rise, plus the clear intention of the Fed to incrementally drive rates higher, spooked the markets. In May this year, undeterred by gloomy US jobs figures, Yellen indicated that she would seek to raise US interest rates “gradually” and “over time” as US growth continued to improve. Her concern was that adherence to ZIRP would ultimately bite in the form of inflation.

Not anymore. Brexit has seen to that. It was one of the factors behind the Fed committee’s decision to keep interest rates on hold in mid-June.

ZIRP – or something approximating it – is becoming the “new normal” because cheap money has become structural; the global financial system is now structured around the persistence of low-cost credit. NIRP is thus the logical continuum of this downward interest rate spiral.

Negative interest rates

Until recently, most macroeconomic textbooks argued that zero was rock bottom for interest rates. The GFC shifted the goalposts.

This is where NIRP enters the picture: negative interest rates. How do they work? Typically, commercial banks will park their money in their accounts with the central bank, or in private markets, such as the London Interbank Offered Rate (LIBOR). Thus, their money never sleeps and earns interest 24/7, even when bank doors are shut.

But NIRP is different. Negative rates mean depositors pay for the privilege of having a bank hold their money, which means depositors can be better off holding cash than placing funds on deposit. Japan has experienced the results of NIRP first-hand.
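A toy calculation makes the depositor’s incentive concrete (the principal here is invented; the -0.1% rate is the Bank of Japan’s January 2016 policy rate):

```python
# Toy illustration of NIRP: what a negative deposit rate does to reserves
# parked at a central bank. The 100 million principal is invented; -0.1%
# is the Bank of Japan's policy rate announced in January 2016.

def balance_after(principal: float, annual_rate: float, years: int) -> float:
    """Compound a balance at the given annual rate (negative under NIRP)."""
    return principal * (1 + annual_rate) ** years

on_deposit = balance_after(100_000_000, -0.001, 1)  # parked under NIRP
in_vault = balance_after(100_000_000, 0.0, 1)       # held as cash instead

print(round(on_deposit))  # the depositor ends the year 100,000 worse off
print(round(in_vault))
```

Scaled across a banking system’s reserves, that small levy is what central banks hope will push funds into lending instead of hoarding.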

Bank of Japan (BOJ) Governor Haruhiko Kuroda decided to adopt negative interest rates. FRANCK ROBICHON/AAP

There is a method in this madness: the G7 central banks want commercial banks to lend, not to accumulate piles of cash. Consequently, the policy effect of both ZIRP and NIRP is to stimulate business and consumer lending in order to drive real economic activity. With piles of cash looking for investment placements, the shadow banking system of financial intermediaries may also drive enterprise investment.

However, ZIRP and NIRP are blunt instruments; the perverse outcomes of the stimulus programs of the US Fed, the Bank of Japan and the European Central Bank were artificially inflated stockmarkets and various sector bubbles (such as real estate, classic cars).

The combination of ZIRP and QE may have also created a “liquidity trap”. This means that central banks’ QE injections caused only a sugar rush and did not inflate prices, as one would normally expect from a significant expansion of the monetary base.

Instead, many developed countries have experienced multiple recessions and a prolonged period of deflation. In April this year, the Australian economy experienced deflation for the first time since the GFC, which compelled the RBA to make its most recent 0.25% cut in May 2016.

Yellen knows the global economy cannot retain ZIRP indefinitely. But, ironically, all of the central banks are caught in their own liquidity trap: unable to relinquish ZIRP for fear of market catastrophe; unwilling to abandon QE entirely as “the new normal” demands fresh injections of virtually cost-free credit.

A lack of interest

The Australian economy has done quite well by having interest rates above the OECD average, particularly since the GFC. This has encouraged significant foreign investment flows into Australia as global investors seek somewhere – anywhere – to park their cash as other safe-haven government bonds, such as those of the US, Japan and Germany, are in ZIRP or NIRP territory. It also doesn’t hurt that Australia’s major banks and government bonds are blue-chip-rated. And Australian sovereign bonds have excellent yields too.

If ZIRP is the new normal, that matters to the Reserve Bank of Australia. It also matters to all Australian home buyers, businesses, banks, pensioners, investors, students and credit card holders. Everyone, in other words.

ZIRP has created hordes of winners: mortgage interest rates are at historic lows. Property buyers who borrowed when rates were relatively high (at, say, 6-7%) are now paying less than 4%. Credit card rates are still astronomically high (20–21% or more), but balance transfer rates are zero. New issues of unsecured consumer credit (which is what a credit card is) carry a real capital cost of zero. This is virtually unprecedented.
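To put rough numbers on the mortgage point, the standard annuity repayment formula shows the gap between borrowing at 7% and paying less than 4% (the A$500,000, 30-year loan is my illustrative assumption, not a figure from the article):

```python
# Standard mortgage (annuity) repayment: principal * r / (1 - (1 + r)^-n),
# where r is the monthly rate and n the number of monthly repayments.
# Loan size and term are illustrative assumptions.

def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

at_7 = monthly_repayment(500_000, 0.07, 30)
at_4 = monthly_repayment(500_000, 0.04, 30)
print(f"{at_7:.0f} vs {at_4:.0f} per month")  # roughly 3327 vs 2387
```

On these assumptions the rate fall saves a borrower close to a thousand dollars a month.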

But ZIRP or near-ZIRP produces many losers as well. There is no incentive to save because rates are so low. Hoarding cash makes no sense.

Global surplus capacity reinforces deflation as both goods and commoditised services are cheap. Wage growth is moribund. Pension funds’ margins are smaller, thus expanding future liabilities and reducing the value of current superannuation yields.

In a world of ZIRP, is it any wonder that all of this cheap or (effectively) free cash has been stuffed into the global stock exchange and real estate markets, creating not only a double bubble, but double trouble?

The best things in life are free

QE is like heroin: the first hit is always free. The commercial banks got their first hit in 2008 and the prospect of going cold turkey sends them into paroxysms of fear.

The problem is that the dealers – the central banks – have started using their own product and are just as hopelessly addicted to both ZIRP and QE. To rudely cut off supply would destroy their own markets.

The RBA is not immune to the elixir of ZIRP. No central bank wants to assume responsibility for a recessionary economy; the RBA took enough heat for its monetary policy mismanagement of 1989-90, which induced the 1990s recession.

Unlike the Fed, the RBA is not about to fire up the printing presses and engage in rounds of QE, even if it runs out of tools and is compelled to adopt ZIRP. The RBA is too conservative to engage in such policy in any case.

But this conservatism has a direct impact upon federal government fiscal policy, irrespective of whether the LNP or the ALP is in power. From Rudd to Turnbull, Treasury has been forced to increase its borrowing time and time again, blowing out the forward fiscal projections year after year.

No government has delivered a surplus because it is no longer possible. The RBA is partly responsible for this because, rather than expanding its balance sheet via QE, it has forced Canberra to accumulate government debt of more than A$400 billion, which the overburdened Australian taxpayer will pay for.

Like most drug deals, this will not end well.

Author: Remy Davison, Jean Monnet Chair in Politics and Economics, Monash University

Can slower financial traders find a haven in a world of high-speed algorithms?

From The Conversation.

It sounds like a scene from “Jurassic World”: fast, agile predators pursue their slower, less nimble prey, as the latter flee for safer pastures. Yet this ecology framework turns out to be an apt analogy for today’s financial markets, in which ultra-fast traders vie for profits against less speedy counterparts.

In fact, the algorithmic traders (known variously as algos, bots and AIs) proliferating in financial markets may well be viewed as an invasive species that has upended the prevailing order in their shared habitat. A 2013 article asserts that the financial world has become a “techno-social” system in which human traders are shunted aside, unable to keep up with the bots interacting in a “new machine ecology beyond human response time.”

And in a rapidly evolving world of autonomous traders, past experience may not provide reliable assurance of safety and predictability. The hallmark of a flash crash is lack of an apparent triggering event, generating uncertainty that can further destabilize markets.

Is the regime of algorithmic traders making the financial world more dangerous? How can market innovation and regulations shape this habitat for better or worse? For policy makers, the pressing question is: how can we operate our markets so that they remain stable and efficient amid fundamental technological changes?

In my research on artificial intelligence and strategic reasoning, I’ve been exploring answers to these questions by modeling how the world of trading works.

‘Latency’ arms race

What makes this world especially different and unpredictable is the unprecedented speed at which trading bots can respond to information.

A slight edge translates into profit because of the way exchanges match orders. When new information arrives, the first trader to react is able to make money off of slower rivals, while any relative delay or latency of even a fraction of a millisecond can mean no trade and no profit.

This leads inevitably to a latency arms race in which the designers of trading algorithms adopt any available method to shave milliseconds or even microseconds – one millionth of a second – from response time.

Most exchanges and trading forums have catered to the high-frequency traders, providing premium access options and interface features that preserve or enhance the advantage of speed.

An exception is the alternative trading system IEX, featured in Michael Lewis’s Flash Boys and backed by institutional investors, which introduced a 350 microsecond delay on order submission to shield against high-speed bots. On June 17, the Securities and Exchange Commission (SEC) approved IEX’s application to operate as a public exchange – rather than only as a private trading platform – against strong opposition by high-frequency traders and competing exchanges.

Ending the latency race

But there is another way to neutralize small speed advantages: change the way markets time the matching of buy and sell orders.

Today’s typical market works by matching orders to buy and sell a stock or other asset on a continuous basis. For example, when a trader submits a request to buy a share of Apple at a specific price, the exchange matches it immediately if there is an offer from someone else to sell at the same price or less. This immediacy is what allows a trader able to react more swiftly to new information (say news about the latest iPhone) to profit off of slower rivals.

In a frequent call market, on the other hand, orders to buy and sell are matched at fixed intervals (such as once every second). So our Apple buyer with knowledge of the release of a big improvement in the iPhone wouldn’t be able to get a jump on rivals because her order wouldn’t transact immediately, giving time for others to “catch up.”
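The difference between the two matching rules can be sketched in a few lines of code (a toy model of my own, not any exchange’s actual matching engine):

```python
# Toy frequent call market: all orders received in one interval are matched
# together, competing on price rather than on arrival time.

def batch_auction(bids, asks):
    """Match limit orders collected over one interval.

    bids/asks are lists of limit prices; arrival order is irrelevant.
    Returns the (bid, ask) price pairs that trade.
    """
    bids = sorted(bids, reverse=True)  # most aggressive buyers first
    asks = sorted(asks)                # most aggressive sellers first
    trades = []
    for bid, ask in zip(bids, asks):
        if bid >= ask:                 # prices cross, so a trade occurs
            trades.append((bid, ask))
        else:
            break
    return trades

# Two buyers and two sellers arrive within the same interval. The 101 bid
# wins the 99 offer regardless of which order reached the exchange first,
# so a microsecond head start buys nothing.
print(batch_auction(bids=[100, 101], asks=[99, 102]))  # [(101, 99)]
```

In a continuous market, by contrast, each order is matched the instant it arrives, which is precisely what makes arrival time so valuable.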

By ensuring that speed no longer categorically prevails, the incentive for shaving milliseconds and microseconds is virtually eliminated. Orders within the interval compete instead based on price, leading to a more efficient overall set of trades.

Regulators have taken notice. New York Attorney General Eric Schneiderman has publicly endorsed the frequent call market – also known as a frequent batch auction – to even the playing field. And SEC Chair Mary Jo White said it could help counter problems with algorithmic trading.

At present, however, no stock exchange operates as a full-fledged frequent call market. One major hurdle to adoption is perception: the view that faster is always better.

Another problem that some have raised is that it would only be viable if all exchanges adopted the method simultaneously because otherwise traders would always pick the venue offering the most immediacy.

But is this true? Given the option of trading on either a continuous market or a frequent call market, which one would investors prefer? Or, in the terms of our ecology metaphor, would they flock to the new habitat operating in discrete time intervals or stay in the traditional continuous domains?

Predator and prey

To answer this question, in research conducted at the University of Michigan, Elaine Wah and I developed a model with two markets, one continuous and the other a frequent call market.

In this model, traders are either fast (think high-frequency) or slow (such as institutional and retail investors). Each trader can choose to buy and sell in one of the two markets and so will prefer to pick the one that offers the highest expected trading gains, taking all others’ behavior as given.

If all the agents are in one market, no individual can benefit by going to the other, as there is nobody to trade with. We therefore focused on market attraction, measured in terms of the prevalence of conditions that would make one trader want to switch.

Our results show that fast traders prefer the continuous market, where they can make the most money, but only when the slow traders are also there. In other words, the predators need their prey in order to be profitable, which means they have a pronounced tendency to follow the slow traders to whichever market they go.

Slow traders, on the other hand, can evade their pursuers by fleeing to the market with fewer fast traders. If the fast traders are prevalent in both markets, then slower ones tend to seek refuge in the frequent call market, which offers some protection from faster traders with better information, as well as generally higher trading gains.

A recent paper by Zhuoshu Li and Sanmay Das from Washington University also found, under quite different assumptions, a tendency for the frequent call market to attract traders away from continuous markets.

Lessons for exchanges

What both of these studies suggest is that we may not need a top-down mandate to transform financial markets from continuous to discrete-time trading. Simply making the option available in one or two exchanges may capture the population, as the haven for slow traders can attract both the prey and the predators in pursuit.

High-frequency traders have been relentless in their pursuit of lower latencies and faster access to market-moving information, but ultimately it’s the continuous markets that deserve blame for allowing this predator-prey dynamic to take shape.

Neutralizing the advantage of tiny speed improvements with something like a frequent call market offers a clear-cut solution. The introduction of such a market will provide an attractive haven for investors, and widespread adoption could eventually send the latency arms race the way of the dinosaurs.

Author: Michael Wellman, Professor of Computer Science & Engineering, University of Michigan

What a hung parliament could mean for super

From The Conversation.

The Australian superannuation system is already complex to navigate without the added uncertainty of looming changes. These changes may not even eventuate if the Coalition fails to gain the support it needs in parliament.

Superannuation reform was a key feature of the 2016-17 Budget handed down the week before the election was called. The Coalition, ALP and Greens all acknowledge in their policies that the current system of tax concessions favours the wealthy, who are able to contribute more to superannuation.

However, the three detailed policies take different paths to reform, and the final outcome may be determined by the balance of power in the new government.

Low income earners

Firstly, all are agreed that low income earners should not pay more tax on their superannuation than on their other earnings. The Low Income Superannuation Contribution is due to be repealed with effect from 30 June 2017: the Coalition and ALP propose to allow a tax credit of 15% that will effectively cancel the tax paid by the superannuation fund on contributions by low income earners.

The Greens propose a progressive tax on contributions, with a nil rate for low income earners, and would also allow a government co-contribution of 15% for people earning less than the tax free threshold of A$18,200.

Concessional (untaxed) contributions

The concessional contribution caps for the 2016-17 year are $30,000, or $35,000 for people over 50. The Coalition has proposed reducing the cap to $25,000, effective from 1 July 2017. The ALP and the Greens have not proposed any changes to these caps. However the Greens do propose a progressive tax rate on contributions based on a discount of 15% on the marginal tax rate on the contributor’s income.

Currently superannuation contributions are taxed at 30% instead of 15% to the extent that the contributor’s income, including superannuation contributions, exceeds $300,000. Both the ALP and the Coalition will reduce that threshold to $250,000. Under the Greens’ progressive tax scale this threshold would be reduced to $150,000.

Non-concessional (taxed) contributions

There is a clear difference between the Coalition and ALP policies.

Non-concessional contributions allow a person to contribute after tax funds into superannuation, which pays a flat rate of 15% tax on investment earnings, instead of the marginal rate of tax that would apply if invested personally. For 2016-17 a person can invest $180,000 pa or $540,000 over three years.

The Coalition proposes a lifetime cap of $500,000, including any contributions made after 2007. Where a person has already exceeded the cap there would be no penalty, but further contributions would not be permitted.
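As a toy illustration of how such a lifetime cap would operate (my own arithmetic, not Treasury’s draft rules):

```python
# Proposed lifetime non-concessional cap: contributions back to 2007 count.
# Exceeding the cap before the change attracts no penalty; it simply
# closes the door to further contributions.

LIFETIME_CAP = 500_000

def remaining_room(contributions_since_2007):
    return max(LIFETIME_CAP - sum(contributions_since_2007), 0)

print(remaining_room([180_000, 180_000]))  # 140000 of room left
print(remaining_room([540_000]))           # 0: over the cap, but no penalty
```

A person who had already used the full $540,000 three-year bring-forward would thus be locked out of further non-concessional contributions for life.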

The ALP has not made any proposals in relation to the cap, and has campaigned against including pre-budget contributions in the lifetime cap.

Tax on earnings of the superannuation fund

Currently superannuation fund earnings are taxed at 15% until the fund starts to pay a pension. From that time the fund pays no tax on income set aside to pay the pension. Both major parties propose to limit this exemption, but will use different mechanisms.

The Coalition’s Transfer Balance Cap proposal caps the value of the assets that can earn exempt income at $1.6 million. The balance over this cap can remain in a fund taxed at 15%.

The ALP will exempt income up to $75,000 pa, which represents a return of about 4.7% on $1.6 million.
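The arithmetic behind that 4.7% figure is easy to check, and it also shows why a flat-dollar exemption is less sensitive to the fund’s actual rate of return:

```python
cap = 1_600_000       # Coalition transfer balance cap
alp_exempt = 75_000   # ALP flat exempt-earnings threshold

implied_return = alp_exempt / cap
print(f"{implied_return:.1%}")  # 4.7%

# Under the Coalition approach the exempt amount floats with the return
# earned on assets under the cap; under the ALP approach it is fixed.
for rate in (0.03, 0.047, 0.06):
    print(rate, round(cap * rate), alp_exempt)
```

At a 6% return the Coalition cap would exempt about $96,000 of earnings; at 3%, only $48,000. The ALP exemption stays at $75,000 regardless.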

Although the outcomes are comparable, ultimately the ALP proposal gives more certainty in planning future income streams, as the exempt amount is less dependent on the rate of return achieved by the superannuation trustee.

Superannuation pension streams

There have been no proposals to remove the tax exemption for pensions received by a person over 60, or to change the tax concessions on lump sum withdrawals.

However the Coalition has proposed a change to the tax concessions on transition to retirement schemes. Currently such arrangements are built on the income of the superannuation fund becoming exempt from tax: the Coalition has proposed removing this exemption in the fund, which will reduce, but not eliminate, the tax benefit. The ALP has no policy in this area, but has campaigned against the proposal.

Prospects for reform

It’s highly likely that the changes to the lifetime cap will proceed, with the support of both parties. The extension of a 15% credit to the superannuation account of low income earners will also proceed, regardless of who forms government. Both of these measures are likely to be supported by the Greens and Nick Xenophon Team as well as the two major parties.

The other budget proposals must pass two hurdles. Assuming that the Coalition does gain enough seats to form government, the election campaign highlighted division among the conservatives in respect of the lifetime cap on non-concessional contributions, the cap of $1.6 million in pension assets and the changes to transition to retirement pensions.

If these measures are presented to the parliament, the position of the ALP is not clear. Although the published policy says no measures other than the lifetime cap change and the cap on exempt earnings will be introduced, there have been suggestions that the ALP will consider some of the other measures.

It’s unlikely that the ALP will support measures that it regards as retrospectively changing the rules. However, it would not be unusual to backdate any new limits to budget night, when investors were given notice of the proposals.

Without the support of the ALP, the Coalition would have to convince the Senate crossbenchers of the merits of the reforms.

Author: Helen Hodgson, Associate Professor, Curtin Law School and Curtin Business School, Curtin University

Companies may be misleading investors by not openly assessing the true value of assets

From The Conversation.

Some companies are taking years to recognise asset impairments, and may be misleading investors who are not privy to the valuation decisions. Research shows this is because managers of many firms think or hope that assets are not overvalued.

This occurs when companies either fail to recognise asset impairments or delay their recognition. These asset impairments represent a downward adjustment in the value of assets to what is called the “recoverable amount”. This is determined by either the value the asset could be sold for, or its value to the business right now.
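That test can be sketched as code (the logic follows the standard’s definition of recoverable amount; the dollar figures are invented):

```python
# Impairment test: recoverable amount is the higher of what the asset could
# be sold for and its value in use to the business. A write-down is needed
# only when the carrying (book) amount exceeds the recoverable amount.

def impairment_loss(carrying_amount, sale_value, value_in_use):
    recoverable = max(sale_value, value_in_use)
    return max(carrying_amount - recoverable, 0)

print(impairment_loss(1_000, 650, 700))    # 300: write the asset down
print(impairment_loss(1_000, 950, 1_100))  # 0: no impairment required
```

The judgement calls the article goes on to discuss sit inside the inputs: estimating sale value and, especially, value in use.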

One example of this process of recognising asset impairments can be seen in Nine Entertainment Corporation Ltd in 2015. Through the first half of 2015 Nine’s share market value declined significantly, and by year end its book value (the value of net assets on the balance sheet) would have exceeded the firm’s market value.

This was probably occurring as investors revised their estimates of future returns in response to changes in the television industry and increasing competition from pay television, internet-based television and other online media. These factors are indicators of declining asset values that are explicitly identified in the regulation, and their presence requires the firm to test for asset impairment.

Next, Nine would have determined the recoverable amount of the assets. The company would have had to estimate future returns and, while there are extensive guidelines on how this should be done, considerable judgement is still required. The end result in this case was an asset impairment of A$792 million that resulted in Nine reporting a loss for the year.

The Australian Securities and Investment Commission (ASIC) regularly reviews the financial reports of listed firms. Where necessary, it seeks their explanations for particular accounting treatments. Risk-based criteria are used to select which firms are reviewed and in some instances this leads to material changes in their reports.

The most recent review by the corporate regulator into end-of-year financial reports for 2015 found that the largest number of queries into accounting treatments (11 out of 24) related to the valuation of assets.

It is unlikely this is a consequence of poor regulation. The regulation sets out clear criteria, identifying the circumstances when asset impairment should be formally considered (i.e., where indicators of impairment exist) and the basis for calculating the amount of asset impairment.

In some cases determination of asset impairments should be straightforward. For example, where firms are unprofitable and the book value exceeds the market value of equity, the indicators of impairment are readily observable to all because they can be identified using “firm-level” information.

However, in other cases it is not so straightforward and determining whether impairments are necessary and calculating the recoverable amount is then much more difficult.

Asset impairments are required to be evaluated at the level of business units, or what the regulation refers to as “cash-generating units”, rather than at the firm level. Accordingly, while asset impairments may be necessary in some business units, the need for or amount of asset impairments may be obscured in firm-level information.

For example, Arrium is clearly experiencing financial problems and has made a number of asset impairments. But it is not all bad; some of its business units are profitable. Firm-level information may therefore mask the very poor performance of other business units. Hence, whether the need for asset impairment is obvious will depend on the relative size and number of poorly performing business units.

Significant judgement will be required in these cases. This includes defining business units and attributing assets to them. Only then can future returns be estimated, and this can never be done with certainty. Given the scope for error in exercising this judgement, the assumptions on which asset impairment decisions are based should arguably be made clear and disclosed.

Unfortunately, the people who use these financial statements, such as investors, are often kept in the dark because firms are only required to disclose the assumptions behind their judgements if an impairment is actually made. However, if these disclosures were always made, they would either support the asset values reported or confirm that asset impairments are really necessary.

In the absence of these disclosures, investors and other users of financial statements do not get important up-to-date information about future returns that would underpin share prices.

It’s time to amend the regulation and reveal the explanations for not recognising asset impairments. Whenever there are indicators that impairment is necessary, companies should be required to disclose their assumptions even if the decision is not to impair.

Doing this will highlight how asset impairments are being (or, more critically, not being) determined, and asset valuation will become more transparent.

Authors: Peter Wells, Professor, Accounting Discipline Group, University of Technology Sydney; Brett Govendir, Lecturer, University of Technology Sydney; Roman Lanis, Associate Professor, Accounting, University of Technology Sydney.


An uncertain election result may lead to stagnant financial markets

From The Conversation.

For the second time in the space of ten days, it appears that betting markets and pollsters have got it wrong. First, despite odds showing a 90% likelihood of “Remain” winning, the UK voted to “Leave” the European Union in its June 23 referendum.

Now, a mammoth federal election campaign has resulted in political stalemate in Australia, and the result will not be known until Tuesday at the earliest.

Clearly, the repercussions of a hung parliament are not as wide-ranging as “Brexit” and we are unlikely to see Canberra’s streets flooded with protesters. However, when Australian markets open on Monday they will still be faced with a high degree of political uncertainty. Investors do not tend to react favourably to such ambiguity.

Investors reduce risk under political uncertainty

Investors tend to respond in one of two ways. The most-common situation is for the political uncertainty to manifest in higher levels of market volatility and a flight to quality as investors try to reduce their exposure to risk.

This was what we witnessed post-Brexit: Australian stockmarkets and the dollar fell by more than 3%, while “safe” government bond yields hit an all-time low.

An alternative is for markets to become locked in stasis – where investors sit on their hands, unsure as to whether they should buy or sell. Market liquidity falls and asset prices become resistant to change.

This is effectively what happened following the hung parliament of August 2010. In the aftermath of that election, stock prices remained within a tight trading range and the dollar hardly budged over the course of the following week.

When the result of the 2016 election is finally known, it appears that the outcome will be either a minority Coalition government or a hung parliament. The Senate is likely to be more fractious than prior to the election.

Talk has already started about potential unrest among the conservative faction of the Liberal Party who supported former prime minister Tony Abbott. There is even discussion of an election re-run if the parliament proves ungovernable. Clearly, this uncertainty could linger for months.

Concerns for jobs and growth

The likelihood of a lengthy period of uncertainty is important. It means it will be difficult to pass any economic or budgetary reforms. Without such reforms, it is unlikely the budget will return to surplus in the near future (if ever) and it becomes more likely that the AAA credit rating will be lost.

This creates multiple concerns for Australian financial markets, and the broader economy. A credit rating downgrade will likely increase the cost of funding for Australia’s banks.

The Big Four banks will be particularly impacted given the significant role that offshore funding plays in their balance sheet management. This will mean higher interest rates for borrowers – which would not be beneficial for the housing market.

A prolonged period of uncertainty will make it difficult for firms to finalise investment decisions. At a time when the economy is still attempting to transition away from the boom in mining investment this will dent economic growth and employment. So much for “jobs and growth”.

Essentially, this is a recipe for a “risk-off” environment of declining stockmarkets and a depreciating Australian dollar. It is also likely that the market will price a higher likelihood of a reduction in the RBA target rate at the July or August meeting. This will further aid a continued rally in relatively safe government bonds (bond prices rise as yields fall).

If you consider the ongoing political uncertainty resulting from Brexit and the forthcoming US presidential election in addition to the federal election, then months of nervous markets may lie ahead.

Author: Lee Smales, Senior Lecturer, Finance, Curtin University

Black market jobs cost Australia billions and youth are at the coalface

From The Conversation.

Young people, job creation and taxation have all been at the centre of the federal election campaign; yet almost nothing has been said about one of the sleeper issues these have in common – the cash-in-hand economy.

Youth unemployment is typically twice the national unemployment rate. Millennials are finding it harder to secure full-time work after leaving university. Shockingly, Australians aged 15-24 are at the highest risk of hospitalisation following a workplace accident.

However, there is another risk young people face that we know surprisingly little about.

A rose by any other name?

“Cash-in-hand” is a familiar phrase in our economy. Like most shady dealings, it goes by many names: unreported employment, the informal economy, or a grey labour market. Whatever we call it, it is used to circumvent Australian workplace and taxation legislation.

This should not be confused with being paid in cash. For example, let’s say an employer wanted to reduce their expenditure on transaction fees. They could add up an employee’s hours, calculate wages for the week minus tax, superannuation and other deductions. The adjusted wages could then be paid straight from the till, accompanied by a payslip.

The telltale signs of a “cash-in-hand” job are a lack of formal employment paperwork, such as signed contracts, weekly payslips or a group certificate at tax time.

There are obvious downsides. These jobs are unlikely to pay the correct minimum wage, penalty rates or super contributions. A greater concern is that these jobs aren’t covered by workers’ compensation. Considering the previously mentioned risk of hospitalisation, cash-in-hand jobs become a serious concern.

Who, what and why?

The most concerning aspect is that so little data is being collected about these jobs.

A 2012 survey found that one in four young workers had recently done cash-in-hand work. While no concrete data exists on where these jobs are being offered, we can make some educated guesses.

The figure above was created by selecting the top five part-time jobs where the average age of employees was between 15 and 21. This gives us the most common industries for young Australians: fast food, hospitality and retail.

Figure 2: Participation in education and/or employment among young people aged 15 to 24, by age group, 2005 and 2014. Source: Australian Institute of Health and Welfare analysis of ABS 2015.

If we look at the orange portion of Figure 2, we can see that 29% of young Australians are combining work and study. This is especially relevant when we consider Student Visas, Youth Allowance and Austudy payments.

We know that approximately 899,000 young people are both working and studying. However, 229,900 are receiving study payments, at a maximum rate of $216.60 per week, with the ability to earn an additional $216.50. This puts the maximum weekly income at $433.10 – just over $30 above the poverty line. Let’s use some hypothetical examples, and say that “Julie” and “Ravi” are two of these student workers.

Julie is 18 and works casually at a local cafe in Brunswick while studying at the University of Melbourne. To maximise her earnings, she works 13 hours during the week at $16.60 an hour. This gives her $215.80 which, combined with her Austudy payments, makes a total of $432.40 per week.

She shares a three bedroom house in Brunswick and pays $200 a week in rent. Her average weekly expenses are $104 on food, $10 for her mobile phone, $19.50 on her public transport, and $34 a week on utilities. This leaves her $64 per week for other expenses.

Ravi is a 21-year-old international student. He is studying for his Masters at Deakin University and works at a supermarket in Burwood, near the house he and his brother share. His rent and expenses are comparable, but he cannot receive Austudy. His visa states that he can only work 20 hours a week, giving him a maximum income of $459.64 after tax. After accounting for expenditure, Ravi is a little better off, with $92.13 to cover other expenses.

Neither example accounts for business cycle/seasonal demands, parental income affecting payments, unexpected expenses, legal fees, health costs, or textbooks. Basic living costs account for 80-87% of their entire wage.
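The weekly arithmetic for Julie and Ravi can be checked directly. A minimal sketch, using only the dollar figures quoted above (the article quotes $92.13 for Ravi; the one-cent gap here is rounding, since his expenses are described only as “comparable”):

```python
# Weekly budget check for the two hypothetical students above.
# All dollar figures come from the article; nothing else is assumed.

def weekly_surplus(income, expenses):
    """Return what is left of weekly income after weekly expenses."""
    return round(income - sum(expenses.values()), 2)

expenses = {          # Julie's cost profile; Ravi's is described as comparable
    "rent": 200.00,
    "food": 104.00,
    "mobile": 10.00,
    "transport": 19.50,
    "utilities": 34.00,
}

julie_income = 215.80 + 216.60   # casual wages + Austudy
ravi_income = 459.64             # 20-hour visa cap, after tax (no Austudy)

print(weekly_surplus(julie_income, expenses))   # 64.9
print(weekly_surplus(ravi_income, expenses))    # 92.14
```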

If either student faces costs that can’t be met by their usual wages, they may consider “cash-in-hand” work the only viable alternative. Julie will still get her Centrelink payments, and Ravi won’t breach his visa requirements.

What we don’t know could hurt us

The risks of this informal economy extend well beyond young workers. Professor Christopher Bajada estimates that cash-in-hand jobs make up an informal economy equivalent to 15% of Australia’s GDP. Similarly, in 2004 the government estimated the informal economy at between 3% and 15% of GDP.

Even if we take the lowest estimate of 3% of GDP, that’s approximately A$48.6 billion sitting outside the formal economy. A comprehensive 2012 report produced by The Australia Institute estimates that a staggering $3.3 billion of revenue is being lost to cash-in-hand working arrangements. Given that taxation, debt and public spending have become key election battlegrounds, this lost revenue is potentially game-changing.
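Those headline figures hang together arithmetically. A quick sanity check (the implied GDP of roughly A$1.62 trillion is inferred from the quoted numbers, not stated in the article):

```python
# Back-of-envelope check of the informal-economy estimates quoted above.

gdp = 48.6e9 / 0.03                     # GDP implied by "3% is about A$48.6 billion"
print(round(gdp / 1e12, 2))             # 1.62 -- A$ trillion

# The quoted 3-15% range for the informal economy, in A$ billion:
for share in (0.03, 0.15):
    print(round(gdp * share / 1e9, 1))  # 48.6, then 243.0
```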

Author: Shirley Jackson, PhD Candidate in Political Economy, University of Melbourne

Why rents will rise under Labor’s negative gearing proposal

From The Conversation.

In the current housing tax debate, a number of studies have argued that while prices will fall (by varying amounts), rents will not be affected. That rents would be unaffected is surprising and, in my view, wrong.

Outside the heat of an election, the Henry Tax Review’s comprehensive review of the tax system argued for lower taxes on savings, a proposition that most economists would regard as unexceptional. (There is now a small school of thought arguing for higher taxes on savings, but this author for one does not subscribe to it.)

Specifically, the Henry Review recommended the marginal tax rates on interest and rental income should be 40% lower; for example, the 35% and 45% income tax rates on labour income would be lowered to 21% and 27%. For property investors these rates would also apply to capital gains and net losses, thereby reducing the value of negative gearing.
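The discount is straightforward to verify: a 40% reduction multiplies each marginal rate by 0.6.

```python
# Henry Review discount: each quoted marginal rate cut by 40%,
# i.e. multiplied by 0.6.
for rate in (0.35, 0.45):
    print(f"{rate:.0%} -> {rate * 0.6:.0%}")   # 35% -> 21%, 45% -> 27%
```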

For ‘ungeared’ investors (those who do not take on debt), the effective tax rate would be lower while for highly geared investors the effective tax rate would be higher, leading to less incentive to leverage (making the Reserve Bank of Australia happy). Overall, the effective tax rate for the “average” investor would be higher.

Now the Henry Review acknowledged that its proposed changes would, by lifting the user cost of capital of investors, lift rents. It therefore explicitly said that its proposed changes would need to be accompanied by measures to both lower the cost of housing by removing supply constraints, and to lift levels of rental assistance for households in the private rental market. In short, it did not see the increases in rents as immaterial.

If the increase in the user cost of capital under the Labor proposal (for investors who are ‘geared’, borrowing money to invest) is higher still – roughly double – on what basis could rents not rise? It is not evident to me.

The key component of the user cost of capital, and the one that varies most over time, is interest rates. When interest rates rise we expect prices to fall, and when they fall we expect prices to rise. But interest rates also change rents, since rent = user cost × value of house.

And what we also see is that a rise in interest rates causes the rent-price ratio (that is, the ratio of annual rent to home prices, also referred to as the rental yield) to rise, while a drop in interest rates will see it fall.
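The identity rent = user cost × value makes this mechanical. A stylised sketch with invented numbers: if rents are sticky in the short run, a fall in the user cost of capital must show up as higher prices, and the rent-price ratio falls with it.

```python
# Stylised illustration of rent = user cost x value of house.
# The rent level and user-cost rates below are invented for illustration.

annual_rent = 24_000                 # assumed sticky in the short run
for user_cost in (0.06, 0.04):       # interest-rate-driven fall in user cost
    implied_price = annual_rent / user_cost
    rent_price_ratio = annual_rent / implied_price
    print(round(implied_price), round(rent_price_ratio, 3))
    # 400000 0.06, then 600000 0.04 -- the yield falls as prices rise
```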

To illustrate, consider Melbourne over the period 1991-2014, when interest rates fell significantly and the rent-price ratio followed suit. This saw prices increase significantly (4.9% a year in real terms), and faster than the rise in costs (3.2%). In inner areas, where there is a significant location premium (over living at the urban fringe), the rise in prices has been fastest (5.8%) as the value of that location premium has been bid up.

That is, most of the change in the rent-price ratio has come from rising prices. On the other hand, in the outer areas, where there is no location premium and the value of a house is the structure plus the cost of land, prices (3.4% pa) have moved in line with costs (3.2% pa) but rents have risen much more slowly (1.4%). That is, rents explain the decline in rent-price ratios.

So, while the assumption of most commentators is that price movements do the work in changing rent-price ratios, and that is so over the short term, over a longer time span, rents do some of the adjustment.

Changes in interest rates are uncontroversial. But the same principles apply to changes in tax if they change the cost of capital, which is why the Henry Review expected rents to rise.

In the case of Labor’s negative gearing changes, the waters are muddied for some by its proposed exemption for new housing. A couple of points here. Firstly, ABS figures (see Table 8 of ABS 5671.0 – Lending Finance, Australia) are quoted to suggest that investors’ purchases are 93% established housing and only 7% new housing. This significantly understates the role of investors.

The NAB residential property survey has domestic investor purchases of new housing at about 20-30% – that is, domestic investors are already a significant component of the new market (adding to supply!).

Secondly, Henry also expected the mix of landlords to consolidate, from a large number of small landlords to a smaller number of large landlords. More marginal investors – middle-income/low-wealth investors – will be the first to vacate the field, as their entry point is typically cheaper, old stock, not premium new stock.

High income/low wealth investors will have the option of new dwellings. High income/high wealth individuals will benefit from the higher rents and lower prices on established dwellings.

That is, the ownership of the dwelling stock (and tax benefit!) will shift to the top end of income earners. But it is not clear that the special treatment of new housing will add materially, if at all, to supply of new dwellings.

In short, the law of unintended consequences will apply. Logic says that rents will rise, and with the 30% renting in the private market skewed to low income earners, that means housing affordability will have declined for these people.

Author: Nigel Stapledon, Andrew Roberts Fellow and Director Real Estate Research and Teaching Centre for Applied Economic Research, UNSW Australia

Charging for credit and debit card use may become the norm under new rules

From The Conversation.

New standards on how much businesses can surcharge their customers for credit or debit card purchases start in September. However, it’s not clear how the rules will be policed and whether this will lead to all businesses enforcing a surcharge, rather than just those who choose to.

The Reserve Bank of Australia (RBA) has revised the regulations, aiming to limit the amount merchants can surcharge customers for paying by credit or debit cards. The new rules will initially apply to large merchants, defined as those employing over 50 staff, as these businesses are seen to be overcharging the most.

Businesses have been able to add surcharges to these types of purchases in Australia since January 2003. This stems from earlier RBA regulatory intervention, which allowed merchants to surcharge in order to recover the costs of accepting card payments. The surcharges can be ad valorem (in proportion to the value of the transaction) or a fixed dollar amount.

A current example is that taxi fares paid through a Cabcharge terminal, whether by charge, credit or debit card, are surcharged at the same ad valorem level of 5% as a processing fee. Not all goods and services suppliers who accept card payments choose to impose surcharges on their customers, but a significant and seemingly ever-increasing proportion of them do.

Australian airlines are well known for their fixed dollar surcharges. Qantas charges a card payment fee (per passenger, per booking) of $2.50 for debit and $7.00 for credit on domestic flights, and $10 for debit and $30 for credit on international flights.

JetStar charges a booking and service fee (per passenger, per flight) of $8.50 domestic and up to $12.50 international, while Virgin charges a fee of $7.70 for payments made by credit or debit card. These examples of surcharging have caused much angst amongst consumers; the recent Financial System Inquiry received over 5,000 submissions to its final report complaining about surcharging, particularly by airlines.

But how will the new standards be enforced? In February, the Australian Competition and Consumer Commission (ACCC) was given the power to issue infringement notices worth just over $100,000 to listed corporations that charge their customers excess payment card surcharges.

These are defined as charges that exceed the costs of acceptance of payment cards. It remains to be seen if the size of these penalties deters merchants from excessive surcharging.

In May, the RBA published new standards on the average cost a merchant is permitted to pass on for accepting credit or debit cards. These apply to the following card schemes: EFTPOS; MasterCard credit and debit; Visa credit and debit; and American Express companion cards issued by Australian banks.

Under the new rules, the average cost of accepting a debit or credit card is defined as a percentage of the transaction value. This will vary by merchant, but it means that merchants will not be able to levy fixed dollar surcharges.

The permitted surcharge for an individual merchant will be based on an average of their card costs over a 12 month period. In the interests of transparency, the financial institution who processes each merchant’s transactions, will be required to provide regular statements of the average cost of acceptance for each of the card schemes.
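A sketch of how that averaging rule works in practice. All figures below are invented for illustration; actual cost-of-acceptance numbers will come from each merchant’s statements:

```python
# Hypothetical merchant: the permitted surcharge is the average cost of
# card acceptance over 12 months, as a percentage of card turnover.

monthly_card_costs = [1_150.0] * 12        # fees paid for card acceptance
monthly_card_turnover = [100_000.0] * 12   # value of card transactions

permitted_rate = sum(monthly_card_costs) / sum(monthly_card_turnover)
print(f"{permitted_rate:.2%}")             # 1.15% cap, ad valorem

# Applied per transaction -- no fixed-dollar surcharge is possible:
for amount in (20.0, 500.0):
    print(round(amount * permitted_rate, 2))   # 0.23, then 5.75
```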

This information will of course also be important for the ACCC in any cases where enforcement is required if merchants are surcharging excessively.

Now that surcharges are well defined by the RBA, the risk is that surcharging will become a normal extra charge like the GST – an unintended consequence of the new rules. And why should merchants be allowed to charge their customers a surcharge for such payments, which are surely just another cost of doing business, as utility bills and employee wages are?

The ACCC is currently finalising guidance material for consumers and merchants which will provide further information on the ACCC’s enforcement role and how consumers can make complaints if they believe that a surcharge is excessive.

Surcharges on card payments have certainly already provoked rage amongst consumers. The final question is: will the next iteration of surcharge standards make surcharging the norm?

Author: Steve Worthington, Adjunct Professor, Swinburne University of Technology

Don’t believe the Brexit prophecies of economic doom

From The Conversation.

The shock and horror at the Brexit vote has been loud and vociferous. Some seem to be revelling in the uncertainty that the referendum result has provoked. The pound falling in value, a downturn in markets – it lends credence to the establishment’s claims before the referendum that a Leave vote would lead to economic Armageddon.

But there are plenty of reasons to reject the consensus that Brexit will be costly to the UK’s economy. Even though markets appear stormy in the immediate aftermath of the vote, the financial market reaction to date has more characteristics of a seasonal storm than of a major catastrophe.

We were told that the consensus of economic experts was overwhelmingly opposed to Brexit. Lauded institutions – from the IMF and OECD to the Treasury and the London School of Economics – produced damning forecasts that ranged from economic hardship to total disaster if the UK left the EU. Yet 52% of the British electorate clearly rejected their warnings.

Something that my professional experience has taught me is that when an “accepted consensus” is presented as overwhelming, it is a good time to consider the opposite. Prime examples of this are the millennium bug, the internet stock frenzy, the housing bubble, Britain exiting the European exchange rate mechanism (ERM) and Britain not joining the euro. In each of these examples, the overwhelming establishment consensus of the time turned out to be wrong. I believe Brexit is a similar situation.

Downright dangerous

The economic models used to predict the harsh consequences of a Brexit are the tools of my profession’s trade. Used properly, they help us to better understand how systems work. In the wrong hands they are also downright dangerous. The collapse of the hedge fund Long-Term Capital Management in 1998 and the mispricing of mortgage-backed securities leading up to the 2008 financial crisis are just two of many examples of harmful consequences arising from the abuse of such models.

The output of these often highly sophisticated models depends entirely upon the competence and integrity of the user. With minuscule adjustment, they can be tweaked to support or contradict more or less any argument you want.

The barrage of dire economic forecasts that were delivered before the referendum were flawed for two main reasons. First, they failed to acknowledge the risks of remaining in the EU. And second, the independence of the forecasters is open to question.

Let’s start with the supposed independence of the forecasting institutions. While economists should in theory strive to be independent and objective, Luigi Zingales from the University of Chicago provides a compelling argument that, in reality, economists are just as susceptible to the influence of the institutions paying for their services as professionals in other fields, such as financial regulation.

Peer pressure

Another challenge faced by economists is presented by the nature of the subject matter. Economics is a social science which, at its heart, is about the psychology of human social interactions. Many models try to resolve the difficulties that human subjectivity causes by imposing assumptions of formal rationality on their models. But what is and is not rational is subjective. In further recognition of this difficulty the sub-discipline of behavioural economics has evolved.

Herding is a concept that has been used to rationalise financial market bubbles and various other behaviour. It describes situations in which it seems rational for individuals to follow the perceived consensus. Anyone who has found themselves in a position where the majority of their company has a radically different view to their own will have experienced the difficulty of standing out from the crowd.

In 2005-06, various people (including myself) presented the view that house prices would crash. While some audiences were sympathetic, the majority view at the time was both hostile and derisory. Challenging the received wisdom exposes you to feelings of isolation.

Received wisdom among academia has been that the EU is a force for good that should be defended at all costs. Respected colleagues are incredulous that anyone with their education and professional insights could think otherwise and remain part of the academic “in” crowd. In such an environment, it is very difficult to challenge this orthodoxy.

I – and the bulk of the UK population – might have been convinced by the pro-Remain economists if they had been a little more honest about the limitations of their models, and the risks of remaining inside the EU.

Market reactions

Despite reports of markets crashing following the Brexit result, when you put the current level of volatility in context of other shocks, market conditions are not as bad as they might seem. The FTSE 100 is still higher than it was barely two weeks ago and the more UK-focused FTSE 250 is currently higher than it was in late 2014. This is the kind of volatility that markets see two or three times a year.

The volatility index for the US S&P 500, known as the VIX or the “fear gauge”, is widely used to measure how uncertain global financial market participants are about the outlook for stocks. When the Brexit result was first announced, the VIX moved sharply, but has since settled in the mid-20s. To put this in context, the all-time average is 20.7, the all-time closing low is 8.5 and the all-time closing high, on Black Monday in 1987, was 150. More recently, during the financial crisis, it reached a closing high of 87.2 in November 2008.

VIX volatility chart. CBOE

Other financial indicators also moved rapidly as the referendum results came through. On the face of it, the Japanese market suffered a severe shock, falling almost 8%. However, the 8% fall in the Japanese stock market is almost exactly matched by an 8% gain in the Japanese yen relative to the pound. Therefore, the net effect for UK-based investors in Japanese equities is close to zero.
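The offset is multiplicative rather than exactly additive, which is why the net effect is close to zero rather than zero:

```python
# GBP return on Japanese equities = (1 + equity return) x (1 + currency
# return) - 1. With the article's -8% and +8% moves:

equity_return = -0.08     # fall in the Japanese market, in yen terms
fx_return = 0.08          # yen's gain against the pound

net = (1 + equity_return) * (1 + fx_return) - 1
print(round(net, 4))      # -0.0064, i.e. about -0.6% -- close to zero
```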

The fall in the value of the pound following the Brexit result is also not as bad as it may first appear. The size of the fall was exacerbated by the previous day’s assumption that Remain would win. There is also precedent for a dramatic fall – after the ERM crisis – which proved beneficial for many British exporting companies and arguably helped sustain the economic recovery of the 1990s.

A lower pound benefits companies that add most of the value to their products inside the UK, and companies that sell their produce on international markets. This includes exporters like pharmaceutical company GlaxoSmithKline, drinks company Diageo and technology company ARM – all of which saw stock price gains on the morning after the vote. Companies that rely on imports and add little value within the UK will be hardest hit in the short term as they adapt to the exchange rate volatility.

There will undoubtedly be winners and losers from the UK’s decision to leave the EU. But indexes for volatility are already lower than they were in February this year, suggesting that markets are not abnormally worried about the outlook, and UK government borrowing costs are at an all-time low. This is further reason to reject the pre-referendum consensus that Brexit would bring economic doom.

Author: Isaac Tabner, Senior Lecturer in Finance, University of Stirling

What’s wrong with the web and do we need to fix it?

From The Conversation.

More than 20 years after the first web server started bringing the internet into our lives, a recent conference in San Francisco brought together some of its creators to discuss its future.

The general tone of the conference is probably best summed up by the Electronic Frontier Foundation’s Cory Doctorow:

In the last twenty years, we’ve managed to nearly ruin one of the most functional distributed systems ever created: today’s Web.

This might seem like a surprising statement. To many of us, the web has become an indispensable part of modern life. It’s the portal through which we get news and entertainment, stay in touch with family and friends, and gain ready access to more information than any human being has ever had. The web today is probably more useful and accessible to more people than it has ever been.

Yet for people such as Sir Tim Berners-Lee, the inventor of the world wide web, and Vinton Cerf, who is often referred to as one of the “fathers of the internet”, Doctorow’s comment cuts right to the heart of the problem. The internet has not evolved in the way they had envisioned.

The centralised web

Their main concern is that the internet – and the information on it – has become increasingly centralised and controlled.

In the early days of the web, people who wanted to publish online would run their own web servers on their own computers. This required a reasonably good understanding of the technology, but meant that information was distributed across the internet.

The photo-sharing website Flickr. Screenshot

As the web grew, companies that took the technical hurdles out of web publishing were established. With Flickr, for example, a photographer can easily upload his or her photos to the internet and share them with other people.

YouTube did the same thing for video, while tools such as WordPress made it easy for anyone to write blogs.

Social media in particular has made it easy for everyone to get online. The period in which these services really took off is generally referred to as web 2.0.

But along with this development of easy-to-use publishing technologies came a centralisation of the internet, and with that, the loss of some of the internet’s potential.

The decentralised web

Proponents of the decentralised web argue that there are three main problems with the web today: openness and accessibility; censorship and privacy; and archiving of information.

Openness and accessibility refers to the tendency of centralisation to lock people into a particular service. So, for example, if you use Apple’s iCloud to store your photos, it’s difficult to give someone access to those photos if they have a Microsoft OneDrive account, because the accounts don’t talk to each other.

The second issue – censorship and privacy – is a deep concern for people like Doctorow and Berners-Lee. Centralised web services make it relatively easy for internet use to be monitored by governments or companies. For example, social media companies make money by trading on the value of personal information.

As we use social media, fitness trackers and health apps to document our lives, we generate a lot of personal data. We freely give this personal data to social media companies by agreeing to their terms of service when we create our accounts.

The third issue with today’s web is that it is ephemeral; information changes and websites go offline all the time, and very little is retained or archived. Vinton Cerf has referred to this as the “digital dark age” because when historians look back at this point in history, much of the material on the internet won’t exist anymore – there will be no historical record.

A good example of this loss of history occurred when GeoCities, which hosted millions of web pages created by individuals, was first bought out by Yahoo and then discontinued.

The technologies to support a more decentralised web are already being developed, and are based upon some you are probably already familiar with.

One of the key technologies to support a decentralised web is peer-to-peer networking (or more simply, P2P). You might be familiar with this concept already, as it’s the technology behind BitTorrent – the software used by millions of Australians to illegally download new episodes of Game of Thrones.

On P2P networks, information is distributed across thousands or millions of computers rather than residing on a single server. Because the contents of the files or website are distributed and decentralised, it’s much more difficult to take the site offline unless you can locate and remove every copy of the files.

It also means that information uploaded to these networks can be retained, creating archives of old information. There are already organisations, such as MaidSafe and FreeNet, that are creating these P2P networks.
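The core idea these networks share – naming data by the hash of its contents so any peer can verify what it receives – can be sketched in a few lines. This is a toy illustration of content addressing, not the actual MaidSafe or FreeNet protocol:

```python
import hashlib

def chunk_and_address(data: bytes, chunk_size: int = 4):
    """Split data into chunks, keyed by the SHA-256 hash of each chunk."""
    store = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

store = chunk_and_address(b"decentralised web")

# Any peer holding a chunk can prove it is genuine without trusting
# the sender: rehash the chunk and compare it with its address.
for digest, chunk in store.items():
    assert hashlib.sha256(chunk).hexdigest() == digest

print(len(store))   # 5 independently verifiable chunks
```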

Other technologies, such as encryption and something called blockchain, provide levels of security that make transactions on these networks extremely difficult to track, and very robust.

Together these technologies could protect the privacy of internet users and would make censorship very difficult to enforce. They could also allow people to pay creators securely for online content without the need for an intermediary.

For example, a musician could make a song available online and people could pay the artist directly to listen to it, without the need for a recording company or online music service.

But do we need it?

Perhaps the biggest question with decentralising the web is whether it is actually something most people want or value. While archiving some parts of the internet is clearly valuable, there is probably a lot on the internet that can safely be forgotten, and some things that should be.

The technology itself is a hurdle to adoption. Peer-to-peer and blockchain technology are clever, but they are also complex. If decentralised web technologies are going to be widely used, they need to be easy to install and operate.

This isn’t an insurmountable problem, though. In the early 1990s, installing the software to get the internet working on your computer required substantial technical knowledge. Today it’s simple, and that’s one of the main reasons the internet took off.

Beyond the technical challenges, there are other social concerns that are potentially more substantial. Recently Facebook’s live streaming facility has raised questions about the level of control that should be exercised over internet media.

At the end of the day, it may be that the decentralised web is ready for us, but we’re not yet ready for it.

Author: Sam Hinton, Assistant Professor in Web Design, University of Canberra