Blogroll Category: Current Affairs
I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 447 posts from the category 'Current Affairs.'
Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!
Various statistics that governments produce on a regular basis carry the label "seasonally adjusted". What does this label mean? According to popular thinking, the data observed over time (known as a time series) is determined by four factors:
- The trend factor
- The cyclical factor
- The seasonal factor
- The irregular factor
It is accepted that the trend determines the general direction of the data over time, while the cyclical factor causes movements that are related to the business cycle. The influence of seasons like winter, spring, summer and autumn and various holidays is conveyed by the seasonal factor. The irregular factor depicts the effect of various irregular events. It is held that the interplay of these four factors generates the final data.
Popular thinking regards the cyclical influence as the most important part of the data. It is held that isolating this influence would enable analysts to unravel the mystery of the business cycle. Moreover, to pre-empt the negative effect of the business cycle on people's well-being, it is important to observe the cyclical influence over as short a time span as possible. As with any disease, the earlier it is detected, the better the chances of combating it. Once the central bank has identified the size of the cyclical influence, it could offset that influence by means of a suitable monetary policy, or so it is held.
According to various statistical studies, the monthly fluctuations of data are dominated by the influence of the seasonal factor.[1] As the time span increases, the importance of the cyclical influence rises while the influence of the seasonal factor diminishes: the cyclical influence will be more powerful in quarterly data than in monthly data. The trend, it is assumed, exerts a strong influence on a yearly basis while having an insignificant effect on the monthly variations of the data. The irregular factor can be very "wild", but it is held that its effect is of short duration, with positive shocks offset by negative shocks.
It follows that in order to observe the influence of the business cycle on a short-term basis, all that is required is to remove the influence of the seasonal factor. The method of removal, however, must ensure that the cyclical influence is not affected in the process.
Most economists regard the seasonal effect as constant and hence known in advance. For example, every year people buy warm clothes before the arrival of the winter, not before the arrival of the summer. In addition, people follow a similar pattern of behaviour year after year before major holidays. For example, people tend to spend a larger fraction of their incomes before Christmas. The assumption that the seasonal influence is constant year after year means that its removal will not distort the influence of the cyclical factor. This in turn will permit an accurate assessment of the business cycle effect. By means of statistical methods, economists generate numbers for each month that supposedly provide an estimate of the seasonal effect. The data becomes seasonally adjusted once these computed numbers are subtracted from the raw data.
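The subtraction procedure just described can be sketched in a few lines of Python. This is a purely hypothetical illustration of the constant-seasonal assumption, not the method any statistical bureau actually uses; the function names, the toy data, and the December spike are all invented for the example.

```python
def seasonal_factors(series, period=12):
    """Estimate one fixed additive seasonal factor per calendar month:
    remove a least-squares linear trend, then average the detrended
    values month by month across years."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    sxy = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
    sxx = sum((i - x_mean) ** 2 for i in range(n))
    slope = sxy / sxx
    detrended = [y - (y_mean + slope * (i - x_mean))
                 for i, y in enumerate(series)]
    return [sum(detrended[m::period]) / len(detrended[m::period])
            for m in range(period)]

def seasonally_adjust(series, period=12):
    """Subtract each month's estimated seasonal factor from the raw data."""
    factors = seasonal_factors(series, period)
    return [y - factors[i % period] for i, y in enumerate(series)]

# Three years of synthetic monthly data: a rising trend plus a fixed
# seasonal pattern with a December spike (think pre-Christmas sales).
pattern = [0, -2, -1, 0, 1, 2, 3, 2, 1, 0, 5, 12]
raw = [100 + i + pattern[i % 12] for i in range(36)]
adjusted = seasonally_adjust(raw)
```

Under these assumptions the adjusted series loses its December jump and tracks the underlying trend, which is precisely why the method stands or falls with the assumption that the seasonal pattern really is the same every year.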
Despite the application of various statistical and mathematical methods, varying in degree of complexity, to extract the seasonal influence, the entire framework is based on arbitrary assumptions that have nothing to do with reality. If one were to accept that the data is the result of the interaction of the trend, cyclical, seasonal and irregular factors, then one would conclude that only these factors affect the data, irrespective of human volition. Regardless of human behaviour, it is these factors that determine what human beings are going to do, implying robotic behaviour.
However, human action is not robotic but rather conscious and purposeful. The data is the result of people's assessments of reality in accordance with each individual's particular end, at a given point in time. The individual's action is set in motion by his valuing mind and not by external factors. This in turn means that the constancy of various seasons does not mean that individuals can be expected to follow the exact same pattern of behaviour year after year. Changes in individual goals will produce different responses to holidays or seasons of the year. Consequently, a framework that disregards the fact that humans are not robots, and that treats the seasonal effect as constant, will contribute to a wrong assessment of the cyclical influence.
Currently most government statistical bureaus worldwide utilise the US government computer programs X-11, X-12 and X-13 to estimate the seasonal influence on data. By means of sophisticated moving averages, these programs generate estimates of the seasonal effect. The computer program then uses the obtained estimates to de-seasonalise the data, i.e., adjust it for seasonality. Designers of these seasonal adjustment computer programs have attempted to address the issue of the constancy of the seasonal effect by allowing this effect to vary over time. For example, the seasonal effect for retail sales in December will not be the same year after year but will rather vary. Furthermore, these programs are instructed to establish whether the seasonal effect is stable. It would appear that by means of sophisticated statistical and mathematical methods these programs could generate realistic estimates of the seasonal influence on the data. The truth is that these programs generate arbitrary figures, which have nothing to do with reality.
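For flavour, here is a crude sketch of the moving-average idea these programs build on. It is a simplified assumption of the approach, not the actual X-13ARIMA-SEATS algorithm, which is far more elaborate: a centred twelve-month moving average stands in for the trend, and each observation's deviation from it serves as that period's (potentially time-varying) seasonal estimate.

```python
def centred_ma(series, period=12):
    """Centred moving average for an even period: a 13-term window
    with half weight on the two end points, so the average is
    centred exactly on the current month. Edges are left as None."""
    half = period // 2
    out = [None] * len(series)
    for i in range(half, len(series) - half):
        w = series[i - half:i + half + 1]
        out[i] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period
    return out

def seasonal_deviations(series, period=12):
    """Each observation's deviation from the centred moving average.
    In X-11-style procedures these deviations are then smoothed
    month by month across years, letting the seasonal factor drift."""
    trend = centred_ma(series, period)
    return [y - t if t is not None else None
            for y, t in zip(series, trend)]

# Toy data: a linear trend plus a fixed December spike.
pattern = [0, -2, -1, 0, 1, 2, 3, 2, 1, 0, 5, 12]
raw = [100 + i + pattern[i % 12] for i in range(36)]
dev = seasonal_deviations(raw)
```

Because each year's deviations are computed separately before any smoothing, the estimated December effect is free to change from one year to the next, which is how these programs "allow" the seasonal effect to vary over time.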
The crux of the problem is that people's responses to various seasons or holidays are never automatic but rather part of conscious, purposeful behaviour. There are, however, no means of quantifying individual valuations. There is no constant standard for measuring a mind's valuation of reality. On this Rothbard wrote,
It is important to realize that there is never any possibility of measuring increases or decreases in happiness or satisfaction. Not only is it impossible to measure or compare changes in the satisfaction of different people; it is not possible to measure changes in the happiness of any given person. In order for any measurement to be possible, there must be an eternally fixed and objectively given unit with which other units may be compared. There is no such objective unit in the field of human valuation. The individual must determine subjectively for himself whether he is better or worse off as a result of any change. His preference can only be expressed in terms of simple choice, or rank.[2]
Since it is not possible to quantify a mind's valuation of the facts of reality, obviously this valuation cannot be put into a mathematical formulation. This in turn means that the so-called estimates of seasonal factors generated by the computer programs must be arbitrary numbers.
Contrary to the accepted view, the adjustment for seasonality merely distorts the raw data, thereby making it much harder to ascertain the state of the business cycle. These distortions have serious implications for policy makers who employ various so-called counter-cyclical policies in response to the seasonally adjusted data. For example, the strength of the seasonally adjusted employment data could determine whether the central bank would raise or lower interest rates. This pretense by the central bank policy makers that they can quantify something that cannot be quantified is a major source of economic instability.
The seasonally adjusted data also forms the basis of so-called applied economics. Various theories are derived by observing the interrelationships of the seasonally adjusted time series.
The whole idea that observing the influence of the business cycle could enhance our understanding of this phenomenon is fallacious. The business cycle here is presented as something that is inherent in the economy. It is held that this mysterious something is the source of the sudden swings in economic activity.
It is overlooked, however, that the swings in economic activity are the result of central bank monetary policies, which falsify interest rates and set the platform for the generation of money out of "thin air", thereby contributing to people's erroneous valuations of the facts of reality.
Even if it were possible to quantify the cyclical influence, this would not help us to understand what the business cycle is all about. Without a coherent theory, which is based on the fact that human actions are conscious and purposeful, it is not possible to begin to understand the causes of business cycles and no amount of data torturing by means of the most advanced mathematical methods will do the trick.
- 1. See the Bureau of Census X-11 program.
- 2. Murray N. Rothbard, Man, Economy, and State (Los Angeles: Nash Publishing, 1962), vol. 1, pp. 15–16.
A Christian street preacher who was arrested and held for 13 hours in a police cell, after displaying placards depicting love for Muslims and criticising the ideology of Islam, has this week been informed by the Crown Prosecution Service (CPS) that no charges will be brought against him.
Ian Sleeper was arrested outside Southwark Cathedral in June this year, under Section 5 of the Public Order Act 1986, religiously aggravated under Section 31 of the Crime and Disorder Act 1998.
When I was a student at the University of Colorado, I regularly walked by the Dalton Trumbo memorial fountain which, of course, was named after the communist Stalin-sympathizing novelist and screenwriter.
Once upon a time, the fountain had been simply known as "the fountain," but around 25 years ago, it was unnecessarily renamed after a controversial person.
The reason for the renaming, of course, was the same as with any memorial or monument designed to honor a person or idea — to create an emotional connection and familiarity with the person or idea connected to the place; to communicate a certain view of history.
The renaming of the fountain followed an earlier renaming controversy. One of the University's dorms, Nichols Hall, was named after a participant in the infamous Sand Creek Massacre. Even in its own time, the massacre had been denounced, earning condemnation from Indian fighters like Kit Carson. Not surprisingly, the dorm that bore Nichols's name was eventually renamed "Cheyenne Arapahoe" in honor of the Indian tribes whose members Nichols had helped attack.
As with the Trumbo fountain, the dorm's name was changed in order to send subtle messages — messages about what is valued, what is good, and what is bad.
There's nothing inherently wrong with this, of course. The problem only arises when we begin to use taxpayer funded facilities and institutions to carry out these attempts at education.
Thus, in a sense, when approaching the problem of government monuments and memorials, we encounter the same problem we have with public schools. Whose values are going to be pushed, preserved, and exalted? And, who's going to be forced to pay for it?

Ideology Changes Over Time
This problem is further complicated by the fact that these views change over time.
Over time, the "good guys" can change as majority views shift, as new groups take over the machinery of government institutions, and as ideologies change.
In 1961, when Nichols Hall was named, few people apparently cared much about the Sand Creek Massacre. Twenty-five years later, however, views had changed considerably among both students and administrators.
For a very obvious illustration of how these changes take place, we need look no further than the schools.
In the early days of public schooling — an institution founded by Christian nationalists to push their message — students were forced to read the King James Bible. Catholics were forced to pay taxes so schools could instruct students on how awful and dangerous Catholicism was. Immigrant families from Southern and Eastern Europe were forced to pay for schools that instructed their children on the inferiority of their non-Anglo ethnic groups.
A century later, things have changed considerably. Today, Anglo-Saxons are taught to hate themselves, and while Catholics are still despised (but for different reasons), they now are joined in their pariah status by most other Christian groups as well. Italians and Eastern Europeans who were once treated in public schools as subhuman are now reviled as members of the white oppressor class.
Similar changes have taken place in art and in public monuments and memorials.

Public Memorials Serve the Same Function as Public Schools
But the principle remains the same, whether we're talking about public schools or public monuments: we're using public funds and facilities to "educate" the public about what's good and what's not.
This has long been known both by the people who first erected today's aging monuments and by the people who now want to tear them down. The leftists who support scrapping certain monuments actively seek to change public monuments and memorials to back up their own worldview because they recognize that it can make a difference in the public imagination. They're fine with forcing the taxpayers to support their worldview, of course, and actively seek to use public lands, public spaces, public roads, and public buildings to subsidize their efforts. They already succeeded in doing this with public schools decades ago.

The Answer: Privatize the Monuments
In a way, the combined effect of public memorials, monuments, streets, and buildings is to turn public spaces into a type of large open-air social studies class, reinforcing some views while ignoring others.
Libertarians have long noted the problem of public education: it's impossible to teach history in a value-neutral way, and thus public schools are likely to teach values that support the state and its agendas. Even some conservatives have finally caught on.
To combat this problem, those who object to these elements within public schooling support homeschooling, private schooling, and private-sector alternatives that diminish the role of public institutions.
Governmental public spaces pose the same problem as public schools.
In both cases the answer is the same: minimize the role of government institutions in shaping public ideology, public attitudes, and the public's view of history.
Rather than using publicly funded thoroughfares, parks, and buildings as a means of reinforcing public "education" and "shared history" as we do now, these government facilities should be stripped down to their most basic functions: providing office space for administration, streets for transport, and parks for recreation. (The last thing we need is a history lesson from the semi-illiterates on a typical city council.)
Some might argue that all these properties and facilities should be privatized themselves. That's fair enough, but as long as we're forced to live with these facilities, we need not also use them to "honor" politicians or whatever persons the current ruling class happens to find worthy of praise.
The nostalgia lobby will react with horror to this proposition. "Why, you can't do that!" they'll complain. "We'll be robbed of our heritage and history." Even assuming these people could precisely define exactly who "we" is, they still need to explain why public property is necessary to preserve this alleged heritage.
After all, by this way of thinking, the preservation of one's culture and heritage relies on a subsidy from the taxpayers, and a nod of assent from government agencies.

Preserving and Promoting Culture Through Private Action
Once upon a time, however, people who actually valued their heritage did not sit around begging the government to protect it for them. Many were willing to actually take action and spend their own money on preserving the heritage that many now rather unconvincingly claim is so important to them.
A good example of the key role of private property in cases such as this can be seen in the work of the Catholic Church in the US — which has never enjoyed majority support from the population or from government institutions. If Catholics were to get their symbols and memorials in front of the public, they were going to have to build them on private property, and that's exactly what they did.
In Denver, for example, the Catholics of the early 20th century knew (correctly) that no Catholic-themed art or memorials would be erected in any public park or on any government building. So, the Catholics proceeded to erect an enormous cathedral on a hilltop one block from the state capitol. The new cathedral was highly visible and provided easy access to religious ceremonies for the few Catholic politicians and officials who worked at the capitol. It provided meeting space. It contained stained glass art created by German masters. Moreover, the new building served as a huge symbolic middle finger to the anti-Catholic Ku Klux Klan, which was growing in importance in Denver at the time.
So, did Church officials sit around whining about how there was no crucifix on the front lawn of the State Capitol? Did they demand that the taxpayers pay to maintain a central town plaza featuring a statue of Saint Peter? Some probably did. Those who made a difference, though, took action and acquired real estate in prominent places throughout the city. They put universities on that land, and cemeteries, and convents, and friaries, and schools, and even some memorials and statues. Today, next to the cathedral, on a busy street corner, is a large statue of a Catholic pope: John Paul II. It's on private property. It's seen by thousands every day.
And why should the self-appointed protectors of American "traditional" values think they deserve anything different? Indeed, we'd all have been saved a lot of trouble if the organizations that demanded statues of Confederate generals everywhere had put them on private land instead of in public parks.
In the past, had the purveyors of publicly-funded culture instead taken a principled and successful stand against using public lands and funds to push a certain view of history, no one would have to now waste his time sitting through city council meetings where politicians decide who deserves a statue, and who is to be thrown in the dustbin of history. Were we to quit using public parks as showcases for public indoctrination, we wouldn't have to worry about the Church of Satan erecting a monument in the "free speech area" of a public park — as they recently did near Minneapolis.
The next time someone wants a statue of some politician, artist, or intellectual — whether they be communists, Confederates, or satanists — they ought to be told to buy a nice little plot of land somewhere — perhaps along a busy street or next to an important street corner in town — and put their statue there.
If we don't have the information, not even a scoobie of the knowledge required to make a decision, then what should we do? Sensible folk would probably say we should go and find out before we make our decision:
The rise of the UK’s nascent shale industry is "overhyped" and 55 million years too late, according to new research of the UK’s geology.
A team of scientists has warned that the UK’s most promising shale gas reservoirs have been warped by tectonic shifts millions of years ago which could thwart efforts to tap the gas reserves trapped within layers of shale.
Professor John Underhill, a chief scientist at Heriot-Watt University, said the debate over whether or not to develop domestic gas sources could prove redundant because Britain’s shale layers are “unlikely” to be an economic source of gas.
OK, excellent. There's a scientific prediction. What is it that we do when using the scientific method? We attempt to design experiments to disprove the assertions being made. If they survive such attempts at disproof then we upgrade assertions and speculations into something quite possibly true. That is, we attempt to go and find out.
So, what should be the reaction to this assertion?
Quentin Fisher, a professor of petroleum geoengineering at the University of Leeds, said more work was needed as the disadvantages pointed out in the seismic imaging could be balanced by other factors with an advantage for shale extraction.
“Prof Underhill is quite correct to highlight the great uncertainties that exist regarding the likely productivity of shale in the UK and is correct that the geology in the UK tends to be structurally more complex than in the US. Many of us involved in this debate have regularly highlighted the large uncertainties that exist,” Fisher said.
“Although geological complexity and late tilting may be detrimental to shale gas prospects in the UK, there are other factors that may be more favourable, such as having thicker shale sequences.”
He said the only way to find out was through testing. “The bottom line is that the only way to truly assess the viability is to drill wells, and we need to get on with that.”
Well, yes, quite so. We've now got duelling theories and the only way we can decide between them is to go drill. So, go drill we should.
We all know how Professor Underhill's speculations are going to be used, of course. The anti-frackers will be shouting that there just ain't any gas there, so instead let's continue with the cucumber storage of moonlight scheme. The correct response is as above: if there's gas there then we're copacetic (we, that is, not the anti-frackers); if there isn't, well, best we know that too. So let's go find out.
There is a similarity here with a point made about climate change: the greater the uncertainty about how bad the effects will be, the more careful we've got to be about letting it happen. Certainly true, but the same logic applies here. The greater the uncertainty about the shale gas contents of Britain, the more the answer is: drill, baby, drill.
Throughout history, the state has justified itself on the grounds that it is necessary to protect us from others whose habits and beliefs — we are meant to believe — are dangerous. For millennia, this fiction was easy to maintain because most people interacted so little with people outside their nearly autarkic — and therefore impoverished — communities.
But, with the rise of industrialization and international trade in recent centuries, the state's claim that it is necessary to keep us “safe” from outsiders has become increasingly undermined.
Much of this is thanks to the fact that in order to benefit from the market, one must engage in activities designed to serve others and anticipate their needs. As a result, trade increases our understanding of both the members of our community and even the stranger; it also makes us realize that other people are much like us, even if they speak strange languages or have odd customs and traditions.

The Market Order and Civilization
This is in essence Say’s Law, or the Law of Markets, which states that in the market we produce in order to trade with others so that we can thereby, indirectly, satisfy our own wants: our demand for goods in the market is constituted by our supply of goods to it. In order to effectively satisfy other people’s wants we need to not only communicate with them, but understand them. If we don’t, then we’re wasting our productive efforts for a random result. Obviously, we’d benefit personally from learning what other people want, both their present wants and anticipated future wants, and then produce it for them.
So far so good. Most people (except for Keynesians) grasp this very simple point about the market and how it contributes to civilization and peaceful interaction. But not all people are saints, so good, hard-working people risk being taken advantage of if they have nothing to set against such actions. Without a central power such as the state, who will protect us from such people?
Answer: the web of voluntary transactions aligns people’s interests. In the market, “bad people” are not only defrauding, stealing from, or robbing a single person or family. They are, in effect, attacking the community of interdependent producers and network of traders.
Imagine a town with a baker who specializes in baking bread that people in the town like, but that he doesn’t necessarily fancy himself. Instead, he sells the bread in order to earn money that he uses to buy from others what he truly wants. Others similarly specialize their production to produce what others want, including the baker, so that they can use part of their income to buy bread. When a thief steals from this baker, he negatively affects the town’s bread supply — and thereby also makes the baker unable to effectively demand goods from others. This affects a lot of people, not only the baker: it affects all people who wanted to but now can’t buy bread and all those who expected to but no longer can sell their goods to the baker.
The network of exchanges and the specialized production for others thus creates a community of interdependent producers whose interests are generally aligned: they have all increased their productive effort by supplying a single good that is in high demand, and thereby made everybody better off. But it also means it is in their own interest that no one is unjustly treated and disadvantaged, whether the victim of a “bad person” is an existing or potential supplier of goods they desire or existing or potential customer of the goods they produce.
They all benefit from this order, since their productive efforts are used where they do most good. But they are also all in it together — they are all affected if things go wrong. It is not strange, then, to see how towns used to spontaneously organize to deal with crime. Robbing the baker involves not only a robber and his victim: an attack on one is an attack on the community. The robber has by his very actions chosen to not partake in community — to be an outcast.

Effect of the Welfare State
What’s happened over the course of the last century with the rise of the democratic welfare state is that these market-based bonds between people within a community have been severed. With the growing state, more and more people have found positions in the economy and society where they do not need to serve others. In other words, the state has made it possible to live off what other people produce rather than contribute to satisfying everybody’s wants.
As these bonds between people are severed, the threshold to engage in criminal behavior becomes lower. But more importantly, as people do not need to rely on their ability to satisfy the wants of others, they don’t understand other people: they have no incentive to learn about their needs and wants, and they have nothing to gain personally from satisfying them. In other words, there is no interdependence and therefore less of a reason to stay away from destructive behavior.
This is exactly what we've seen over the course of the past century, as the very large state has replaced civil society with centralized systems and the market with power. The problem is that when people stop learning about each other, it is easier to resort to conflict rather than cooperation — and it is much easier to see other people as obstructions to your own happiness. Getting rid of them thus increases your share of the (now diminishing) pie, and using and exploiting others for your own benefit appears a means toward satisfaction of one's own wants.
We increasingly see examples of this type of thinking among entrepreneurs and those who want to be entrepreneurs. They start businesses not as a means to make a living — that is, to indirectly benefit themselves according to the Law of Markets — but in order to do “what they like.” It’s a lifestyle choice that many seem to think they have a “right” to make. Even worse, sometimes they even blame their entrepreneurial failure on “society” for not being supportive enough and not appreciating what they’re offering at the price they’re demanding.
This is exactly backward: to be able to do what you like for a living is a privilege that you can enjoy only if you, by doing so, satisfy others. If you create value for others, you gain value for yourself.
In this type of society where the bonds between people are weakening, it is not strange that people find the idea of a decentralized, spontaneous order outrageously naïve. Competition is here not the sound striving to better serve others by trying different and differentiated ways of satisfying wants, but rather a zero-sum game where there are winners and losers. In this situation, whoever is willing to cut corners, lie, and deceive is immediately better off. The incentives, in other words, are for destroying value and to prioritize short-term gains even if they come at high long-term costs — because those costs may be another’s burden. It’s the very opposite of civilization and an existence that will, if left unchecked and unchanged, eventually degenerate into a Lord of the Flies-type tribalism.
It is not strange that people have a hard time understanding the harmony argument for markets in a time when the state has alienated them from productive interdependence as explained by Say’s Law. The market’s informal, spontaneous cooperation for mutual benefit has been replaced by a statist mindset, which seeks guarantees — and finds it only in formal power.
But it should be obvious from the discussion above that this is not in any sense a guarantee — especially against bad behavior. It is the opposite. Yet it should be recognized that the market also offers no guarantee, strictly speaking. But do we need one when people’s interests are aligned? All we need to trust is that people do what is good for themselves. That’s hardly naïve.
How did Americans fall for the government's reefer madness? Chris Calton explains how junk science, overt racism, and myths of bloodthirsty soldiers all played a role in the criminalization of marijuana in America.
The supposed fiscal burden of refugees (how much they cost the state) is often touted as a reason to rein in refugee resettlement programs. This doesn’t seem to be the case for adult refugees in the United States, according to a new paper released in June by the National Bureau of Economic Research. It shows that adult refugees aged 18-45—the majority of the researchers’ sample—make a net fiscal contribution over their first 20 years in the U.S.
The authors argue that current literature examining social and economic outcomes for refugees “tends to concern very specific populations, uses very small samples, relies on data from a small number of countries with high refugee totals, or focuses on very short-term outcomes.” But this study was different. It tracked a group representative of refugees in general and was based on an extremely large, diverse sample. The NBER Digest explains:
They separated refugees from other immigrants using Department of State data, and created a sample of 20,000 refugees who entered the country in 1990-2014. Their sample represents a third of refugees who arrived during the period.
The initial fiscal impact of refugees was (unsurprisingly) negative due to resettlement costs, low human capital, and high welfare use. However, this was only the case for the first eight years after arrival.
Using the NBER’s TAXSIM model, the study estimates that “refugees pay $21,000 more in taxes than they receive in benefits over their first 20 years in the U.S.” This may well be a low estimate of refugees’ positive net fiscal impact:
...we assumed that refugees paid the same amount in sales taxes as they did in state income tax. Data from the Quarterly Summary of State and Local Tax Revenues, between quarter 1 of 2010 and quarter 4 of 2014, indicates that revenues from state income tax and sales tax have been essentially the same over this period, with only a 2% aggregate difference. This most likely understates the amount of sales tax paid by refugees, as it is a regressive tax.
The authors also found that many child refugees enjoy positive educational outcomes, although older teenage refugees tended to fare worse:
Among young adults, we show that refugees that enter the U.S. before age 14 graduate high school and enter college at the same rate as natives. Refugees that enter as older teenagers have lower attainment with much of the difference attributable to language barriers and because many in this group are not accompanied by a parent to the U.S.
What does this new evidence mean for the UK’s approach to refugee resettlement? Firstly, it shows the importance of conducting more research into the net fiscal impact of refugees arriving in the UK; data on this topic is remarkably hard to find. The closest thing we have is estimates of the net fiscal impact of general immigration flows, and these estimates tend to be static rather than employing the NBER study’s dynamic approach.
Some evidence from Australia does suggest a negative fiscal impact of refugee immigration; although refugees became net contributors after 10-15 years, they were net drains on public finances over the course of a full 20-year period. However, it’s vital to view refugees’ fiscal impact in comparison to that of natives; if a country is running a budget deficit, the average native will also have a negative net fiscal impact that may be similar in magnitude to the average refugee’s.
Any discussion of fiscal impacts must also include potential for positive effects on natives not captured by narrow measures of fiscal impact. My colleague Sam Bowman has previously referenced an innovative paper on Denmark’s experience with refugees:
Mette Foged and Giovanni Peri looked at refugee influxes from Yugoslavia, Somalia, Iraq and Afghanistan to Denmark between 1985 and 1998.
These refugees were distributed evenly across the country’s municipalities without any regard to labour market conditions. This counts as an ‘exogenous shock’...like a new influx of refugees to the UK would.
Forty to fifty percent of these immigrants had only secondary school education or lower and “were in large part concentrated in manual-intensive occupations”. By allowing for a deeper division of labour, the “refugee-country immigrants spurred significant occupational mobility and increased specialisation into complex jobs, using more intensively analytical and communication skills and less intensively manual skills.” That meant that native workers who might otherwise have done low-skilled jobs were able to move into more specialised, productive, highly-paid work.
These considerations aside, there are various external factors that could hamper the ability of refugees to make a positive contribution to public finances. Compared to the United States, many European countries have notoriously inflexible markets and generous welfare states, posing a dilemma for progressive supporters of immigration. As IMF analysts have put it, negative fiscal impact could also partly reflect “the existence of legal obstacles preventing refugees from starting to work quickly upon arrival.” There are sensible ways to maximize the benefits of refugee influxes, such as ‘keyhole solutions’ and private refugee sponsorship schemes.
With a Republican in the White House, the anti-gun-control lobby smells a bit of blood in the water. Now is the time, they suggest, to pass national gun-licensing reciprocity laws forcing gun-restrictive states to recognize permits issued by gun-permissive states.
Writing in The Hill, Tim Schmidt sums it up:
It is time for there to be national reciprocity for concealed carry permits, instead of the patchwork of laws governing reciprocity that vary by state. Virginia, where the [recent shooting of Congressman Steve Scalise] happened, has reciprocity for some states’ concealed carry permits, but if members would have brought their guns back and forth from D.C., they would have been breaking the law. It should never be a crime to be responsibly prepared to defend yourself in any possible situation.
Sen. John Cornyn (R-Texas) and Rep. Richard Hudson (R-N.C.) have introduced the Constitutional Concealed Carry Reciprocity Act of 2017, which would allow legal gun owners and concealed carry permit holders nationwide to responsibly arm themselves no matter where they are.
The way this is phrased sounds nice and totally unobjectionable: this bill sounds like it's just saying people should be left alone.
The problem, however, is that the drive for mandated reciprocity is essentially a drive to increase federal involvement and federal control in the realm of gun policy.
Schmidt is right in the sense that, of course, it should never be a crime to defend oneself. The question remains, however: should the federal government be the agency that guarantees that right? Should the feds have the power to overturn state and local laws that limit gun ownership?
This issue can be addressed from both a legal and Constitutional standpoint, and from a general philosophical decentralist view.

The Constitutionalist View
Suzanne Sherman at the Tenth Amendment Center has already weighed in against the idea on Constitutional grounds, based on two main arguments:
1. Reciprocity laws are compacts made among the states, and are not imposed by the federal government.
2. The Bill of Rights doesn't apply to the states.
On the first matter, Sherman notes that the proposed legislation would impose reciprocity on the states. This, Sherman notes, is a departure from what we usually mean by reciprocity, which denotes compacts that two or more states have voluntarily entered into.
RELATED: "Should Libertarians Care about the Constitution?" by Allen Mendenhall and Brion McClanahan
Many advocates of forced National Reciprocity point to the “Full Faith and Credit Clause” found in Article IV, Section 1 of the Constitution. Such application is likewise problematic because it deviates from the original intent of the clause, lifted directly from the Articles of Confederation without any change to its meaning. This clause, as ratified, simply ensured citizens in one state could own land or property in another with the full rights of a citizen of that state. It in no way implied that one state had to recognize the institutions or licensing of another state. Driver’s licenses are acceptable for passing through various states, but it is, like CCW licensing, by mutual assent of the states. In other words, there is no federal statute mandating that one state must honor another state’s driver’s licenses.
In other words, the sort of "reciprocity" imagined by the backers of nationwide forced reciprocity is a new kind of reciprocity that substitutes federal policy for decentralized state-level policy.
The enormous downside to this is that it federalizes what has long been recognized as largely the domain of state and local governments. Further federalizing gun policy may look like a fine idea right now, but as Sherman notes, it only takes a couple of new anti-gun appointments to the Supreme Court for the whole idea to blow up in the faces of pro-gun advocates. It's far more prudent, Sherman contends, to work against any increase in federal involvement in gun policy.

The Bill of Rights Was Never Meant to Apply to the States
Sherman's second point is one that Constitutionalists and decentralists have made for years. Namely, that the Bill of Rights is properly understood as a document that limits the federal government, not state governments.
When he introduced the proposal for a Bill of Rights to Congress, Madison wanted some of the provisions to be made applicable against the states. He argued that was where liberty would be most likely threatened. Again, he was defeated unanimously. The Bill of Rights was never understood to be applicable against the states. There is absolutely no historical evidence of the Bill of Rights being made enforceable against the states. Even nationalist John Marshall, in the 1833 case Barron v. Baltimore, was forced to admit this when he said that the first ten “amendments contain no expression indicating an intention to apply them to the state governments. This court cannot so apply them."
...It was not until 1925, in the case of Gitlow vs New York, that the Supreme Court magically “found” the authority to apply the Bill of Rights against the states supposedly hidden away in the 14th Amendment..."
Sensing that things are going their way, some gun-freedom advocates have found it fashionable to push for more federal control over state and local gun laws. One example is the recent case of McDonald v. City of Chicago, which finally declared that the Second Amendment, like other portions of the Bill of Rights, applies to the states. In celebrating that ruling, however, gun-rights advocates are endorsing more federal control over the states.
Even those who have no particular affinity for the current American Constitution have noted this as well.
Lew Rockwell writes:
[T]he purpose of the Bill of Rights was to state very clearly and plainly what the Federal Government may not do. That's why they were attached to the Constitution. The states, under the influence of skeptics of the Constitution's limits on the central power, insisted that the restrictions on the government be spelled out. The Bill of Rights did not provide a mandate for what the Federal Government may do. You can argue all you want about the 14th amendment and due process. But a reading that says it magically transforms the whole Bill of Rights to mean the exact opposite of its original intent is pure fantasy.
In other words, appealing to the 2nd Amendment as a means of limiting state and local gun laws is based on newly invented federal powers that have no basis in legal or historical facts around the Constitution as written. Thus, it is ironic that many conservatives — who often fancy themselves to be "strict constructionists" and "local control" people — have suddenly made peace with the idea of using the Bill of Rights to boss state governments around.

The Decentralist View
The Constitutional arguments are all well and good, but the US Constitution should never be viewed as the final word on any matter. The current constitution has always gone much too far in terms of centralizing political power in the United States, and the United States should never have been anything more than a loose military alliance and customs union. It's no more necessary that the federal government regulate gun laws than it is necessary to define marriage or prohibit prayer at school sporting events.
In fact, gun policy, like abortion policy, wage policy, land-use policy, and everything else, should be relentlessly decentralized.
RELATED: "Anarchism and Radical Decentralization Are the Same Thing" by Ryan McMaken
In his article "What We Mean by Decentralization," Lew Rockwell explains the various reasons why decentralization is a more effective check on power than handing everything over to a Supreme Court or other federal "protectors" of rights.
Rockwell lists five reasons for this:
First, under decentralization, jurisdictions must compete for residents and capital, which provides some incentive for greater degrees of freedom...
Second, localism internalizes corruption so that it can be more easily spotted and uprooted....
Third, tyranny on the local level minimizes damage to the same extent that macro-tyranny maximizes it....
Fourth, no government can be trusted to use the power to intervene wisely...
Fifth, a plurality of governmental forms—a "vertical separation of powers," ... prevents the central government from accumulating power. Lower governments are rightly jealous of their jurisdiction, and resist...
Also key to this equation is the fact that decentralization offers a multitude of choices between different regimes in the face of government restrictions and persecution. If only one huge government has been granted the power to protect rights, to where will one go when the government fails to do its prescribed task? On the other hand, when a wide variety of smaller governments are charged with protecting rights, the failure by one regime is not nearly as catastrophic since the offending regime can be far more easily avoided through emigration and boycott than can a large centralized regime.
Thus, it might sound nice to put the federal government in charge of protecting gun rights, but the potential downside is immense given that federal policy can change easily, and then be imposed nationwide.
This isn't to say that small, decentralized governments are a cure-all either. Ideology always plays an important role, and in a world where the majority wants all private citizens disarmed — well, that will happen regardless of what level of decentralization exists.
However, if what we desire is a governmental landscape that offers more choices for residents and more limitations on state power, decentralization is the proper path, and handing over gun policy to federal "protectors" is a terrible idea.
This posting is the third in a series on the 2016 Bank of England stress tests. A fuller report, “No Stress III: the Flaws in the Bank of England’s 2016 Stress Tests”, will be published later in the year by the Adam Smith Institute.
The previous posting is here.
The Bank of England repeatedly reassures us that its stress tests demonstrate the resilience of the UK banking system.
Well, let’s put the stress tests to a stress test.
We have a performance measure (the leverage ratio at the peak of the stress scenario) and a pass standard. A bank passes the stress test if its leverage ratio at the peak of the stress is at least as high as the pass standard, and it fails the test if that leverage ratio falls short of the pass standard.
Let’s consider the five biggest banks: Barclays, HSBC, Lloyds, RBS and Standard Chartered.
In its 2016 stress tests, the BoE used the ratio of Tier 1 capital to leverage exposure as its leverage ratio. The Bank refers to this leverage ratio as the ‘Tier 1 leverage ratio’. The leverage exposure is a measure of the amount at risk and will be of a similar order of magnitude to, and for UK banks will typically be a little smaller than, total assets.
Across the big five, the average Tier 1 leverage ratio at the peak of the stress was 3.95 percent.
The pass standard used in the test was based on Basel III rules and was 3 percent.
By this test, the UK banking system looks to be in reasonable shape and only RBS failed to meet the 3 percent pass standard.
It would, however, be premature to get the champagne out just yet.
On July 8th this year I wrote to Governor Carney about the stress tests and one question I put to him was “How does the Bank justify the 3% Tier 1 minimum required leverage ratio?”
On August 3rd the Bank’s Executive Director for Financial Stability Strategy and Risk, Alex Brazier, wrote back to me with the following answer:
Our minimum leverage requirement for the major UK banks is now 3.25% of assets excluding central bank reserves. … But this is a minimum. On top of that the systemic and countercyclical leverage ratio buffers will, once phased in, add around 0.75% to the average leverage requirement of the largest UK banks. Furthermore, to pass stress tests, firms typically need to hold a buffer of around 1 percentage point on top of this. (My italics)
I am grateful to Mr. Brazier for the clarification, which I interpret as an authoritative statement that the largest UK banks will typically face a minimum required leverage ratio of around 5 percent once the new buffers are phased in.
I am however puzzled why the Bank did not use this higher minimum required leverage ratio as the pass standard in its stress tests. After all, what is the point of the Bank using a 3 percent pass standard in the stress tests whilst simultaneously arguing that the actual minimum required leverage ratio is, or will be, considerably higher than 3 percent? The reason this is a problem is that it opens up the incongruous possibility that a bank might be deemed to pass the stress test whilst simultaneously failing to meet the minimum required leverage ratio.
I am even more puzzled when Mr. Brazier writes that the banks need to meet these higher standards in order to pass the stress tests. Whatever is one to say when the Bank of England official in charge of the stress tests maintains that to pass the stress tests the banks must meet a higher pass standard than the pass standard used in the stress tests?
So the question then arises: how would UK banks have performed in the stress test had the BoE used a minimum required leverage ratio of around 5 percent as its pass standard, instead of the 3 percent pass standard that it did use?
Recall that across the five big banks, the average Tier 1 leverage ratio at the peak of the stress was 3.95 percent. Since 3.95 percent is nowhere near 5 percent, it would appear that, taken as a group, the big five UK banks would have failed the stress test.
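The pass/fail arithmetic can be sketched in a few lines. This is an illustrative reconstruction, not the Bank's own methodology: the only figure taken from the published results is the 3.95 percent big-five average, and the roughly 5 percent standard is simply the 3.25 percent minimum plus around 0.75 percent of buffers plus around 1 percentage point of stress buffer, as described in Mr. Brazier's letter. The function name and structure are my own.

```python
# Illustrative sketch of the two pass standards discussed above.
# Only the 3.95% big-five average comes from the published results.

PASS_STANDARD_USED = 3.0     # percent: Basel III-based standard used in the 2016 test
PASS_STANDARD_IMPLIED = 5.0  # percent: ~3.25% minimum + ~0.75% buffers + ~1pp stress buffer

def passes_stress_test(peak_leverage_ratio_pct, pass_standard_pct):
    """A bank passes if its leverage ratio at the peak of the stress
    is at least as high as the pass standard."""
    return peak_leverage_ratio_pct >= pass_standard_pct

big_five_average = 3.95  # average Tier 1 leverage ratio at the peak of the stress

print(passes_stress_test(big_five_average, PASS_STANDARD_USED))     # passes at 3%
print(passes_stress_test(big_five_average, PASS_STANDARD_IMPLIED))  # fails at ~5%
```

The same comparison applied to the total-assets-based figure of roughly 3.7 percent mentioned in the footnotes gives an even clearer fail against the 5 percent standard.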
The “incongruous possibility” mentioned earlier would appear to be a reality: taken as a group, the big five banks passed the stress tests even though they did not meet minimum regulatory requirements during the projected stress.
In fact, it would appear that they passed the stress tests even though they did not meet the pass standard required to, er, pass the stress tests.
 The Bank’s headline capital ratio, the ratio of CET1 capital to Risk-Weighted Assets, is not considered here because the denominator is deeply flawed to the point of being discredited. See, e.g., K. Dowd, Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve, Cato Policy Analysis No. 754, Cato Institute, Washington D.C., September 2015 or No Stress II: The Flaws in the Bank of England’s Stress Testing Programme, Adam Smith Institute, London, August 3rd 2016.
 At this point, Mr. Brazier inserted a flag to the following footnote: “See the Governor’s letter to Andrew Tyrie of 5 April 2016 for a fuller explanation of the impact of buffers on leverage requirement available here: https://www.parliament.uk/documents/commons-committees/treasury/Correspondence/Mark-Carney-Governor-Bank-of-England-to-Rt-Hon-Andrew-Tyrie-MP-5-04-16.pdf”.
 When I replace the leverage exposure measure in the denominator of the leverage ratio with total assets, I estimate that the average leverage ratio across the big 5 banks at the peak of the stress would have been in the region of 3.7 percent, a comfortable fail.
We also rather love it when non-economists, but people expert in other fields, try to tell us about matters outside their own area of expertise and inside economics. And here we have Mark Buchanan, a physicist, and a good one to boot, who would tell us about the economic and regulatory impact of Artificial Intelligence. Stepping off one's area of expertise is a dangerous thing:
Humanity has a method for trying to prevent new technologies from getting out of hand: explore the possible negative consequences, involving all parties affected, and come to some agreement on ways to mitigate them.
Well, no, humanity doesn't do that and never has done. In that universe where things are planned, possibly, but that isn't the one we inhabit nor have we ever done. No one said that the Spinning Jenny was going to free up women from that household labour, so they should be paying the inventor. Many were aware that being in charge of half a tonne of metal while intoxicated could be a problem, but it was 1925 before the previous laws about steam engines were extended to cars. It was 1934 before even the most basic competency tests were applied to those who would drive, even sober.
We don't, and never have, sat down and argued out the costs and benefits of a new technology in advance. What we have done instead is let technologies spread, see whether they seem useful, and then ponder whether they need some regulation, once that popularity and general usage is established.
And of course there can be no other way in a market economy. We do not wish ethicists, philosophers, bootleggers or bandits, politicians or bureaucrats to tell us what we may try. Rather, we want to be able to try everything, and only if actual harm to others is proven do we then, perhaps, ameliorate it.
People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable things like environmental damage. In aiming to find such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies that, once discovered by a set of players, provides a stable fixed point at which no one has an incentive to depart from their current strategy.
Sure, Nash is great, and far brighter than you or we, probably more so than us all collectively. But that's still not what we do:
But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists -- Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs -- explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.
But we never have done and hopefully never will do. The market is the process of exploration. So we never say "What do we do if?"; rather, we say "We've found that people like this!" and then consider whether anyone has been hurt, whether there are public goods from it, externalities.
Or as we should put it: sure, many things need regulation, many things don't. Nash equilibria should be found, most certainly. But this is something we do after the deployment of a technology, not before. For if we had to have this discussion first, what new technology would ever be deployed?
This error is what people mean by the precautionary principle of course, and it's why it's wrong.
Although today high levels of inequality in the United States remain a pressing concern for a large swath of the population, monetary policy and credit expansion are rarely mentioned as a likely source of rising wealth and income inequality. Focusing almost exclusively on consumer price inflation, many economists have overlooked the redistributive effects of money creation through other channels. One of these channels is asset price inflation and the growth of the financial sector.
The rise in income inequality over the past 30 years has to a significant extent been the product of monetary policies fueling a series of asset price bubbles. Whenever the market booms, the share of income going to those at the very top increases. When the boom goes bust, that share drops somewhat, but then it comes roaring back even higher with the next asset bubble.

The Cantillon Effect
The redistributive effects of money creation were called Cantillon effects by Mark Blaug after the Franco-Irish economist Richard Cantillon who experienced the effect of inflation under the paper money system of John Law at the beginning of the 18th century.1 Cantillon explained that the first ones to receive the newly created money see their incomes rise whereas the last ones to receive the newly created money see their purchasing power decline as consumer price inflation comes about.
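The step-by-step redistribution Cantillon describes can be illustrated with a toy model. This is my own simplification, not anything in Cantillon or Mises: the linear price-adjustment path, the 10 percent injection, and the five-receiver chain are all assumptions chosen only to show the qualitative point that earlier receivers gain more.

```python
# Toy Cantillon-effect sketch: new money passes through a chain of receivers,
# and prices are assumed (for illustration only) to rise linearly toward
# their fully inflated level as the money changes hands.

def real_gain(step, n_receivers, injection=0.10):
    """Real gain of the receiver at position `step` (0 = first receiver),
    whose nominal income rises by `injection` while prices have so far
    completed step/(n_receivers - 1) of their eventual rise."""
    price_level = 1 + injection * (step / (n_receivers - 1))
    return (1 + injection) / price_level - 1

for step in range(5):
    print(f"receiver {step}: real gain {real_gain(step, 5):+.1%}")

# Someone who never receives the new money faces the full price rise
# with an unchanged nominal income:
print(f"non-receiver: real change {1 / 1.10 - 1:+.1%}")
```

The first receiver gains the full 10 percent in real terms, the last receiver (spending at fully adjusted prices) gains nothing, and non-receivers lose purchasing power, which is exactly the asymmetry described above.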
Following Cantillon, and contrary to Fisher and other monetary theorists of his time, Ludwig von Mises was the first to emphasize these Cantillon effects in terms of marginal utility analysis. With an increase in the stock of money, the cash balances of the early receivers of the newly created money increase. Correspondingly, the marginal utility they assign to money decreases and the individuals in question buy either investment or consumption goods, thus bidding up the prices of those goods and increasing the cash balances of their sellers. Through this step-by-step process, the prices of goods increase only progressively, affecting both the distribution of income and wealth and the various price ratios.

Financialization, Asset Price Inflation and Inequality
In accordance with the Cantillon effect, inflation can increase inequality depending on the channel it takes, but increasing inequality is not a necessary consequence of inflation. If it happened that the poorest in society were the first receivers of the newly created money, then inflation could very well be the cause of decreasing inequality.
Under modern central banking however, money is created and injected into the economy through the credit channel and first affects financial markets. Under this system, commercial banks and other financial institutions are not only the first receivers of the newly created money but are also the main producers of credit money. This is so because banks can grant loans unbacked by base money. In a free-banking system, this credit creation power of banks is strictly limited by competition and the clearing process. Under central banking however, the need for reserves is relaxed as banks can either sell financial assets to the central bank in open market operations, or the central bank can grant loans to banks at relatively low interest rates. In both cases, central banks remove the limits of credit expansion by determining the total reserves in the banking system. In other words, commercial banks and other financial institutions are credited with so-called base money that has not existed before. Thus, the economics of Cantillon effects tells us that financial institutions benefit disproportionately from money creation, since they can purchase more goods, services, and assets for still relatively low prices. This conclusion is backed by numerous empirical illustrations. For instance, the financial sector contributed massively to the growth of billionaires’ wealth (see table below).
We can list four main reasons why the growth of financial markets is triggered by an expansion of the money supply:2 (1) because financial titles are often used as collateral in debt contracts; (2) because the anticipation of price-inflation, which is a common trait among all fiat money regimes, discourages the hoarding of money thus encouraging both the demand for and the supply of financial titles; (3) because the production of money through central banks is a matter of sheer human will and is therefore prone to developing moral-hazard in the financial world. This leads to an artificially high demand for financial titles and increases the supply of such titles by the same token. And (4) because the manipulation of credit by central banks and banks, by lowering the interest rate in the short run, particularly affects the demand for capital and the capital structure during the course of the business cycle.
One of the most visible consequences of this growth of financial markets triggered by monetary expansion is asset price inflation. In a completely sound money system where credit depends only on the amount of saving rather than on fiduciary credit, there is very little room for generalized and persistent asset-price inflation, as the amount of funds which can be used to purchase assets is strictly limited. In other words, the phenomenon of asset-price inflation is a child of credit inflation.
Asset price inflation in turn benefits mostly the richest in society, for several reasons. First, the wealthy tend to own more financial assets than the poor in proportion to their income. Second, it is easier for the richest individuals to take on debt in order to buy shares that can be sold later at a profit. Since credit easing lowers the interest rate and therefore funding costs, the profits made by selling inflated assets bought on credit will be even greater. Finally, asset price inflation coming with the growth of financial markets will benefit the workers, managers, traders, etc., working in the financial sector. It will also benefit the CEOs of publicly traded companies, who will be paid more as the capitalization value of their companies increases. Hence, the correlation between asset prices and income inequality has been, as expected, very strong.
However, most monetary economists ignored — and continue to ignore — asset-price inflation and do not see it as a consequence of an inflated money supply. A reader of A Monetary History of the US (1963) by Friedman and Schwartz or of Allan Meltzer's A History of the Federal Reserve (2004) will not find one mention of asset price inflation. This oversight leads the effects of inflation on inequality to be underestimated or ignored. Periods of growing inequality and monetary inflation such as the 1920s or the 2000s were associated with a high rate of asset-price inflation but relatively stable consumer prices. Therefore, to focus on consumer price inflation as the only variable accounting for monetary policy leaves out most of the effects of money creation on inequality.
Since the 2008 financial crisis, the so-called unconventional monetary policies have often been justified on the grounds that something must be done in the short run since, as J.M. Keynes would have said, "In the long run, we are all dead." But as our monetary system tends to increase inequality, and if the goal is to improve the standards of living of the least well-off in society, then central banking and artificial monetary creation may be more costly than usually assumed by policy-makers.
- 1. Blaug, M. (1985) Economic Theory in Retrospect, 4th edition, Cambridge: Cambridge University Press.
- 2. I owe the first three reasons to Mises Fellow Karl Friedrich Israel. See: Israel, K. F. (2016a). In the long run we are all unemployed? The Quarterly Review of Economics and Finance. (64). 67-81.
In the wake of the Charlottesville riot, it's been interesting how quickly the focus has shifted away from the actual events in Charlottesville and toward the opinions that public pundits and intellectuals are expressing about those events.
Already, the media has lost interest in analyzing the details of the event itself, and is instead primarily reporting on what Donald Trump, his allies, and his enemies have to say about it.
This is an important distinction in coverage. Rather than attempt to supply a detailed look at who was at the event, what was done, and what the participants — from both sides — have to say about it, we are instead exposed primarily to what people in Washington, DC, and the political class in general, think about the events in which they were not directly involved.
This focus illustrates what has long been a bias among the reporters and pundits in the national media: a bias toward focusing on the national intellectual class rather than on events that take place outside the halls of official power.
Note, however, that those quoted rarely have any special knowledge about the events themselves. Their opinions are covered not because they are knowledgeable, but because their quotations fit easily into a narrative that the media wishes to perpetuate.
In a March 2017 column, Peter Klein noted this bias and what economist F.A. Hayek had to say about it:
The intellectual, according to Hayek, is not an expert or deep thinker; "he need not possess special knowledge of anything in particular, nor need he even be particularly intelligent, to perform his role as intermediary in the spreading of ideas. What qualifies him for his job is the wide range of subjects on which he can readily talk and write ..." Such people wield enormous influence because most of us learn about world events and ideas through them. "It is the intellectuals in this sense who decide what views and opinions are to reach us, which facts are important enough to be told to us, and in what form and from what angle they are to be presented" (pp. 372–73).
Klein then quotes Hayek at length:
It is perhaps the most characteristic feature of the intellectual that he judges new ideas not by their specific merits but by the readiness with which they fit into his general conceptions, into the picture of the world which he regards as modern or advanced. . . . As he knows little about the particular issues, his criterion must be consistency with his other views and suitability for combining into a coherent picture of the world. Yet this selection from the multitude of new ideas presenting themselves at every moment creates the characteristic climate of opinion, the dominant Weltanschauung of a period, which will be favorable to the reception of some opinions and unfavorable to others and which will make the intellectual readily accept one conclusion and reject another without a real understanding of the issues.
Consequently, the media's focus is not on relating the specifics of a particular event and then allowing the reader to come to his own conclusions. Instead, the focus is on appealing to the opinions of those in positions of power, and filtering all events through this lens, so as to let the consumers of media know how they should think.
Bias is not the only factor at work here, though. The excessive reliance on reliable and predictable "expert" sources stems from a need to constantly invent new news stories for broadcast and publication — and from a general laziness among publishers, editors, and journalists themselves. Traditional journalism requires true investigation and compilation of a variety of messy and disorganized facts. It's much easier, however, to simply call up a politician or an expert and create the facts by eliciting a "newsworthy" opinion from an important person. This approach becomes especially lucrative in a world of the 24-hour news cycle where considerations of time and money entice news organizations to create their own news rather than report on the events created by others.

The World of Pseudo Events
This sort of cut-rate journalism has reached especially objectionable levels in recent years, but this approach isn't nearly as novel as many people imagine.
Indeed, thanks to the work of historian Daniel Boorstin, we can trace this habit among the media class going back decades.
In his book The Image: A Guide to Pseudo Events in America — first published in 1962 — Boorstin examines how reporting on the news had become less and less about researching and reporting on spontaneous events, and instead had shifted toward reporting on what important people have to say about events.
Looking at Boorstin's analysis from our vantage point in 2017, it may look like Boorstin is splitting hairs, but this is only because we've been so inundated with reporting on pseudo events that we've come to regard such reporting as normal — and we now confuse pseudo events with the real thing.
A real event, Boorstin writes, is reported when "newspapers ... disseminate up-to-date reports of matters of public interest written by eyewitnesses or professional reporters near the scene."
In this type of reporting, Boorstin notes, there is a sense that the reporters are at the mercy of the events themselves.
Eventually, however, the need to sell newspapers and create more copy for printing helped reporters and their editors realize that they could create news themselves, and then report on those events as if they were spontaneous. Thus, reporters began to rely more and more on press releases, interviews, press conferences and other types of pre-packaged pseudo events that could give media outlets something new to report on. And then, of course, the politicians themselves — and the public relations people who work for them — are more than happy to supply the media with "pre-cooked" news, press conferences, prepared statements, and opinions designed to shape opinions about an event.
One of the first politicians to master these methods was Franklin Roosevelt. Boorstin writes:
In recent years our successful politicians have been those most adept at using the press and other means to create pseudo-events. President Franklin Delano Roosevelt, whom Heywood Broun calls "the best newspaperman who has ever been President of the United States," was the first modern master. While newspaper owners opposed him in the editorials few read, F.D.R. himself, with the collaboration of a friendly corps of Washington correspondents, was using front-page headlines to make news read by everybody. He was making "facts" — pseudo events — while editorial writers were simply expressing opinions. It is a familiar story how he employed the trial balloon, how he exploited the ethic of the off-the-record remarks, how he transformed the Presidential press conference from a boring ritual into a major national institution which no later president dared disrespect, and how he developed the fireside chat. Knowing that newspapermen lived on news, he helped them manufacture it. And he knew enough about news-making techniques to help shape their stories to his own purposes.
Indeed, by the 1950s, it had become "possible to build a political career almost entirely on pseudo-events," as in the case of Joseph McCarthy. McCarthy, Boorstin notes, "was a natural genius at creating reportable happenings that had an interestingly ambiguous relation to underlying reality."
Boorstin quotes Richard Rovere, who frequently covered McCarthy as a reporter and notes that McCarthy "invented the morning press conference called for the purpose of announcing an afternoon press conference." Reporters, Rovere admitted, "were beginning, in this period, to respond to his summonses like Pavlov's dogs at the clang of a bell."
Eventually, this obsession with the utterances of politicians blurred the line between facts and feelings.
This distinction was once represented by the difference between hard news and soft news. Boorstin writes:
In the traditional vocabulary of newspapermen, there is a well-recognized distinction between "hard" and "soft" news. Hard news is supposed to be the solid report of significant matters: politics, economics, international relations, social welfare, science. Soft news reports popular interests, curiosities, and diversions: it includes sensational local reporting, scandalmongering, gossip columns, comic strips, the sexual lives of movie stars, and the latest murder. ... But the rising tide of pseudo-events washes away the distinction.
Boorstin illustrates this assertion with examples from a trip made by President Eisenhower to Hawaii. When the events of the trip itself proved to offer few interesting details, the reporters instead invented events and provided "factual" statements such as "Eisenhower's reaction to his Far Eastern trip remains as closely guarded a secret as his golf score," and "sooner or later the realities will intrude." These "facts" were not mere speculations on the side. They formed the heart of the article, which purported to be a news story.
In other words, the reporter is offering nothing other than speculation about nothing in particular because he has nothing else to write. But, when put into a news story, the end result is that the reporter is changing public perceptions of the president. Boorstin concludes: "Nowadays a successful reporter must be the midwife — or more often the conceiver — of his news. By the interview technique he incites a public figure to make statements which will sound like news. During the twentieth century this technique has grown into a devious apparatus which, in skilled hands, can shape national policy."
It's not difficult to see how these techniques have been greatly expanded in our own time.
With the actual events of Charlottesville long over, the "news" continues as reporters and their sources among the intellectual class continue to opine on what Trump did or didn't say, and which of the interviewee's political enemies are to be blamed. Increasingly, the reporter need no longer even attend a press conference or leave his office. He need only monitor Twitter. If the reporter agrees with a statement, he need merely report that it happened. If he disagrees, then he need do little more than call one of his trusted sources for a rebuttal.
Moreover, when reporting these opinions, many reporters won't even provide the basic facts of who the speaker is. Thus, a reliance on anonymous sources has become almost mundane. And, in a perfect illustration of Hayek's point, CNN's recent debacles involving anonymous sources show that these sources don't even necessarily demonstrate any level of expertise with the topic being discussed.
One can make the case that the majority of what passes for "news coverage" nowadays really falls within the parameters of Boorstin's pseudo events. When new facts would require hard work and serious journalism, it's much easier instead to rely on a few trusted sources — which have already been quoted countless times before — and get the usual predictable opinions to fill out an article. This is then reported as "news" of a new "event," but is really just an opinion piece in which the opinions of an interviewee are portrayed as "facts." This has been going on so long, few journalists even see a problem with this approach anymore.
If you tax investment, you tend to get less of it. And because workers rely on invested capital to produce the goods and services we consume every day, falls in investment inevitably lead to falls in wages. In fact, economic theory tells us that because investment is so responsive to changes in tax rates, workers would be better off if we abolished taxes on capital investment (like corporation tax) entirely and instead raised taxes on consumption to compensate for lost revenue. Top economists, such as Greg Mankiw, Bob Lucas, and Marty Feldstein, believe that we could boost long-run wages by almost 10% if we made these changes.
Defenders of taxing capital (such as Thomas Piketty) typically argue that the models used to advocate for abolishing capital taxes are overly simple or make unrealistic assumptions. That can’t be said for a new paper by Kotlikoff, Benzell and LaGarda that simulates the effect of the US adopting Congressman Paul Ryan’s ‘Better Way’ tax plan.
Ryan’s tax reform proposal replaces the U.S. federal corporate income tax with a 20 percent business cashflow tax (BCFT), which allows firms to write off all investments and wages against their tax bills, but at the same time ends the deductibility of net interest payments. It also includes a border-adjustment mechanism that exempts net exports (exports minus imports) from the business tax base. Put simply, it transforms the corporate income tax into a VAT-style tax on domestic consumption (levied on firms) with a payroll tax cut. As Kotlikoff et al. point out, this would effectively lower the marginal tax rate on capital to zero.
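As a rough illustration of the mechanics just described (a hypothetical toy calculation, not a model of the actual legislative text — all figures and the function name are invented for this sketch), a firm's BCFT liability can be computed by deducting wages and investment from revenue, exempting net exports via the border adjustment, and ignoring interest entirely:

```python
# Toy sketch of a business cash-flow tax (BCFT) base, per the description
# above: wages and investment are fully deductible, net interest payments
# are NOT deductible, and net exports (exports - imports) are exempt via
# the border adjustment. All numbers here are hypothetical.

def bcft_liability(revenue, wages, investment, exports, imports, rate=0.20):
    """Toy BCFT: tax = rate * (revenue - wages - investment - net exports)."""
    net_exports = exports - imports
    base = revenue - wages - investment - net_exports
    return rate * base

# A hypothetical firm. Note that interest payments never enter the
# calculation, which is what removes the debt-equity bias mentioned later.
tax = bcft_liability(revenue=1000, wages=400, investment=200,
                     exports=150, imports=50)
print(tax)  # 0.20 * (1000 - 400 - 200 - 100) = 60.0
```

Because investment is expensed immediately rather than depreciated, the marginal return on new investment is untaxed, which is the sense in which the effective marginal rate on capital falls to zero.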
Typically, models assessing the effect of switching from capital to consumption taxes make a number of restrictive simplifying assumptions, such as infinitely-lived agents, homogenous skill levels and zero trade. Kotlikoff, Benzell, and LaGarda take a different approach.
Their model assesses the effect on 17 different regions, taking into account realistic estimates of life expectancy; demographic change; migration flows; a separate energy sector; government transfer programs; and international corporate tax rates. It is the most comprehensive attempt to model the effect of fundamental tax reform I’ve ever seen.
They find that compared to the status quo, in the first ten years of the reform:
- The US Capital Stock would increase by 25 per cent
- Pre-tax wages would increase by 6 per cent
- US GDP would be nearly 8 per cent higher – a 0.8 percentage-point boost to annual GDP growth over the first decade of the reform
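The level and growth figures in the list above are consistent with simple compound growth: a 0.8 percentage-point boost to annual growth, sustained for ten years, compounds to a GDP level roughly 8 per cent above the baseline (to first order, the baseline growth rate cancels out of the comparison). A quick sanity check:

```python
# Sanity check: does a 0.8 percentage-point annual growth boost, compounded
# over ten years, yield a GDP level roughly 8% above baseline? Comparing
# (1 + g + 0.008)^10 to (1 + g)^10 is, for small g, approximately 1.008^10.
level_boost = 1.008 ** 10 - 1
print(f"{level_boost:.1%}")  # 8.3%
```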
They also model what would happen if other countries match the US’s tax rates. They find that:
- GDP would still be about 5% higher, but not as high as if other countries didn't try to compete with the US with lower tax rates
- Interestingly, because Americans own a significant proportion of overseas assets, lower overseas tax rates will lead to increased asset incomes, in turn boosting income tax receipts and allowing for extra income tax cuts.
One of the more bizarre findings of the paper is that in the long-run (2100) GDP would be lower under the Ryan plan. But, this shouldn’t be seen as a negative. In fact, Kotlikoff, Benzell and LaGarda point out that the lower GDP result is driven by higher wage rates leading people to work slightly shorter hours and spend more time on leisure. In other words, people are still better off.
Kotlikoff, Benzell and LaGarda’s results are even more powerful when you note that they leave out two of the biggest arguments for switching to a business cashflow tax. First, they don’t consider the possibility that the reform will make overseas tax avoidance harder and make collecting taxes from IP-intensive tech firms easier. Second, they don’t consider the effect of ending the debt-equity bias, which many top economists believe would make financial crises less frequent.
Paul Ryan’s been forced to drop major aspects of his tax reform plan in order to keep the Senate and Trump administration on side. Instead, Ryan will go for straightforward corporate tax rate cuts and shorter capital allowances, an improvement to the status quo, but sub-optimal when he could take advantage of what Nobel Laureate Bob Lucas once called “the largest genuinely free lunch I have seen in 25 years in this business”.
In the UK our corporation tax set-up isn’t quite as bad as in the US, but it’s far from perfect. We may have a low statutory rate but the effective rate (i.e. the one people actually pay) is still high. That’s because we have some of the least generous capital allowances in the world. We should pick up the baton that Ryan dropped and fix our broken corporate tax system.
By suggesting that he might order a US regime-change invasion of Venezuela, President Trump has inadvertently shown why North Korea has been desperately trying to develop nuclear weapons — to serve as a deterrent or defense against one of the US national-security state's storied regime-change operations. In fact, I wouldn’t be surprised to see Venezuela and, for that matter, other Third World countries who stand up to the US Empire, also seeking to get their hands on nuclear weapons. What better way to deter a US regime-change operation against them?
Think back to the Cuban Missile Crisis. The US national-security establishment had initiated a military invasion of Cuba at the Bay of Pigs, had exhorted President Kennedy to bomb Cuba during that invasion, and then had recommended that the president implement a fraudulent pretext (i.e., Operation Northwoods) for a full-scale military invasion of Cuba.
That’s why Cuba, which had never initiated any acts of aggression against the United States, wanted Soviet nuclear missiles installed in Cuba. Cuba’s leader Fidel Castro knew that there was no way that Cuba could defeat the United States in a regular, conventional war. Everyone knows that the military establishment in the United States is so large and so powerful that it can easily smash any Third World nation, including Cuba, North Korea, Iraq, Afghanistan, and Venezuela.
Castro’s strategy worked. The Soviet nuclear missiles installed in Cuba drove Kennedy to reject the Pentagon’s and CIA’s vehement exhortations to bomb and invade Cuba. The way the Pentagon and the CIA saw the situation was that Kennedy now had his justification for effecting a violent regime-change operation in Cuba. The way Kennedy saw the situation was that a violent regime-change operation through bombing and invasion could easily result in all-out nuclear war between the United States and Russia.
It turned out that Kennedy was right. What the Pentagon and the CIA didn’t realize at the time is that Soviet commanders on the ground in Cuba had fully armed tactical nuclear weapons at their disposal and the battlefield authority to use them in the event of a US bombing or invasion of the island. If Kennedy had complied with the dictates of the Pentagon and the CIA, it is a virtual certainty that the result would have been all-out nuclear war between the Soviet Union and the United States. To his everlasting credit, Kennedy struck a deal in which he vowed that the United States would cease and desist from invading Cuba in return for the Soviet Union’s withdrawal of its nuclear missiles from Cuba.
The point is this: If the Pentagon and the CIA had not been trying to get regime-change in Cuba, Cuba would never have felt the need to get those Soviet missiles. It was the Pentagon’s and CIA’s commitment to regime change in Cuba that gave us the Cuban Missile Crisis.
Equally important, the resolution of the crisis showed that if an independent, recalcitrant Third World regime wants to protect itself from a US national-security-state regime-change operation, the best thing it can do is secure nuclear weapons. Thus, the current crisis over North Korea’s quest to get nuclear weapons to deter a US regime-change operation is rooted in how Cuba deterred the US national security establishment’s regime-change efforts in 1962.
Americans would be wise to view regime change operations in North Korea and Venezuela in the context of the US government’s overall foreign policy of military empire and interventionism.
Recall, first of all, that the US government has a long history of interventionism in Latin America, where it has brought nothing but death, destruction, suffering, misery, and tyranny. Nicaragua, Guatemala, Chile, Brazil, Panama, and Grenada come to mind.
In fact, the situation in Chile that resulted in US intervention was quite similar to today’s situation in Venezuela. In Chile, a socialist was democratically elected and began adopting socialist policies, which caused economic chaos and crisis. The CIA and Pentagon intentionally and secretly did everything they could to make matters worse. US officials even engaged in bribery, kidnapping, and assassination in Chile. They incited and encouraged a coup that succeeded in ousting the democratically elected socialist and replacing him with a “pro-capitalist” military general, whose forces proceeded to round up, kidnap, torture, rape, or execute tens of thousands of people, including the murder of two Americans, all with the support and complicity of the Pentagon and the CIA.
Haven’t we seen the same types of results with the US regime-change operations in Iraq, Afghanistan, Libya, Yemen, Syria, and elsewhere? Death, destruction, and chaos, not to mention a gigantic refugee crisis for Europe.
And look at what the pro-empire, interventionist system has done to the American people. Constant, never-ending crises and chaos, with North Korea being just the latest example. Out of control federal spending and debt that are threatening the nation with financial bankruptcy and economic and monetary crises. Totalitarian-like powers being exercised by the president and his national-security establishment, including assassination, torture, and indefinite detention. Weird, bizarre random acts of violence that reflect the same lack of regard for the sanctity of human life that US officials display in faraway countries.
None of this is necessary. It’s entirely possible for Americans to live normal, healthy, free lives. All it takes is a change of direction — one away from empire and interventionism and toward a limited-government republic and non-interventionism in the affairs of other nations. That’s the way to achieve a free, prosperous, harmonious, and friendly society.
Reprinted with permission of the author.
Suzanne Moore insists that we must discuss the gender implications of automation:
Surely there can be no discussion of neoliberalism, austerity and automation that leaves out gender.
So let us consider the gender implications of automation - it has been the most woman-liberating, pro-feminist process of the past few centuries. It is near entirely responsible for the economic equality of women that we all enjoy today. Compared to any time in the past, whatever remains of gender inequality is a mere rump, a triviality - perhaps one we should still work on, but by comparison it's tiny.
Brave and bold words, yes, but also true in two manners. The first is what Hans Rosling and Ha-Joon Chang call the "washing machine," a grab-all term for the automation of household tasks. As we've noted before we think these numbers might be a little overcooked, but at least one estimate has the time taken to run a household, internally in unpaid labour, falling from 60 hours a week a century ago to 15 now. Roombas, vacuum cleaners instead of carpet beaters, washing machines, microwaves, gas stoves instead of wood or coal stoves that must be blacked, and on and on. The largest change in working hours over this past century has been the fall in female unpaid hours inside the household.
We automated much of that household work.
The second largest change in working hours - and it is only the second largest, as leisure time has risen for both sexes over this period - has been the rise of women into the paid, market world of work. 250 years back, when the world was animal or human muscle powered, there was a natural - even if unfair, if you like - advantage that men enjoyed. Muscles were what was being hired, men had more of them, and men got the work and the higher wages for having more of what was being hired. In more technical jargon, men were more productive at the tasks of the day.
We've automated that now; there are very few of us indeed who make our living by sheer grunt, that thing where men have the advantage. Thus that discrimination has, pace whatever rump you'd like to complain about, disappeared.
Domestic automation has led to women having the time to be economically equal, automation of the world of market production has given them the means to be so.
So Huzzah! for the interaction of automation and gender then.
And that's before we even start talking about the Spinning Jenny. As Brad DeLong has pointed out to one of us, any woman you meet in literature before about 1600 is occupied with spinning thread near constantly, from Penelope (perhaps more weaving there) in the Odyssey onwards. By Jane Austen's time it simply isn't something mentioned; it has been automated. Homespun just isn't a thing any more.
Automation liberated women - let's have some more of it to liberate us all, eh?