Blogroll: Mises Institute
I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 211 posts from the blog 'Mises Institute.'
Disclaimer: Reproducing an article here does not necessarily imply agreement or endorsement!
President’s Day was this week, and there was no better way to celebrate than making the case for abolishing the position. As Ryan McMaken noted, history has vindicated the Anti-Federalists’ fear that the presidency vested too much power in the hands of a single ambitious politician. This abuse of power looks set to continue under the Trump administration, as Sean Spicer announced plans to continue the executive branch’s war on federalism. While we are unlikely to see Trump give up his new personal office in the next four years, we could settle for abolishing a number of executive agencies whose time has come. We could also do without the central banks that help empower the emperor-of-the-day; as Karl-Friedrich Israel notes, their existence owes more to power politics than to economic reasoning.
On Mises Weekends, Jeff was joined by Dr. Kevin Gutzman to discuss who may have been America’s most radical president — Thomas Jefferson. Dr. Gutzman has recently released a new book on the man considered to be one of the most libertarian individuals to assume the office, covering all the many ways in which Jefferson shaped American government, society, and higher education as we know it.
The Mises Institute is excited to be in San Diego this weekend, discussing the Strategy for Liberty. We will be joined by Patrick Byrne, Tom Woods, Michael Boldin, Nomi Prins, and many more! If you can’t attend the event in person, watch it live at Mises.org/live, or on Facebook Live.
And in case you missed any of them, here are this week's articles from Mises Wire:
- Trump to States with Recreational Pot: Drop Dead by Ryan McMaken
- The "Washington Monument Syndrome" Strikes Again as Trump Imposes Hiring Freeze by Tate Fegley
- Justice and "Social Justice" Are Two Very Different Things by Gary Galles
- Minouche Shafik: Apologist for the Experts by C.Jay Engel
- Why Government Spending Matters More than the Size of the Deficit by Frank Shostak
- Will the United States Survive to 1984? by Ralph Raico
- Mises on Political Compromise by Carmen Elena Dorobăț
- Four Agencies to Abolish along with the Dept. of Education by Ryan McMaken
- Which is Worse — A Trade Surplus or a Trade Deficit? by Antony P. Mueller
- FOMC Minutes: More of the Same by C.Jay Engel
- We Need Hope by Matthew McCaffrey
- Five Reasons for Central Banks: Are They Any Good? by Karl-Friedrich Israel
- Billions Gone: 2016 Olympics Venues in Brazil Are Now in Ruins by Alice Salles
- Alan Greenspan Admits Ron Paul Was Right About Gold by Ryan McMaken
- Break Up the USA by Llewellyn H. Rockwell Jr.
- What's In Store for the Next Four Years? by Allen Mendenhall
- Abolish the One-Man Presidency by Ryan McMaken
- Why the "Experts" Can't Agree About Fed Rate Hikes by C.Jay Engel
- The Economic Evil of Eugenics by Matthew McCaffrey
- Our Huge Hidden Tax: Government Regulations by Scott Powell
- Nietzsche and the State by David Gordon
- Democracy, the God That's Failing by Jeff Deist
- The Great Gatsby and the Fed by Louis Rouanet
In a recent television interview, Aetna CEO Mark Bertolini, head of one of America’s largest health insurers, commented that selling insurance across state lines is “an outdated concept” in these days of the Affordable Care Act (ACA). Bertolini went on to explain the rationale for his statement: “Insurance products are now tightly aligned with networks, so buying an insurance product from another state, that’s tied to a network in another state, really doesn’t work for people seeking care.”
The sale of health insurance as interstate commerce is often cited as a pillar of healthcare reform by proponents of market-based solutions. In fact, I offered up this idea in a previous article as one of the ways to return empowerment and control to Americans seeking quality, affordable healthcare in the aftermath of Obamacare. While there are a number of issues that would need to be resolved in order to make healthcare across state lines work, they are not insurmountable, nor is the concept outdated.
At one time, nearly all individual health insurance was regulated at the state level. Each set of state regulations established insurance mandates requiring plans within the state to cover a specific set of treatments. With the passage of the ACA, the federal government usurped health-insurance regulatory control from the states, making the individual mandate even more onerous. As the last year of Obamacare demonstrated, insurance mandates raise the cost of premiums. Younger, healthier individuals are forced to pay more for insurance due to mandated coverages they do not need or want. If individuals were able to purchase insurance across state lines and tailor their coverage, costs would decrease and, in time, create more competitive insurance markets. Some speculate that the interstate commerce of health insurance may even draw individuals currently enrolled in employer-sponsored plans — Aetna’s bread and butter — in favor of less expensive out-of-state individual plans. In order for any of this to occur, however, the repeal of Obamacare must return regulatory control of health insurance to the states.
Once regulatory control is returned to the states, insurers in those states could begin to craft offerings which reflect the desires of the marketplace. It’s here that Mr. Bertolini’s statement regarding provider networks comes into play. How could a woman in Oregon purchase health insurance, allowing her to see her local doctor, from an insurer in Ohio with ties to a network of Ohio doctors? The answer is: She couldn’t — for now.
Networks are established when health-insurance companies contract with healthcare providers in order to serve their policy holders. Building provider networks is a time-consuming process and will not happen overnight, but it will happen. While a nationwide solution would be ideal, it is likely that the health-insurance market would evolve slowly at first, focusing around large metropolitan areas near state lines. The proximity of eastern Pennsylvania, metro New York, and New Jersey, as well as eastern Maryland, Washington, D.C., and northern Virginia, serve as examples. The next evolution in across-state-lines health insurance would likely be the emergence of a handful of larger regional insurers offering a variety of plans across multiple states. As provider networks grow and risk pools and product offerings increase, more individual Americans will enjoy greater healthcare choice, access, and affordability.
Crossing the line with Americans’ healthcare is not for the impatient, but unlike the Edsel, disco, or rotary phones, the idea of pursuing greater market-based reforms in our healthcare system will never be outdated.
In 1850, French economist Frédéric Bastiat published an essay that is misunderstood, or more often, unread, titled, “That Which is Seen, and That Which is Not Seen.” Bastiat brilliantly illustrated, through the parable of the broken window, the destructive effects of unintended consequences that result from government intervention in the economy.
Unfortunately, because of misplaced belief in government benevolence, even the most powerful and successful members of the American citizenry often miss the point and the true magnitude of these consequences.
According to Reuters, Ramin Arani, a co-portfolio manager of the $25 billion Fidelity Puritan fund, said while discussing his current bullish stance on gold, “In terms of unpredictability, there is a tail risk with this administration that did not exist with the prior. … There is a small but present possibility that government action is going to lead to unintended consequences.”
Arani’s overall bullish stance on gold is sound. Given the political climate, gold is attractive “insurance” for equity exposure. The problem doesn’t lie in his financial analysis, but rather in the seemingly innocuous comment that followed.
“There is a small but present possibility that government action is going to lead to unintended consequences.”
To suggest the chances of unintended consequences are merely “small” is extremely naïve.
Beyond the familiar examples of government action leading to unintended consequences (minimum wage laws, rent control, Social Security, and the disastrous war on drugs, to name a few), there are countless others that should resonate with a multi-billion-dollar portfolio manager. Yet they seem to have fallen on deaf ears.

Unintended Consequences of Gold Confiscation
Besides being theft, gold confiscation didn't work. The price of gold was increased from $20.67 to $35.00 per ounce, a 69% increase, but the domestic price level increased only 7% between 1933 and 1934, and over the rest of the decade it hardly increased at all. FDR’s devaluation provoked retaliation by other countries, further strangling international trade and throwing the world's economies further into depression.
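The devaluation arithmetic cited above can be checked directly. A minimal sketch, using only the two statutory gold prices given in the passage:

```python
# Verify the FDR-era gold revaluation figure quoted above.
old_price = 20.67   # dollars per ounce, pre-1933 statutory price
new_price = 35.00   # dollars per ounce, set in 1934

devaluation = (new_price - old_price) / old_price
print(f"Gold price increase: {devaluation:.1%}")  # roughly 69%
```

This matches the 69% increase cited in the text, against which the mere 7% rise in the domestic price level looks strikingly small.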
Looking for government action that led to the unintended consequence of literally worsening the worst depression in world history? Check.

Unintended Consequences of the Community Reinvestment Act
In 1977, Congress passed a piece of legislation called the Community Reinvestment Act. The evolution of this act played a significant role in establishing the lowered lending standards that contributed to the 2008 housing crisis. Combined with the Federal Reserve artificially lowering interest rates, Fannie Mae and Freddie Mac taking on the “philanthropic” effort of improving homeownership among low- and middle-class families, and many other factors, the unintended consequences of government action raised the rate of foreclosure by 225% from 2006 to 2009.
Looking for government action that led to the unintended consequence of close to a million American families losing their homes? Check.

Unintended Consequences of the Affordable Care Act
The first half of Arani’s statement speaks to rising unpredictability under the Trump administration relative to the Obama administration. It has been barely a month since President Trump was inaugurated, but one would be remiss to speak of the Obama administration as if it were a bastion of predictability.
Without examining the disparity between Obama’s foreign policy campaign rhetoric and his unpredictable drone-happy administration, there is a glaring example of an unintended but extremely foreseeable consequence stemming from his signature health care law.
In September 2013, President Obama said the following in a speech on the Affordable Care Act:
In the United States of America, health care is not a privilege for the fortunate few — it is a right. And I knew that if we didn’t do something about our unfair and inefficient health care system, it would keep driving up our deficits, it would keep burdening our businesses, it would keep hurting our families, and it would keep holding back economic growth.
The most predictable consequences of passing the Affordable Care Act came to the surface: a large spike in premiums, increased taxes, millions of Americans losing their plans, and job losses, just to name a few.
Looking for government action that led to literally 100 unintended consequences throughout the health care system? Check.
Mr. Arani is correct about gold, but to minimize the severity and predictability of unintended consequences brought on by government action as a “small but present possibility” is disingenuous.
In the days following the 2016 election, there were already worrying signs that the Trump administration didn't merely view the War on Drugs as a useful source of rhetoric to please some Conservatives. With the appointment of Jeff Sessions — who appears to be a true believer in the War on Drugs — the threat to federalism, states' rights, and local control was all too real.
The fears continue to be stoked by the administration itself; yesterday, White House spokesman Sean Spicer announced that "I do believe that you'll see greater enforcement of [federal law against marijuana]."
So, in an administration where Trump's promised health care reforms are anything but a done deal — and which is plagued with leaks and conflict with the US intelligence establishment — Spicer suggests the administration has enough extra time to ramp up prosecutions of American citizens for smoking a joint. The fact that 81 percent of all drug arrests are for simple possession means that yes, increasing federal enforcement is about arresting and prosecuting small-time users.
Spicer justifies this with the well-worn claim often made by Conservatives that "There is still a federal law that we need to abide by ... when it comes to recreational marijuana and other drugs of that nature."
At the core of this statement is the same hypocrisy that infects the entire right wing on the Drug War issue.
Conservatives like to talk a good game about states' rights and local control when it comes to issues like gun laws and Obamacare, but federalism and the Constitution go right out the window on the drug issue.
This has long been obvious, and it was solidified in federal court when Trump's nominee to head the EPA, Scott Pruitt, sued Colorado while he was attorney general of Oklahoma. Pruitt and the GOP attorney general of Nebraska both attempted to get the federal court to render Colorado's drug laws null and void — which would have essentially destroyed what's left of federalism and states' rights down to its foundations. Pruitt, however, was making this argument at the very same time he was arguing that the states had the right to override Obamacare mandates.
But the hypocrisy does not stop there. Conservatives love to talk about following the "original intent" of the US Constitution and demanding the federal government do nothing that is not authorized by the Constitution. That, of course, is then conveniently forgotten on the drug issue.
Although Sean Spicer certainly won't admit it, the "federal law we need to abide by" is not some federal statute passed by Congress about drugs. The law we need to abide by is found in the US Constitution — specifically the Tenth Amendment — where it clearly states that "The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people."
So does the Constitution delegate to the United States government the power to regulate what sort of plants people eat, smoke, or grow? Here's a hint: No, it doesn't.
This refrain of Drug Warriors that those who don't like the Drug War need to "change the law" before they can complain requires a willful ignorance of the law contained in the US Constitution itself.
Indeed, in more honest times, everyone knew the Constitution did not allow federal control of such matters, which is why nearly everyone accepted that a constitutional amendment was necessary to authorize federal prohibition of alcohol. It was only later that politicians realized they could just forget about all that Constitution stuff and pass federal statutes banning various substances at will.
Of course even if the Constitution did authorize such things, it would be worthy of being ignored, just as federal laws and Constitutional provisions protecting slavery were always worthless and should have been ignored by everyone everywhere.
Spicer then went on to make other fact-free claims in his attempt to connect marijuana use to recent surges in opioid deaths. Lizzy Acker in The Oregonian reports:
"I think that when you see something like the opioid addiction crisis blossoming in so many states around this country," Spicer said, "the last thing that we should be doing is encouraging people."
Though Spicer drew a connection between opioid use and marijuana, there is no known connection between the two. According to the Centers for Disease Control, in 2015 more than 33,000 people died from opioid overdoses, which includes both heroin and prescription painkillers, "more than any year on record."
The CDC reported that "nearly half of all opioid overdose deaths involve a prescription opioid."
Marijuana overdoses account for no deaths, according to the Drug Enforcement Administration. In fact, a study reported in "Time" in 2016, said that "when states legalized medical marijuana, prescriptions dropped significantly for painkillers."
As Mark Thornton shows, the problem of opioid deaths can be traced back to the mainstream medical profession's frequent use of prescription painkillers, and has nothing at all to do with marijuana:
One class of prescription drugs is directly related to the heroin epidemic, on which I have recently reported. To recap, drug companies that make opiate pain killers have influenced the American Academy of Pain Medicine to change their guidelines for prescribing pain killers. The changes in the guidelines have made it much more likely for doctors to prescribe pain killing opiate drugs such as Oxycontin and Vicodin for things like ordinary injuries and surgeries. The DEA, FDA, and the AMA monitor prescribing behavior of doctors, so they are more likely to follow such guidelines to avoid risk of sanction.
These drugs are highly effective for pain, but can be addictive and deadly themselves (16,000 deaths in 2015 alone). When the injuries heal, addicted patients can no longer get refills for the drugs. For those who have become addicted their choices are going cold turkey, enter an addiction treatment program, or obtain the drugs on the black market. In other words, they have no good choices.
And, while Spicer suggests arresting some pot users might somehow miraculously do something to cut down opioid use, the FDA is approving opioid use for 11- to 16-year-olds, thus encouraging greater use among children. If the Trump administration is in the mood to crack down on somebody connected to the opioid addiction problem, there's no need to go out to Colorado or Oregon to do it. Trump can just drive over to the FDA headquarters in Maryland.
And finally, this is just the latest indication that the Trump administration's priorities are not where they need to be. Earlier this month, David Stockman complained that Trump is letting himself get sidetracked from the important business of freeing up the economy. Stockman was apparently more right than he knew.
When asked about drug issues in far-off states that have legalized recreational marijuana, Spicer could have simply said "we're concentrating on repealing Obamacare right now" or "we're really focused on helping small business people make a living" or "we're focused on finding peaceful solutions to pressing international issues right now, as in Syria." All of those issues require immense focus, time and effort from Trump himself and his advisors. But no, the administration decided to declare war on seven US states instead.
There are only so many hours in the day. Trump might want to take a closer look at how he uses them.
Dr. Kevin Gutzman is a history professor at Western Connecticut State University, a New York Times best-selling author, and one of the leading Constitutional scholars in the country today. He and Jeff talk about his new book, Thomas Jefferson—Revolutionary: A Radical's Struggle to Remake America. Dr. Gutzman discusses some of the overlooked ways Jefferson shaped America, and how his radical views are often underplayed by many academics today. Jefferson’s views on self-governance, freedom of conscience, and rejection of centralized control made him perhaps the most libertarian Founding Father — one whose ideas are still relevant today.
Recently, Harvard political theorist Danielle Allen wrote in the Washington Post of “The most important phrase in the Pledge of Allegiance” — “with liberty and justice for all.”
Allen recognized that justice required “equality before the law” and that freedom exists “only when it is for everyone.” But she confused democracy — defined as progressives “build[ing] a distributed majority across the country, as is needed for electoral college victory” — with liberty, which is very different. Similarly, she replaced the traditional meaning of justice (“giving each his own,” according to Cicero) with a version of “social justice” inconsistent with it. And her two primary examples of rights — “rights” to education and health care — were inconsistent with both liberty for all and justice for all.
Americans cannot have both liberty and this type of social justice — under whose aegis one can assert rights to be provided education and health care, not to mention food, housing, etc. Positive rights to receive such things, absent an obligation to earn them, must violate others’ liberty, because a government must take citizens’ resources without their consent to fund them. Providing such government benefits for some forcibly violates others’ rights to themselves and their property.
The only justice that can be “for all” involves defending negative rights — prohibitions laid out against others, especially the government, to prevent unwanted intrusions — not rights to be given things. Further, only such justice can be reconciled with liberty “for all.” That is why negative rights are what the Declaration of Independence and the Constitution, especially the Bill of Rights, were intended to protect. But those foundational freedoms continue to be eroded by the ongoing search to invent ever-more positive rights.
Echoing John Locke, The Declaration of Independence asserts that all have unalienable rights, including liberty, and that government’s central purpose is to defend those negative rights. Each citizen can enjoy them without infringing on anyone else’s rights, because they impose on others only the obligation not to invade or interfere. But when the government creates new positive rights — which require extracting resources from others — these new “rights” violate others’ true unalienable rights. In other words, people recognize these positive rights as theft except when the government does it.
Almost all of Americans’ rights laid out in the Constitution are protections against government abuse. The preamble makes that clear, as does the enumeration of the limited powers granted to the federal government. That is reinforced by explicit descriptions of some powers not given, particularly in the Bill of Rights, whose negative rights Justice Hugo Black called the “Thou Shalt Nots.” Even the Bill of Rights’ central positive right — to a jury trial — is largely to defend innocent citizens’ negative rights against being railroaded by government. And the 9th and 10th Amendments leave no doubt that all rights not expressly delegated to the federal government (including health care and education) are retained by the states or the people.
Liberty means I rule myself, protected by my negative rights, and voluntary agreements are the means of resolving conflict. In contrast, assigning positive rights to others means someone else rules over the choices and resources taken from me. But since no one has the right to rob me, they cannot delegate such a right to the government to force me to provide resources it wishes to give to others, even if by majority vote. For our government to remain within its delegated authority, reflecting the consent of the governed expressed in “the highest law of the land,” it can only enforce negative rights.
Our country was founded on unalienable rights, not rights granted by Washington. That means government has no legitimate power to take them away. However, as people have discovered ever-more things they want others to pay for, and manipulated the language of rights to create popular support, our government has increasingly turned to violating the rights it was instituted to defend. And there is no way to square such coercive “social justice” with “liberty and justice for all.”
On Monday, January 23rd, President Trump announced a hiring freeze on federal workers, with an exception for jobs that are ostensibly related to national security or public safety. The exception includes uniformed military personnel, but does not include the military’s civilian support staff, which already numbers around 750,000.
This has led to something of a brouhaha regarding two major US Army bases — one at Fort Knox, Kentucky, and the other in Wiesbaden, Germany — that have sent out memos to soldiers informing them that child care services would no longer be provided by the Army, specifically claiming that the closure is due to the hiring freeze. On its face, such a cut seems like a great burden to put on military families and may lead one to question whether this hiring freeze is really worth it. And that is precisely the intention.

The First Rule of Budget Cuts: Cut the Most Popular Programs First
If one takes any time at all to consider the options Army bases have for cutting expenses, it requires an extreme form of naiveté to think that they had no choice but to cut child care. Don’t just take my word for it: go to the Garrison Wiesbaden website and look at the recreational opportunities that remain funded. A family on base may no longer receive complimentary child care, but they can attend Zumba, yoga, German, English as a Second Language, and money management classes (to name but a few) for free. And the kids aren’t left out either. Every Friday they can still attend “Play Morning” where they can interact with other children while their parents learn about child development and parenting skills.

The Washington Monument Syndrome
Of course, cutting any of these other programs would not have been newsworthy nor caused much outrage, as it’s harder to feel much sympathy for someone losing access to their Zumba classes provided at taxpayer expense. Rather, the decision to cut child care is simply the latest example of a long-running phenomenon called the “Washington Monument Syndrome,” in which government bureaucrats facing pressure to cut their budgets choose to eliminate the most visible and/or popular services they provide in an effort to create a public outcry.
We saw this during the 2013 budget sequestration when the National Park Service apparently had no choice but to turn away park visitors (how much money this saved is anyone’s guess), but did have the spare money to create large metal signs to inform frustrated travelers that it was the sequestration’s fault. At the same time, the Department of Homeland Security, knowing that outright abolition of the Transportation Security Administration would likely be an extremely popular move, threatened to make travelers wait for hours to get through airport security, no doubt in order to convince them of the horrors of budget cuts.
There are of course the more mundane examples at the local level. For whatever reason, when budget cuts are on the way, teachers seem to be on the chopping block before any administrator is. Some police officers are apparently so confident that the public will support them no matter what kind of job they do, that they openly advertise on billboards how many homicides they failed to prevent, in the hope that the public will be convinced that their benefits should not be cut. The almost universally adored fireman is also such a popular target for budget cuts that an alternative name for the Washington Monument Syndrome is the “firemen first principle.”
Don’t be fooled. In times of fiscal crisis, you would never expect a bureaucrat to consider it his patriotic duty to resign and find work in the private sector in order to decrease the fiscal burden on his countrymen. So why would you expect him to quietly cut the fat from his budget when there are opportunities to stoke pity or outrage among the public?
Minouche Shafik of the Bank of England recently spoke to the Oxford Union in defense of the monetary Experts. The “Experts,” she pointed out, “have come in for a great deal of criticism of late.” She suggests this phenomenon may have something to do with the 2008 financial crisis. She also mentions various currency manipulation and interest rate scandals as possible motivations for public outrage. We applaud her keen insight.
However, she warns, it was due to the Experts that we have “gained about 20 years of life expectancy since 1950,” essentially eradicated polio, seen massive increases in world incomes, experienced a plunge in global poverty, and so on. She also brings up sanitation, roads, and education. Thus, it shouldn’t be surprising that so many decisions have been delegated to experts. Even Caesar (that bastion of freedom) turned to the experts to help him manage the empire.
More specifically to monetary problems, we learn that governments have created independent central banks full of Experts to decide on monetary policy. This was to protect the monetary policy decision making from the influence of politics. Politicians couldn’t adequately run an effective monetary system, so they outsourced it to the Experts. Seriously.
Of course, there’s no mention of how the “experts” got it wrong in 2008, or why we should keep trusting them. We do get some dismissal that the whole thing was a simple “failure of collective imagination of many bright people… to understand the risks to the system.” Presumably, these “bright people” are the experts and one wonders how self-blinded they are to overlook the fact that they caused the very risks they don’t even understand!
The lesson we simpletons are to take from all this is that the experts have everything under control. They are the ones “who sift through all the information and make informed judgments," according to Shafik. We just need to trust them, to keep the faith. Sort of like one of those “let go and let God” kind of things, except the god in this case would be the Experts.
Now, if the reader thinks referring to this special class of officials as “The Experts” is a little creepy, the entire tone of the speech reads the same. She has a self-labeled “agenda” to communicate and speak to the frustrations of the masses in a way that will make them more trusting of what the experts have in store. On one hand, it's the same ancient need of the regime to maintain control via propaganda. But on a more optimistic note, perhaps these speeches are signs of a concerned regime that is aware of an angry populace.
We don't want their expertise, thank you very much. In the words of Mises:
There is no other planning for freedom and general welfare than to let the market system work. There is no other means to attain full employment, rising real wage rates and a high standard of living for the common man than private initiative and free enterprise.
Budget deficits are often in the media spotlight. The budget deficit is defined as the difference between what the government spends and what the government collects. When the government spends more than it collects, a budget deficit exists. When the government collects more than it spends, a budget surplus emerges.
The conventional view is that one can show that budget deficits reduce national saving. National saving is typically defined as the sum of private saving (the after-tax income that households save rather than consume) and public saving. When the government runs a budget deficit, public saving is negative, which reduces national saving below private saving.
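The accounting identity described above can be made concrete. A minimal sketch, with the function name and the illustrative figures being my own hypothetical choices rather than anything from the article:

```python
# National saving = private saving + public saving,
# where public saving = tax receipts - government spending
# (negative whenever the budget is in deficit).

def national_saving(private_saving: float, taxes: float, spending: float) -> float:
    public_saving = taxes - spending  # negative in a deficit year
    return private_saving + public_saving

# Hypothetical numbers (in billions): a 100B deficit pulls national
# saving 100B below private saving, as the conventional view holds.
print(national_saving(private_saving=500, taxes=300, spending=400))  # 400
```

On these assumed figures, public saving is -100, so national saving (400) falls short of private saving (500) by exactly the size of the deficit.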
By generating surpluses, so it would appear, the government creates real wealth, thereby strengthening the economy’s fundamentals. This argument would be correct if government activities were of a wealth-generating nature.

Government Spending Doesn't Create Wealth
This is, however, not the case. Government activities are confined to the redistribution of real wealth from wealth generators to wealth consumers. Government activities result in taking wealth from one person and channeling it to another.
Various impressive projects that the government undertakes also fall into the category of wealth redistribution. The fact that the private sector didn’t undertake these projects indicates that they are low on the priority list of consumers.
Given the state of the pool of real wealth the implementation of these projects will undermine the well-being of individuals since they will be introduced at the expense of projects that are higher on the priority list of consumers.
Let us assume that the government decides to build a pyramid that most people regard as low priority. The people who will be employed on this project must be given access to various goods and services to sustain their lives and well-being.
Since the government is not a wealth producer it would have to impose taxes on wealth producers (those individuals who produce goods and services in accordance with consumers’ most important priorities) in order to support the building of a pyramid.
Whenever wealth producers exchange their products with each other, the exchange is voluntary. Every producer exchanges goods in his possession for goods that he believes will raise his living standard.
The crux, therefore, is that the exchange or the trade must be free and thus reflective of individuals’ priorities. Government taxes are, however, of a coercive nature: they force producers to part with their wealth in exchange for an unwanted pyramid. This implies that producers are forced to exchange more for less, and this obviously impairs their well-being.
The more pyramid-building the government undertakes, the more real wealth is taken away from wealth producers. We can thus infer that the level of tax, i.e., real wealth, taken from the private sector is directly determined by the size of government activities.
Observe that, being a wealth consumer, the government cannot contribute to savings and to the pool of real wealth. Moreover, if government activities could generate wealth, they would be self-funding and would require no support from wealth generators; the issue of taxes would never arise.

The Effects of Surpluses on Inflation and the Money Supply
The essence of our previous analysis is not altered by the introduction of money. In the money economy the government will tax (take money from wealth generators) and disburse the received money to various individuals that are employed directly or indirectly by the government.
This money will give these individuals access to the pool of real wealth that is the total stock of goods and services. Government-employed individuals are now able to exchange the taxed money for various goods and services that are required to improve their lives.
What then is the meaning of a budget surplus in a money economy? It basically means that the government’s inflow of money exceeds its expenditure of money. The budget surplus here is just a monetary surplus. The emergence of a surplus produces the same effect as any tight monetary policy.
On this Ludwig von Mises wrote,
Now, restriction of government expenditure may be certainly a good thing. But it does not provide the funds a government needs for a later expansion of its expenditure. An individual may conduct his affairs in this way. He may accumulate savings when his income is high and spend them later when his income drops. But it is different with a nation or all nations together. The treasury may hoard a part of the lavish revenue from taxes, which flows into the public exchequer as a result of the boom. As far and as long as it withholds these funds from circulation, its policy is really deflationary and contra-cyclical and may to this extent weaken the boom created by credit expansion. But when these funds are spent again, they alter the money relation and create a cash-induced tendency toward a drop in the monetary unit's purchasing power. By no means can these funds provide the capital goods required for the execution of the shelved public works.1

Government Spending — Not Surpluses and Deficits — Is What Matters Most
Thinking that government spending is itself a wealth generator, some will argue that the proper response to a government surplus is not to reduce spending but simply to reduce taxes. But a budget surplus — i.e., a monetary surplus — does not "make room" for lower taxes. Only if real government outlays are curtailed (i.e., only when the government cuts the number of pyramids it plans to build) can taxes effectively be lowered. Lower government outlays imply that wealth generators will now have a larger portion of the pool of real wealth at their disposal.
On the other hand, if government outlays continue to increase, notwithstanding budget surpluses, no effective tax reduction is possible; the share of the pool of real wealth at the disposal of wealth producers will diminish.
For example, if government outlays are $3 trillion and the government revenue is $2 trillion then the government will have a deficit of $1 trillion. Since government outlays have to be funded this means that the government would have to secure some other sources of funding such as borrowing, printing money or new forms of taxes. The government will employ all sorts of means to obtain resources from wealth generators to support its activities.
What matters here is that government outlays are $3 trillion, not that the deficit is $1 trillion. For instance, if government revenue on account of higher taxes were $3 trillion then we would have a balanced budget. But would this alter the fact that the government still takes $3 trillion of resources from wealth generators?

We Must Build Wealth Before We Can Spend It
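The arithmetic of this point can be put in a minimal sketch (the function name and structure are my own illustration, not the author's):

```python
# Sketch of the article's point: the resources the government commands
# equal its outlays, regardless of how the gap is financed.
# Figures are the article's hypothetical numbers, in trillions of dollars.

def government_claim(outlays, revenue):
    """Return (deficit, resources_taken): the deficit is outlays minus
    revenue, but the claim on wealth generators is the outlays themselves."""
    deficit = outlays - revenue
    return deficit, outlays

# Deficit case: $3T outlays, $2T revenue -> $1T deficit, $3T claimed.
print(government_claim(3.0, 2.0))   # (1.0, 3.0)

# Balanced budget via higher taxes: revenue raised to $3T.
# The deficit vanishes, yet the $3T claim on resources is unchanged.
print(government_claim(3.0, 3.0))   # (0.0, 3.0)
```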
The critics of smaller government will object that the private sector cannot be trusted to build up and enhance the nation’s infrastructure. For instance, the US urgently requires the building and upgrading of bridges and roads.
There is no doubt that this is the case. However, can Americans afford the improvement of the infrastructure? The arbiter here should be the free market where individuals, by buying or abstaining from buying, decide on the type of infrastructure that is going to emerge.
If the size of the pool of real wealth is not adequate to afford better infrastructure then time is needed to accumulate real wealth to be able to secure better infrastructure. The build-up of the pool of real wealth cannot be made faster by raising government outlays. As we have seen, an increase in government spending will only weaken the pool of real wealth.
The government can force various non-market chosen projects. The government, however, cannot make these projects viable. As time goes by, the burden that these projects impose on the economy through higher ongoing levels of taxes will undermine the well-being of individuals and make these projects even more of a burden.

Spending Reductions Must Come With Tax Cuts
What about the lowering of taxes on businesses – surely this will give a boost to capital investment and strengthen the process of real wealth formation? This is what President Trump is rumored to be considering. As long as this lowering of taxes is not matched by a reduction in government spending, it will encourage a misallocation of capital.
The emerging budget deficit is going to be funded either by borrowings or by monetary pumping. Obviously, this amounts to the diversion of real wealth from wealth generating activities to non-wealth generating activities. Various capital projects that emerge on the back of such government policy are likely to be the equivalent of useless pyramids.
We have seen that one of the ways of securing the necessary funds by the government is by means of borrowing. But how can this be?
A borrower must be a wealth generator in order to be able to repay the principal plus interest. This is, however, not the case as far as the government is concerned, for government is not a wealth generator – it only consumes wealth.
So how then can the government as a borrower, producing no real wealth, ever repay its debt? The only way it can do this is by borrowing again from the same lender – the wealth-generating private sector. It amounts to a process whereby government borrows from you in order to repay you.
We can conclude that the only meaningful contribution the government can make to the pool of real wealth, and hence people’s living standards, is by focusing on a reduction in real outlays – not whether there is a surplus or a deficit. This in turn means the government must remove itself from business activities and permit wealth generators to get on with the business of wealth generation.
- 1. Ludwig von Mises, Human Action, Scholars Edition, chapter 31, p. 793.
The title of this talk, as some of you will know, is taken from a recent book by the heroic Russian dissident intellectual, Andrei Amalrik, Will the Soviet Union Survive to 1984?1 What is implied is not that things will suddenly go kaput in 1984 — that would be too much of a coincidence — but that, in terms of the present discussion, for the next generation and the foreseeable future a pessimistic prognosis is in order; to put it briefly, that it is fundamentally over with the noble American experiment. What I have in mind by this is that element in the make-up of the United States which Lord Acton referred to when he said, of the cause of liberty in mid-18th century Europe:
Europe seemed incapable of becoming the home of free states. It was from America that the plain idea that men ought to mind their own business, and that the nation is responsible to Heaven for the acts of state, burst forth like a conqueror upon the world they were destined to transform under the title of the Rights of Man.
There have been, God knows, other features in the history of the United States, but there was always this also, and it has continued to serve as a reference point and refuge for the libertarian-minded in each generation; it has served to keep the United States a relatively free country.
It is the possibility of the continuation of this ideal in any meaningful sense that I am pessimistic about. This is not to say that there are not contrary signs — very important is that it seems improbable that the United States will be able to engage in more far-away military actions such as the Indo-China War for a couple of decades; to that extent, American imperialism has become politically difficult and will for a time have to retrench. So the signs are somewhat ambiguous. But considering the domestic political situation — although it would provide a contemporary H.L. Mencken with material for unending satire on all fronts — from the point of view of libertarianism, there is little cause for anything but pessimism, in regard to the Left, to the Right and to the Middle.
At first, it seemed as if the New Left held out a good deal of promise to a libertarian, particularly in the student movement. It was anti-authoritarian and individualistic; much of its background was provided by the '50's popular sociological works such as Whyte's Organization Man and Riesman's Lonely Crowd, which were basically individualistic and communicated the fear of uniformity and control of individuals by the sheer mindless weight of crowds. It was pacifistic. Not that pacifism is an unconditional principle; but in view of the propensity of governments to go to war at the drop of a hat (think of the thousands and millions who have died for "essential" national causes in the Crimea and against the Boers, in Flanders fields and in Viet Nam), pacifism is quite a good rule of thumb. The slogan of this earlier New Left was that everyone should be left to do his own thing — and with all the instant obsolescence of the ideas and catchphrases of the '60's, it would be good if this one somehow survived. Essentially, this is what Hayek was talking about in his Constitution of Liberty, although he phrased it somewhat differently.
But although a libertarian such as Thoreau was a major hero of the New Left in its earlier phase, the radical students seem never to have taken to heart the libertarian spirit that Thoreau showed in such a passage from Civil Disobedience, as the following:
the people must have some complicated machinery or other, and hear its din, to satisfy the idea of government which they have. Governments show thus how successfully men can be imposed on. It is excellent, we must all allow. Yet this government never of itself furthered any enterprise, but by the alacrity with which it got out of its way. It does not keep the country free. It does not settle the West. It does not educate. The character inherent in the American people has done all that has been accomplished; and it would have done somewhat more, if the government had not sometimes got in its way.
Instead, the leftist students have become statists, and Marxist statists for the most part. The enemy for them is not primarily the State, or those who make use of the State to give them an advantage over others in the give-and-take of the market, but the market and the private sector themselves. Here they simply take over and even exaggerate the ideas of the Old Left — on the omnipotence of advertising, the shamelessness of searching for a profit, the evil inherent in exchanging goods rather than giving them away, all the tedious Galbraithian pseudo-wit on consumer habits — adding the ecology business. Almost all bright students one runs across are against commerce and the principle of exchange; it often takes real courage to defend these things against the astonishing aggressiveness of many of the leftist students (more than against their professors, who to some degree continue to respect argumentation).
This was, however, more or less to be expected. Marxism is the most thoroughly elaborated left-wing ideology, and it comes — one might almost say — like second-nature to anyone who begins reflecting on social affairs from an unorthodox point of view. Once set out on the waters of opposition to "the System," the radical students wound up with Marxism almost inevitably.
This has been enormously helped along by the simultaneous emergence to wide-spread public notice of a "new sort" of Marxism — that is based on the idea of "alienation," an idea which, by its vagueness and utopianism seems to have been tailor-made for the new leftists. Everything unpleasant that exists — from political apathy to sexual frustration, from the slightest inconvenience to the limiting conditions of life itself (for instance, that men must work to live) — can be ascribed to the "alienation" caused by capitalism and the class society.
The early emphasis of the New Left on individualism has been eclipsed by a concern for "community." As Richard Zorza, a student who has written about the Harvard strike of a couple of years ago, states: the important thing is "the right to say 'we'; that right is more precious than all others to this confronted generation. It is a right that gives us an identity and allows us dignity." In a sense, of course, there is nothing in this incompatible with individualistic libertarianism, the essence of which is anti-statism, not anti-societism. But one suspects that with many young leftists attracted by the "we" idea, it will ultimately be the political community, backed by force and revolutionary ideology, to which they will look to realize the "right to say 'we'."
Very often, the young New Left is much worse than the old line leftists were and are. A good example is a recent book by the husband of Joan Baez, David Harris; the book is titled Goliath, and is now in paperback. (Since Soul on Ice has sold two million copies — in our repressive fascist society — it is not unlikely that Harris's book will sell at least a half million copies — although Harris was imprisoned for a less interesting crime than Cleaver's, and does not benefit from that really great name.) In Goliath, in the chapter on "The Myth of Property," we read of the dichotomy of property and need:
Property dictates that we pursue ownership. The pursuit of need follows a logic of use. When you need something, you use it. Instead of using things, America demands they be controlled. ... In fact, the person who owns a resource owns the lives of all who must use it. ... If the world were shaped according to our need of it, then its resources would be available as a function of that need: the hungry would eat, the homeless would build homes. (italics added)
And when Harris states that: "Property is the negation of the common conglomerate man," I think I can detect a strong direct or indirect influence from the Economic and Philosophical Manuscripts of 1844.
With this sort of mentality there is obviously no arguing, as there was with the old-line leftists. Property — whether private or public — is the right to dispose of things, generally material things. How the things of the world could conceivably be left up to the claims of needs is not broached — the very suggestion that a problem might exist is disgusting and unmasks the questioner as one who does not share in the mystical community of the well-intentioned. I would guess that there are not ten facts — ten hard facts — that Harris knows about the American economy, or capitalism at large. This sort of work — and it's an example in print of the kind of mindless thought that goes on among millions — is based simply on the undigested and unexamined personal impressions of the wide world on the part of the "thinker." The pathetic thing is that the model for all of economic activity — for all the things that keep people alive and keep them above the animal level — is the friendly communal passing of a joint. People like this want to legislate for the nations of the world.
A similar sort of thoughtlessness has become increasingly evident among churchmen, more recently among the Catholic clergy. Their social position, of course, has become difficult in the extreme; no one takes their religious claims seriously anymore, not even they themselves for the most part. The rationalization of the world, in Weber's sense, has proceeded too far for that, for good or bad. But the momentum of ecclesiastical institutions within society naturally continues, at least for the time being; such social formations do not suddenly disappear, simply because the ground has been taken from under them. Moreover, there are vast endowments and thousands of sinecures involved. So, deprived of their traditional raison d'être, the clergy increasingly take to politics; here they carry over the mental habits of dogmatism, contempt for rational discussion and moral authoritarianism that have always marked them as a class. Their commanding instincts are paternalistic — thus, it is only natural that their concern, as far as domestic politics go, is directed to taking the under-privileged under their wing. They are sure to prove particularly useful to the leftist mainstream in connection with the creation and dissemination of guilt, guilt for the possession of any wealth above some undefined "decent" minimum. Past masters in the art of generating guilt, the clergy will be valuable allies in the cause of transforming the traditional American pride in economic success into an anxiety-ridden defensiveness. Thus, the Christian churches, which entered modern history as the "desperate foes" of the free society — in Lord Acton's phrase — will be able to close their eyes happy in the knowledge that their old enemy has not survived them.
The out-and-out revolutionaries, the violent ones, are even worse than the people I have been considering. It is difficult not to put a bad interpretation on Bernadine Dohrn's praise of Charles Manson (whom she, and the other people of the "Movement" who consider him to be a hero, assumed to be guilty of the Tate murders). It is clear enough at this point that those who call for a violent revolution in the United States at this time do not have the interests of the American people at heart, whatever else they may be aiming at. In this connection, I think it's a good thing that my friend Murray Rothbard has given up his support of revolution. Considering the state of opinion among would-be revolutionaries, and their only potential mass-base — liberal arts students and the black under-class — there is not the slightest chance for a libertarian-oriented revolution in the foreseeable future.
(This is not to say that because a revolution is an illusion, then every law is sacred; there are obvious cases, such as Selective Service, where the existence of the law constitutes no presumption whatsoever that the law should be obeyed.)
But everywhere the radical Left is in disintegration and disarray. The gushing of the totalitarian romantics over the Cuban experiment is drying up, since it is increasingly difficult to deny its failure even in its own terms; and it doesn't seem likely that anyone on the Left will be able to secrete much enthusiasm for the incipient white-collar and university-professor dictatorship in Chile.
By and large, it appears that the American leftist radical movement will evaporate simply into a re-enforcement of the old New Deal demand for "more programs" to deal with our problems. This will become the "compromise" solution that will demonstrate the "responsibility" of those who will still feel entitled to call themselves "radicals" simply because they push for more and larger budgets, and never let up on the hysterical note of the pain and suffering existing side by side with color TV's, motor boats, etc., etc. (The logical implication of this line of thought — of emotion, really — is that an American auto-mechanic ought to feel guilty about drinking a beer, when there are Pakistani children who go without milk, and are actually starving to death. But the people I'm talking about never carry the argument beyond a step or two. Actually, there is no argument involved here at all.)
This accommodation with New Dealism has already occurred in the case of Charles A. Reich, the celebrated author of The Greening of America, who recently wrote in the New York Times Op-Ed section:
The first affirmative requirement of a new society is a system of planning, allocation and design. Today there is no control over what any organization may invest, produce, use up, distribute. The need for planning has been obvious since before the New Deal, but we have refused to see it. It is time for us to grow up to acknowledge that the great forces of technology cannot be left the playthings of corporate expansionism and personal ambition.
Reich adds, however, that: "Planning cannot be left to the planners," and goes on to recommend a sort of long-range comprehensive planning that will still be compatible with immediate, direct decision-making by people in small groups. How these two ideals would be reconciled doesn't begin to be a problem for Reich.
Similarly, Jack Newfield of The Village Voice, writing on "The Death of Liberalism" in the current Playboy, finds the "remedies" that "are as obvious as they are radical in Galbraith's concise and precise words," which are that "the cures lie in 'taxing the rich, regulating private enterprise and redeeming power and policy from military and civilian bureaucracy.'" (This last — the attack on bureaucracy — is Galbraith's concession to the Zeitgeist.) This is pretty much a perfect example of what I'm talking about: a self-styled radical, writing on the "Death of Liberalism," finds the cures for our problems to lie in the thought of — of all people — the old ADA'er and hater of the world of private relations!
Having accommodated themselves to the great statist mainstream of the twentieth century, these "radicals" will find their natural candidate in Ted Kennedy; all but the most "alienated" will flock to him. The Kennedy administration, if it comes into being, would probably enact a system of National service for young men — Kennedy's stated goal. Possibly the system will be extended to young women, as well; just as there are Gay Liberationists who demand the "right" to serve in the People's Revolutionary Army, or even the right to be drafted into the State's army, it may be possible for the image-builders to present the conscription of young women as a concession to Women's Lib. Thus, a labor draft for people in their late teens and early twenties — conscription into social work, nursing, as forest rangers and to eradicate oil slicks. It's hard to think what could be a more ironic ending for a movement that began by invoking Thoreau and the sacredness of the individual personality.
If we consider the Right in America, the prospects are about as dim. In regard to the mass-base, there is still some residual hope in the rhetorical opposition to Big Government and High Taxes on the part of sections of the middle and upper-middle classes. But the Rightist mass-base that the author of the Emerging Republican Majority — and those who, before the last election, listened to him — pins his hopes upon, is obviously useless to libertarians, or pretty much so. A possible ray of hope here is provided by the fact that the Catholic ethnic groups which are to furnish a crucial part of this majority do not, for the most part, share the ethic of service and moral uplift that is so important to mainstream left-liberalism. (As Edward Banfield points out, in The Unheavenly City, this is largely a WASP and Jewish trait in American politics.) It may be that these ethnic groups are more healthily materialistic, and will tend to prefer lower taxes to government programs that lack any evidence of a reasonable possibility of succeeding. The idea that we must do something — in spite of the fact that there is no reason to think that the proposed "something" will do the trick, and although similar "somethings" in the past have done more harm than good — the idea that we must do this, or fail in our duty to our own social consciences, may exert little influence on the Catholic ethnics. But the not-so-very hidden idea behind the notion that a grand Republican coalition can be formed has nothing to do with libertarianism. Rather, it banks on dislike of Negroes and impatience with the "outrageous" culture of students and liberal professionals, a dislike and impatience that will draw together Southerners, Northern working-class whites and rural types. Kevin Phillips, the author of The Emerging Republican Majority — where he outlines this strategy — has expressed his disgust that obsolete "laissez-faire" economics continues to interfere with the realization of his grand design.
Then there are the conservative intellectuals. I do not deny that some of them have a few good instincts; some of these may be activated at times, as they are with Buckley when he debates leftists. But the conservatives are neither dependable on the question of liberty, nor particularly helpful.
In the first place, because they continue to fight silly and useless battles. The very paradigm of this is Russell Kirk's little feud, in the pages of National Review, with masturbation, which he persists in calling "onanism," against scholarship — since that wasn't Onan's sin, and no doubt in order to suggest the connection of his thought on this with the Great Tradition, not to say with the Great Chain of Being itself. Their refusal to take the obvious libertarian position on the legalization of drugs and their priggish literary harassment of pornographers are further indications of this, as is National Review's obsession with the unspeakable abomination of homosexual acts. All these Puritanical views are supposed, in their ideology, to flow from their Christian commitment. But it is a not very well kept secret among those who know anything about the American conservative movement, that by rational, if not by orthodox, standards, the Christianity of these conservative intellectuals is bogus. It is not only that they have little in common with the Gospel of Jesus, but, beyond that, that many of them show that extra nastiness and viciousness in debate that seems to be common to the Right-Wing the world over, for reasons on which Wilhelm Reich had certain ideas. (I would strongly suggest to libertarians who might suppose, because of their familiarity with the usual run of American conservatives, that Christianity has nothing of importance to offer them, to take a look at some of the writings of a real and highly intelligent Christian, C.S. Lewis — particularly his essay, The Abolition of Man and his basically libertarian novel, That Hideous Strength.)
The conservatives are of little use, also, because they have no real sense of the meaning of the market economy, of the dignity of the act of exchange and the injustice inherent in replacing exchange by force. For them, any property that happens to be in the possession of individuals is true private property. They are bitter about welfare, but not about the proposed subsidy to Penn Central — which was evidently stopped, not by any of the conservatives in Congress, but by the old populist, Wright Patman of Texas. Conservatives have never begun to make the obvious argument against most government programs — the "Chicago" argument — that they are instruments for taking money by force from the relatively poor to give it to the relatively rich. They cannot bring themselves to make this sort of argument, because for them the fact that a person is relatively rich is already near-proof of his soundness and respectability, and thus of his right to be the recipient of government money and privileges. The position of libertarianism is much more realistic here. Classical liberalism — in the form of British classical political economy — began with an attack on government privileges for the rich: an attack on the system known as "mercantilism." "Sinister interests" and "monopoly" were the names that Adam Smith, the Philosophical Radicals and the Manchesterites gave to the wealthy and powerful who used the State to exploit the rest of society. The concept of "class conflict," in this sense, is part of the libertarian's intellectual inheritance, and it helps him to make much more sense of the contemporary world than the conservative is able to do.
The attraction that power in the hands of the upper classes exerts on conservatives is shown also by their approval of exploitative military and "feudal" regimes in other countries, particularly in the Third World. It is difficult to conceive why on earth anyone friendly to liberty should support the Brazilian and Greek dictatorships, and approve of American support of them. The bogusness of the conservatives' Christianity is evident here, too: the Brazilian dictatorship has been censured both by the Vatican and the bishops of Brazil for its systematic use of torture against political opponents, but I don't recall a single word in the American conservative press on what conservatives theoretically should regard as a vicious infringement of human dignity.
Finally, and most obviously, the conservatives cannot be depended upon by libertarians because they are almost invariably nationalists and militarists. But this is so clear that I don't think it needs any elaboration.
What then of the "Middle"? Of those who share none of the traditional ideologies, neither Marxist nor left-anarchist, conservative nor libertarian?
All indications are that the Middle will increasingly be dominated by the New Class — in the sense of the French scholar, Raymond Ruyer, who applies the term comprehensively to a social phenomenon that bridges the division of the world into socialist and so-called "capitalist" countries. This is the class that lives, directly or indirectly, from government programs and subsidies — the class that has gone beyond the market and come to rest in a fairly comfortable economic position. Included in this New Class, in the United States, are the NASA scientists and engineers (temporarily on the wane); H.E.W. sociologists and psychologists; those who have received the $500,000,000 that Congresswoman Edith Green says has been spent since 1965 by the Office of Economic Opportunity on "studies conducted by experts on research and evaluation of the poor." Also included would be municipal urbanologists; the recipients of government "cultural" subsidies, which are sure to increase to European proportions and beyond; and that great portion of the bourgeoisie that has an intimate relationship to the State — through defense contracts, of course, but also by way of programs such as urban renewal, highway and mass-transit programs, and foreign aid; also forming part of the New Class would be the Medicare and Medicaid doctors and dentists.
In addition, and this should be emphasized because it has very serious ideological implications, there is the element of the educational and mental health bureaucracies. (The second has been brilliantly treated by the psychiatrist and libertarian, Thomas Szasz, in a number of books; the most recent, The Manufacture of Madness, I strongly recommend to you.) The budgets for these, as is well known, are sacred, taboo. One is halfway to mental illness already if one wants — like Reagan — to cut down on the mental health budget. There are parallels between education and psychiatry in contemporary America that are sociologically most interesting. In both of these fields we are witnessing the creation of a vast class of state-subsidized intellectuals (or near intellectuals), who lack any accountability to the "consumers" of their "products," and who make use of this lack of accountability to promote their own cultural ideals — which are usually those of the State, as well. Of course, they always promote these personal or social ideals in the name of "culture" or "sanity" per se. It is only the know-nothings or the emotionally disturbed who could possibly doubt the burning need for the transfer of money and power to the educators and state-psychiatrists. When the two groups are allied, the result is spectacular. This has happened, for instance, with the issue of sex education in the public schools, and here the two groups mutually reinforce the purity and absolute unquestionableness of their cause to such a pitch as to remind one of John Lindsay, or even of Woodrow Wilson himself. To them it seems perfectly obvious that the public school teachers, who have succeeded in rendering poetry an object of hatred to millions of their subjects, should now have a go at sex.
- 1. This talk is unpublished as far as we can find and no date or other information was attached. Raico cites a "recent" book by Andrei Amalrik, Will the Soviet Union Survive to 1984? This book was first published in 1970. The talk has been edited for typos only.
Turmoil in the international political arena has driven home the point that politics is about the art of compromise. Not the kind, voluntary type of compromise one is expected to make every so often in a happy marriage, or in business negotiations on a free market. Politics breeds a type of coercive compromise which can only be achieved by backing down on your principles, or better yet, if you hold no principles to begin with. The most skilled politicians know that this is the way up into the political world.
But many other people can fall into the compromise trap as well: “The enemy of my enemy is my friend”, the saying goes. To defeat one’s enemy, one must make otherwise unlikely friends. A common example is that of economists bending their views to align themselves with the program of one political figure or other. Those who hold no principles to begin with do so to obtain funding and prestige. But there are also some who do it because they believe that in this way, they can effect a change otherwise impossible, or find a platform and more followers for their own views. In doing so, however, they disregard the fact that many a time, the minor change they’re after comes at the expense of a greater sacrifice of principles, which in the long run provides for a dilution of the initial message and the loss of principled followers as well.
Ludwig von Mises was well known for his intransigent character. His work for the Chamber of Commerce in Vienna, or for the free trade movement in Europe, provided numerous opportunities for political compromise, which he knew was inevitable. Looking back on those years, when his friends thought he could have gained much more if he had loosened up his principled stance on economic issues, Mises actually regretted having compromised too much, and renewed his resolve to return to the battle of ideas.
Perhaps the best example of how he conducted himself in situations which required political compromise is the difficult relationship he had with his student and protégé, Fritz Machlup. Machlup, who called Mises “the scholar who would not compromise”, saw his friendship with his teacher fray in the mid-1960s when Machlup argued that only special political interests supported the restoration of the gold standard. Mises is said to have remarked: “He was in my seminar in Vienna… he understands everything. He knows more than most of them and he knows exactly what he is doing” (Hülsmann 2007, p. 698).
But Machlup’s attraction to political compromise had in fact begun decades earlier, after his move to the United States (Hülsmann 2007, pp. 860-62). In 1946, Machlup had asked Mises, in private correspondence, for advice on how to present a rather evasive approach to labor unions to the U.S. Chamber of Commerce, given that his audience was likely to be pro-union and that, “it is politically unthinkable to outlaw unions today”. Obviously torn, he explained:
“If [my] lecture were to be presented in a scientific forum, I could go into the history of ideas, and in particular Mill and so forth. But for the Chamber, I must be practical and political. […] I will have no choice but to say that monopoly wages are the only purpose of labor unions, and that strong labor unions mean unemployment and inflation and lead to an authoritarian state. Can an honest man avoid such statements? Are there any alternatives?”
Mises’s response is a blueprint for how to deal with political compromise, whenever it rears its ugly head:
“[I would tell the Chamber]: First of all, liberate yourself from false ideas. Study economics. Then go on to convince others. […] I reject any outlawing or limitation of the liberty of association. No liberties shall be abolished, only coercion.”
In his obituary for Mises, Hazlitt remarked that “[Mises’s] outstanding moral quality was moral courage, the ability to stand alone, and an almost fanatical intellectual honesty and candor that refused to deviate or compromise an inch. This… set an ideal to strengthen and inspire his students and all the rest of us who were privileged to know him” (Hülsmann 2007, p. 1039).
Mises’s own choices teach us that the enemy of your enemy is not your friend if, through that friendship, you abandon the principles you stood by before. In the grand scheme of things, there is nothing to be gained from trading off a bit of socialism now for the (often illusory) promise of more freedom in the future, or personal integrity for a one-off policy. No truth is small enough to be sacrificed in the game of political compromise.
In the wake of the Senate's confirmation of the appointment of Betsy DeVos, the protests from the left prompted Republican Congressman Thomas Massie to offer them a way to get rid of DeVos: eliminate the Department of Education.
According to Massie, he'd been planning to introduce the bill for more than a year, and the controversy over DeVos appeared to be as good a time as any.
There's no harm in Massie introducing the bill, of course, although as I've noted here, the odds of Republicans offering much help to Massie in passing the bill are pretty low.
But as long as we're identifying cabinet-level agencies for the chopping block, why stop with the Department of Education?
There are plenty of other Departments which oversee activities that could easily be done by state and local agencies, or which should just be reduced to their former less-exalted positions in the federal ecosystem.
For starters, we'll just address some of the low-hanging fruit. Here are agencies that can be eliminated with relative ease, because they are recently created, redundant, or utterly unnecessary.

One: The Department of Homeland Security, $51 Billion
Somehow, the United States managed to get along for more than 225 years before this Department was created by Congress and the Bush Administration in 2002.
The Department quickly became a way for the federal government to spread federal taxpayer dollars to state and local law enforcement agencies, thus gaining greater control at the local level. The DHS administers a number of grant programs that have helped to purchase a variety of new toys for law enforcement groups, including new weapons and new technologies. Also included is the infamous military surplus program, which supplies tanks and other military equipment to police forces everywhere from big cities to small rural towns. The crime-free town of Keene, New Hampshire, made sure its police received a tank through this program, as have many larger cities.
When the Orlando gunman opened fire in the Pulse nightclub in 2016, the police eventually rolled up in a tank — which did nothing to stem the bloodshed inside the club.
Police claim they need these half-million-dollar vehicles from the DHS to deal with civil unrest. Never mind, of course, that every state already has a National Guard force specifically for that purpose.
While the Department was created in response to the 9/11 attacks, the Department does nothing to address anything like a 9/11-style attack, and all the agencies that were supposed to provide intelligence on such attacks — the FBI for instance — already exist in other departments and continue to enjoy huge budgets.
DHS also includes agencies that already existed in other departments before, such as the Federal Emergency Management Agency, and the agencies that handle immigration and customs. Those agencies should either be returned to the departments they came from or be abolished.
And few would miss the Transportation Security Administration — an agency that has never caught a single terrorist, but has smuggled at least $100 million worth of cocaine.

Two: The EPA, $8.3 Billion
It seems at least one member of Congress already beat me to this one, and a bill to "terminate the Environmental Protection Agency" was introduced on February 3.
Created under Nixon in 1970, this agency largely exists today to push around small-time business owners, entrepreneurs, and mom-and-pop operations that run afoul of some obscure federal regulation. More recently, the EPA dumped three million gallons of toxic sludge into a Colorado river, poisoning the Navajo Nation's watershed. Meanwhile, the agency is suing a city in Colorado because the city's storm drains aren't exactly right.
Local property owners and local governments already have a large incentive to avoid the destruction of rivers and air used by local communities. In the modern era of nature-based recreation, destroying a mountain river — as the EPA has done — is an easy way to destroy the local economy.
Moreover, most of the environmental cleanup we attribute to federal regulation today was simply the result of growing wealth in the US. As Americans became wealthier, they began to value clean air and water more than the jobs associated with the "dirty" industries. Does anyone seriously believe that the Cuyahoga River would start catching on fire again without an EPA? It's not going to happen.

Three: The Department of the Interior, $14 Billion
The most notorious agency within the Department of the Interior is the Bureau of Indian Affairs. The BIA controls 55 million acres of land which is — to use the darkly euphemistic term employed by the Feds — "held in trust" by the US government. That means the Indian tribes can't control their own land unless a bureaucrat at the Department of the Interior says so.
Given that the tribes should be totally independent of federal regulation, the BIA should be abolished immediately. Any relations between the tribes and US government should be handled by the State Department, which is the appropriate place to deal with organizations that are supposed to be governed primarily by treaties with the United States.
The other main purpose of the Interior is the control of immense amounts of "public lands" including national parks. The Department is unnecessary here as well, given that public land should be administered by the communities that are economically dependent on those lands. Moreover, whether we like the idea of public lands or not, the chances of public lands being privatized — even after being made into state lands — are approximately zero. State parks, national forests, and national parks are very popular with voters, and moving them from federal control to state control won't change this.1

Four: The Department of Agriculture, $153 Billion
This is the most expensive of the Departments discussed here — primarily because the USDA oversees the Food Stamp program — now known as SNAP — which costs more than $70 billion. The reason the SNAP program is in the USDA is that SNAP has always largely been a subsidy program for farmers. One of its original selling points was that it would get people to buy more food. SNAP could be rolled into the Department of Health and Human Services this afternoon, and virtually no one would notice or care. The USDA bureaucracy simply adds more cost.
That wouldn't do anything to eliminate that $70 billion in food stamp spending, of course. But it would make it much easier, politically speaking, to get rid of the remaining $80 billion of the USDA's budget.
The rest of the USDA is composed of pork projects for farmers, researchers, and other corporate interests that continually receive the taxpayer's largesse.
The USDA also administers its own affordable housing programs, even though several major programs for affordable housing already exist in the Department of Housing and Urban Development.

The Problem with Cabinet-Level Agencies
A lot of what we've discussed here falls short of totally abolishing the government spending associated with these Departments. These are all extremely mild reforms and mere baby steps toward a more human-sized federal government.
But ending cabinet level status for many of these agencies is a crucial first step in cutting these agencies down to size. It is likely not a coincidence that no cabinet-level agency, with the exception of the Postal Service, has ever lost its cabinet-level status, and certainly none have ever been abolished.
When a government agency is lifted to the cabinet level, it gains political prestige, permanence, and direct access to the President. In other words, cabinet status makes an agency better able to lobby Congress and the White House, and to fight budget cuts. The fact that abolishing the Department of Education — without even abolishing all its programs — is now seen as some sort of wildly radical position illustrates the power of the cabinet-level agency.
- 1. This same argument can be applied to the Forest Service, which is under the Dept. of Agriculture.
Germany is currently the country with the largest trade surplus, and many Germans think that this is a good thing. In the United States, the situation is the reverse. The US has the world’s largest trade deficit. It amounts to USD 502.25 billion for 2016.
A country’s trade balance is equal to the difference between a country’s national savings and its gross investment. Savings reflect the difference between income and consumption. Thus, a country with a surplus consumes or invests too little given its income. That is the case with Germany. In America, it is the other way around.
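The accounting identity in the paragraph above can be sketched in a few lines of arithmetic. This is a toy illustration with made-up round numbers, not actual US or German figures:

```python
# National accounting identity: trade balance = national savings - gross investment.
# All numbers below are invented purely for illustration.

def trade_balance(savings: float, investment: float) -> float:
    """Return the trade balance implied by national savings and gross investment."""
    return savings - investment

# A surplus country saves more than it invests, given its income...
surplus = trade_balance(savings=300.0, investment=250.0)
assert surplus == 50.0  # positive balance: a trade surplus (the German case)

# ...while a deficit country invests and consumes beyond its savings.
deficit = trade_balance(savings=200.0, investment=260.0)
assert deficit == -60.0  # negative balance: a trade deficit (the US case)
```

The sign of the balance, not the particular numbers, is what carries the article's point: a surplus means too little domestic consumption or investment relative to income, and a deficit the reverse.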
How is this disparity possible? It is possible because the dollar and the euro are both fiat monies. Unlike under the gold standard, countries can now get along with persistent trade imbalances. These trade imbalances, in turn, permit grave domestic discrepancies among income, consumption, and investment.
Germany is a pathological case because its trade surplus results from both too little investment and too little consumption. Too little investment means that the capital stock does not rise as much as it could. This, in turn, will lead to lower future income than would otherwise be the case. Germans now consume less than they could; in the future, they will consume less because they must.
It seems paradoxical but it is not: while Germany has a huge trade surplus — or for that matter the more comprehensive current account balance — and the United States has a huge trade deficit, each country invests too little. In both the US and Germany, the gross investment rate is very low (table 1).
Table 1. United States and Germany: savings (gross domestic savings), investment (gross fixed capital formation), consumption (final consumption expenditure), and current account balance, average in percent of gross domestic product (GDP), 2006–2016. Source: Trading Economics (www.tradingeconomics.com) and own calculations.
The lack of capital formation undermines productivity and future economic growth in both countries. In the United States, the trade deficit is the result of the overvalued U.S. dollar, which in turn comes from its role as the global reserve currency. Americans save and invest less than they should because the dollar's reserve status allows them to consume beyond their income.
Why should a country with either a surplus or deficit change course? Because persistent trade imbalances are unsustainable. They lead to the accumulation of foreign debt in the deficit country and to increasing foreign assets in the surplus country. One country’s surplus is another country’s deficit. Similarly, one country’s foreign asset position is another country’s foreign debt.
With a foreign debt (net foreign investment position) of eight trillion US dollars in mid-2016 (Figure 1), the United States is approaching a critical level.
If debt accumulation should go on, the credit-worthiness of the United States will eventually crumble and consequently its currency will crash. A soft depreciation of the dollar in time is surely better than an abrupt currency collapse in the future.
President Trump wants to have more jobs at home and calls for the elimination of the trade deficit. A first step to accomplish this would be to do away with the role of the US dollar as the leading reserve currency.
Putting an end to the policies of previous American governments to bomb and sanction those oil exporters that wish to use a currency other than the greenback would automatically bring about the demise of the US dollar's reserve status. The elimination of America’s trade deficit and the end of foreign debt accumulation would follow. Doing away with fiat money would then be the next big step toward a sound economy.
Well, the Fed released the minutes of its January FOMC meeting and lo and behold, there was nothing of interest. It was the same bland "Real Soon Now" talk regarding rates, coupled with a dose of "we don't really have a clear picture just yet." Imagine that. They're going to keep their eye on inflation trends and the unemployment rate, which is comforting given the fact that they are the source of inflation and unstable labor markets.
The Financial Times reports:
After lifting rates twice in two years, Janet Yellen, Fed chair, and her fellow rate-setters are contemplating stepping up the pace of increases as economic growth accelerates and prospects rise of tax cuts by the Republican-controlled Congress.
The thing is, the rate-setters have been in deep contemplation for almost a decade. But they won't give up until they've got rates back to normal! Talk about job security. But here's the good news: they reflected on the acceleration of economic growth, which is depicted below. If you don't see any growth (after all, the Obama years marked the first administration since Hoover without a single full year of 3% GDP growth), just use your imagination.
The entire rate hike narrative is a sham, and the only reason they are talking about it is because they have to maintain the idea that they "fixed the economy." A fixed economy shouldn't require absurd monetary policy. They know that. So they keep kicking the can, pretending like all they are looking for is more verification from the data. But as was mentioned Monday, such data accumulation plays right into their central planning hands. They can always interpret it however they want, whenever they want.
Our country is beset by a large number of economic myths that distort public thinking on important problems and lead us to accept unsound and dangerous government policies. Here are ten of the most dangerous of these myths and an analysis of what is wrong with them.
Deficits are the cause of inflation; deficits have nothing to do with inflation.
In recent decades we always have had federal deficits. The invariable response of the party out of power, whichever it may be, is to denounce those deficits as being the cause of our chronic inflation. And the invariable response of whatever party is in power has been to claim that deficits have nothing to do with inflation. Both opposing statements are myths.
Deficits mean that the federal government is spending more than it is taking in in taxes. Those deficits can be financed in two ways. If they are financed by selling Treasury bonds to the public, then the deficits are not inflationary. No new money is created; people and institutions simply draw down their bank deposits to pay for the bonds, and the Treasury spends that money. Money has simply been transferred from the public to the Treasury, and then the money is spent on other members of the public.
On the other hand, the deficit may be financed by selling bonds to the banking system. If that occurs, the banks create new money by creating new bank deposits and using them to buy the bonds. The new money, in the form of bank deposits, is then spent by the Treasury, and thereby enters permanently into the spending stream of the economy, raising prices and causing inflation. By a complex process, the Federal Reserve enables the banks to create the new money by generating bank reserves of one-tenth that amount. Thus, if banks are to buy $100 billion of new bonds to finance the deficit, the Fed buys approximately $10 billion of old treasury bonds. This purchase increases bank reserves by $10 billion, allowing the banks to pyramid the creation of new bank deposits or money by ten times that amount. In short, the government and the banking system it controls in effect "print" new money to pay for the federal deficit.
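The reserve-pyramiding arithmetic described above can be sketched as the textbook deposit multiplier. The 10 percent reserve ratio is simply the "one-tenth" figure used in the paragraph; real-world money creation is messier:

```python
# Textbook deposit-multiplier sketch: with a fractional reserve ratio r,
# new reserves R allow the banking system to create up to R / r in deposits.

def max_new_deposits(new_reserves: float, reserve_ratio: float) -> float:
    """Upper bound on new bank deposits supported by an injection of reserves."""
    return new_reserves / reserve_ratio

# Fed buys $10 billion of old Treasury bonds, creating $10 billion in bank reserves;
# with a 10% reserve requirement, banks can pyramid up to $100 billion in deposits,
# enough to absorb the $100 billion of new bonds in the paragraph's example.
created = max_new_deposits(10e9, 0.10)
assert abs(created - 100e9) < 1.0
```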
Thus, deficits are inflationary to the extent that they are financed by the banking system; they are not inflationary to the extent they are underwritten by the public.
Some policymakers point to the 1982–83 period, when deficits were accelerating and inflation was abating, as a statistical "proof" that deficits and inflation have no relation to each other. This is no proof at all. General price changes are determined by two factors: the supply of, and the demand for, money. During 1982–83 the Fed created new money at a very high rate, at approximately 15 percent per annum. Much of this went to finance the expanding deficit. But on the other hand, the severe depression of those two years increased the demand for money (i.e. lowered the desire to spend money on goods), in response to the severe business losses. This temporarily compensating increase in the demand for money does not make deficits any the less inflationary. In fact, as recovery proceeds, spending will pick up and the demand for money will fall, and the spending of the new money will accelerate inflation.
Deficits do not have a crowding-out effect on private investment.
In recent years there has been an understandable worry over the low rate of saving and investment in the United States. One worry is that the enormous federal deficits will divert savings to unproductive government spending and thereby crowd out productive investment, generating ever-greater long-run problems in advancing or even maintaining the living standards of the public.
Some policymakers have once again attempted to rebut this charge by statistics. In 1982–83, they declare, deficits were high and increasing, while interest rates fell, thereby indicating that deficits have no crowding-out effect.
This argument once again shows the fallacy of trying to refute logic with statistics. Interest rates fell because of the drop of business borrowing in a recession. "Real" interest rates (interest rates minus the inflation rate) stayed unprecedentedly high, however — partly because most of us expect renewed heavy inflation, partly because of the crowding-out effect. In any case, statistics cannot refute logic; and logic tells us that if savings go into government bonds, there will necessarily be less savings available for productive investment than there would have been, and interest rates will be higher than they would have been without the deficits. If deficits are financed by the public, then this diversion of savings into government projects is direct and palpable. If the deficits are financed by bank inflation, then the diversion is indirect, the crowding-out now taking place by the new money "printed" by the government competing for resources with old money saved by the public.
Milton Friedman tries to rebut the crowding-out effect of deficits by claiming that all government spending, not just deficits, equally crowds out private savings and investment. It is true that money siphoned off by taxes could also have gone into private savings and investment. But deficits have a far greater crowding-out effect than overall spending, since deficits financed by the public obviously tap savings and savings alone, whereas taxes reduce the public's consumption as well as savings.
Thus, deficits, whichever way you look at them, cause grave economic problems. If they are financed by the banking system, they are inflationary. But even if they are financed by the public, they will still cause severe crowding-out effects, diverting much-needed savings from productive private investment to wasteful government projects. And, furthermore, the greater the deficits the greater the permanent income tax burden on the American people to pay for the mounting interest payments, a problem aggravated by the high interest rates brought about by inflationary deficits.
Tax increases are a cure for deficits.
Those people who are properly worried about the deficit unfortunately offer an unacceptable solution: increasing taxes. Curing deficits by raising taxes is equivalent to curing someone's bronchitis by shooting him. The "cure" is far worse than the disease.
For one reason, as many critics have pointed out, raising taxes simply gives the government more money, and so the politicians and bureaucrats are likely to react by raising expenditures still further. Parkinson said it all in his famous "Law": "Expenditures rise to meet income." If the government is willing to have, say, a 20 percent deficit, it will handle high revenues by raising spending still more to maintain the same proportion of deficit.
But even apart from this shrewd judgment in political psychology, why should anyone believe that a tax is better than a higher price? It is true that inflation is a form of taxation, in which the government and other early receivers of new money are able to expropriate the members of the public whose income rises later in the process of inflation. But, at least with inflation, people are still reaping some of the benefits of exchange. If bread rises to $10 a loaf, this is unfortunate, but at least you can still eat the bread. But if taxes go up, your money is expropriated for the benefit of politicians and bureaucrats, and you are left with no service or benefit. The only result is that the producers' money is confiscated for the benefit of a bureaucracy that adds insult to injury by using part of that confiscated money to push the public around.
No, the only sound cure for deficits is a simple but virtually unmentioned one: cut the federal budget. How and where? Anywhere and everywhere.
Every time the Fed tightens the money supply, interest rates rise (or fall); every time the Fed expands the money supply, interest rates rise (or fall).
The financial press now knows enough economics to watch weekly money supply figures like hawks; but they inevitably interpret these figures in a chaotic fashion. If the money supply rises, this is interpreted as lowering interest rates and inflationary; it is also interpreted, often in the very same article, as raising interest rates. And vice versa. If the Fed tightens the growth of money, it is interpreted as both raising interest rates and lowering them. Sometimes it seems that all Fed actions, no matter how contradictory, must result in raising interest rates. Clearly something is very wrong here.
The problem here is that, as in the case of price levels, there are several causal factors operating on interest rates and in different directions. If the Fed expands the money supply, it does so by generating more bank reserves and thereby expanding the supply of bank credit and bank deposits. The expansion of credit necessarily means an increased supply in the credit market and hence a lowering of the price of credit, or the rate of interest. On the other hand, if the Fed restricts the supply of credit and the growth of the money supply, this means that the supply in the credit market declines, and this should mean a rise in interest rates.
And this is precisely what happens in the first decade or two of chronic inflation. Fed expansion lowers interest rates; Fed tightening raises them. But after this period, the public and the market begin to catch on to what is happening. They begin to realize that inflation is chronic because of the systemic expansion of the money supply. When they realize this fact of life, they will also realize that inflation wipes out the creditor for the benefit of the debtor. Thus, if someone grants a loan at 5% for one year, and there is 7% inflation for that year, the creditor loses, not gains. He loses 2%, since he gets paid back in dollars that are now worth 7% less in purchasing power. Correspondingly, the debtor gains by inflation. As creditors begin to catch on, they place an inflation premium on the interest rate, and debtors will be willing to pay. Hence, in the long-run anything which fuels the expectations of inflation will raise inflation premiums on interest rates; and anything which dampens those expectations will lower those premiums. Therefore, a Fed tightening will now tend to dampen inflationary expectations and lower interest rates; a Fed expansion will whip up those expectations again and raise them. There are two, opposite causal chains at work. And so Fed expansion or contraction can either raise or lower interest rates, depending on which causal chain is stronger.
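The creditor's 2 percent loss in the paragraph above is simple arithmetic. A sketch, using both the nominal-minus-inflation approximation the text uses and the exact relation for comparison:

```python
# Real return on a loan: the text's approximation (nominal - inflation)
# versus the exact relation (1 + nominal) / (1 + inflation) - 1.

def real_rate_approx(nominal: float, inflation: float) -> float:
    """Approximate real interest rate: nominal rate minus inflation rate."""
    return nominal - inflation

def real_rate_exact(nominal: float, inflation: float) -> float:
    """Exact real interest rate implied by a nominal rate and inflation."""
    return (1.0 + nominal) / (1.0 + inflation) - 1.0

# Lend at 5% for a year while prices rise 7%: the creditor loses roughly 2%
# of purchasing power, which is why creditors demand an inflation premium.
assert round(real_rate_approx(0.05, 0.07), 4) == -0.02
assert round(real_rate_exact(0.05, 0.07), 4) == -0.0187
```

The inflation premium the text describes is just this loss anticipated in advance: creditors add their expected inflation rate on top of the real rate they want to earn.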
Which will be stronger? There is no way to know for sure. In the early decades of inflation, there is no inflation premium; in the later decades, such as we are now in, there is. The relative strength and reaction times depend on the subjective expectations of the public, and these cannot be forecast with certainty. And this is one reason why economic forecasts can never be made with certainty.
Economists, using charts or high-speed computer models, can accurately forecast the future.
The problem of forecasting interest rates illustrates the pitfalls of forecasting in general. People are contrary cusses whose behavior, thank goodness, cannot be forecast precisely in advance. Their values, ideas, expectations, and knowledge change all the time, and change in an unpredictable manner. What economist, for example, could have forecast (or did forecast) the Cabbage Patch Kid craze of the Christmas season of 1983? Every economic quantity, every price, purchase, or income figure is the embodiment of thousands, even millions, of unpredictable choices by individuals.
Many studies, formal and informal, have been made of the record of forecasting by economists, and it has been consistently abysmal. Forecasters often complain that they can do well enough as long as current trends continue; what they have difficulty in doing is catching changes in trend. But of course there is no trick in extrapolating current trends into the near future. You don't need sophisticated computer models for that; you can do it better and far more cheaply by using a ruler. The real trick is precisely to forecast when and how trends will change, and forecasters have been notoriously bad at that. No economist forecast the depth of the 1981–82 depression, and none predicted the strength of the 1983 boom.
The next time you are swayed by the jargon or seeming expertise of the economic forecaster, ask yourself this question: If he can really predict the future so well, why is he wasting his time putting out newsletters or doing consulting when he himself could be making trillions of dollars in the stock and commodity markets?
There is a tradeoff between unemployment and inflation.
Every time someone calls for the government to abandon its inflationary policies, Establishment economists and politicians warn that the result can only be severe unemployment. We are trapped, therefore, into playing off inflation against high unemployment, and become persuaded that we must accept some of both.
This doctrine is the fallback position for Keynesians. Originally, the Keynesians promised us that by manipulating and fine-tuning deficits and government spending, they could and would bring us permanent prosperity and full employment without inflation. Then, when inflation became chronic and ever-greater, they changed their tune to warn of the alleged tradeoff, so as to weaken any possible pressure upon the government to stop its inflationary creation of new money.
The tradeoff doctrine is based on the alleged "Phillips curve," a curve invented many years ago by the British economist A. W. Phillips. Phillips correlated wage rate increases with unemployment, and claimed that the two move inversely: the higher the increases in wage rates, the lower the unemployment. On its face, this is a peculiar doctrine, since it flies in the face of logical, commonsense theory. Theory tells us that the higher the wage rates, the greater the unemployment, and vice versa. If everyone went to their employer tomorrow and insisted on double or triple the wage rate, many of us would be promptly out of a job. Yet this bizarre finding was accepted as gospel by the Keynesian economic establishment.
By now, it should be clear that this statistical finding violates the facts as well as logical theory. For during the 1950s, inflation was only about 1 to 2 percent per year, and unemployment hovered around 3 or 4 percent, whereas nowadays unemployment ranges between 8 and 11 percent, and inflation between 5 and 13 percent. In the last two or three decades, in short, both inflation and unemployment have increased sharply and severely. If anything, we have had a reverse Phillips curve. There has been anything but an inflation-unemployment tradeoff.
But ideologues seldom give way to the facts, even as they continually claim to "test" their theories by facts. To save the concept, they have simply concluded that the Phillips curve still remains as an inflation-unemployment tradeoff, except that the curve has unaccountably "shifted" to a new set of alleged tradeoffs. On this sort of mind-set, of course, no one could ever refute any theory.
In fact, inflation now, even if it reduces unemployment in the short-run by inducing prices to spurt ahead of wage rates (thereby reducing real wage rates), will only create more unemployment in the long run. Eventually, wage rates catch up with inflation, and inflation brings recession and unemployment inevitably in its wake. After more than two decades of inflation, we are all now living in that "long run."
Deflation — falling prices — is unthinkable, and would cause a catastrophic depression.
The public memory is short. We forget that, from the beginning of the Industrial Revolution in the mid-18th century until the beginning of World War II, prices generally went down, year after year. That's because continually increasing productivity and output of goods generated by free markets caused prices to fall. There was no depression, however, because costs fell along with selling prices. Usually, wage rates remained constant while the cost of living fell, so that "real" wages, or everyone's standard of living, rose steadily.
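The point about constant money wages and a falling cost of living is simple division; here is a small illustration in Python (the wage and price figures are hypothetical, chosen only to show the direction of the effect):

```python
# Hypothetical worker whose nominal wage never changes while the
# cost of living falls about 2% per year, as under 19th-century deflation.
nominal_wage = 100.0                     # dollars per week, constant
price_index = [1.00, 0.98, 0.96, 0.94]   # cost of living, falling

for year, p in enumerate(price_index):
    real_wage = nominal_wage / p         # purchasing power of the same wage
    print(f"year {year}: real wage = {real_wage:.2f}")
```

The nominal figure never moves, yet the real wage rises every year: 100.00, 102.04, 104.17, 106.38. That is the mechanism by which falling prices raised the standard of living.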
Virtually the only times when prices rose over those two centuries were periods of war (the War of 1812, the Civil War, World War I), when the warring governments inflated the money supply so heavily to pay for the war as to more than offset continuing gains in productivity.
We can see how free market capitalism, unburdened by governmental or central bank inflation, works if we look at what has happened in the last few years to the prices of computers. A computer used to have to be enormous, costing millions of dollars. Now, in a remarkable surge of productivity brought about by the microchip revolution, computers are falling in price even as I write. Computer firms are successful despite the falling prices because their costs have been falling, and productivity rising. In fact, these falling costs and prices have enabled them to tap a mass market characteristic of the dynamic growth of free market capitalism. "Deflation" has brought no disaster to this industry.
The same is true of other high-growth industries, such as electronic calculators, plastics, TV sets, and VCRs. Deflation, far from bringing catastrophe, is the hallmark of sound and dynamic economic growth.
The best tax is a "flat" income tax, proportionate to income across the board, with no exemptions or deductions.
Flat-tax proponents usually add that eliminating such exemptions would enable the federal government to cut the current tax rate substantially.
But this view assumes, for one thing, that present deductions from the income tax are immoral subsidies or "loopholes" that should be closed for the benefit of all. A deduction or exemption is only a "loophole" if you assume that the government owns 100 percent of everyone's income and that allowing some of that income to remain untaxed constitutes an irritating "loophole." Allowing someone to keep some of his own income is neither a loophole nor a subsidy. Lowering the overall tax by abolishing deductions for medical care, for interest payments, or for uninsured losses, is simply lowering the taxes of one set of people (those who have little interest to pay, or medical expenses, or uninsured losses) at the expense of raising them for those who have incurred such expenses.
There is furthermore neither any guarantee nor even likelihood that, once the exemptions and deductions are safely out of the way, the government would keep its tax rate at the lower level. Looking at the record of governments, past and present, there is every reason to assume that more of our money would be taken by the government as it raised the tax rate back up (at least) to the old level, with a consequently greater overall drain from the producers to the bureaucracy.
Flat-tax proponents also suppose that the tax system should mirror pricing or incomes on the market. But market pricing is not proportional to incomes. It would be a peculiar world, for example, if Rockefeller were forced to pay $1,000 for a loaf of bread — that is, a payment proportionate to his income relative to the average man. That would mean a world in which equality of incomes was enforced in a particularly bizarre and inefficient manner. If a tax were levied like a market price, it would be equal for every "customer," not proportionate to each customer's income.
An income tax cut helps everyone because not only the taxpayer but also the government will benefit, since tax revenues will rise when the rate is cut.
This is the so-called "Laffer curve," set forth by California economist Arthur Laffer. It was advanced as a means of allowing politicians to square the circle: to come out for tax cuts, keep spending at the current level, and balance the budget all at the same time. In that way, the public would enjoy their tax cuts, be happy at the balanced budget, and still receive the same level of subsidies from the government.
It is true that if tax rates are 99 percent, and they are cut to 95 percent, tax revenue will go up. But there is no reason to assume such simple connections at any other time. In fact, this relationship works much better for a local excise tax than for a national income tax. A few years ago, the government of the District of Columbia decided to procure some revenue by sharply raising the District's gasoline tax. But drivers could simply nip over the border to Virginia or Maryland and fill up at a much cheaper price. D.C. gasoline tax revenues fell, and much to its chagrin and confusion, the District had to repeal the tax.
But this is not likely to happen with the income tax. People are not going to stop working or leave the country because of a relatively small tax hike, or do the reverse because of a tax cut.
There are some problems with the Laffer curve. The amount of time it is supposed to take for the Laffer effect to work is never specified. But still more important: Laffer assumes that what all of us want is to maximize tax revenue to the government. If — a big if — we are really at the upper half of the Laffer Curve, we should then all want to set tax rates at that "optimum" point. But why? Why should it be the objective of every one of us to maximize government revenue? To push to the maximum, in short, the share of private product that gets siphoned off to the activities of government? I should think we would be more interested in minimizing government revenue by pushing tax rates far, far below whatever the Laffer Optimum might happen to be.
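The shape of the argument can be made concrete with a toy model in which the taxed base shrinks as the rate rises. Everything here (the linear base, the 100-unit scale) is an illustrative assumption, not an empirical claim:

```python
# Toy Laffer curve: revenue = rate * base, where the taxed base
# shrinks linearly toward zero as the rate approaches 100%.
def revenue(rate, full_base=100.0):
    base = full_base * (1.0 - rate)  # base vanishes at a 100% rate
    return rate * base

# The concession above: cutting a 99% rate to 95% raises revenue.
assert revenue(0.95) > revenue(0.99)

# The revenue-maximizing ("Laffer optimum") rate in this toy model:
rates = [i / 100 for i in range(101)]
best = max(rates, key=revenue)
print(f"revenue-maximizing rate: {best:.0%}")  # 50% in this toy model
```

The objection in the text is precisely that nothing obliges taxpayers to prefer this maximizing rate; from their standpoint, rates far below the peak are better.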
Imports from countries where labor is cheap cause unemployment in the United States.
One of the many problems with this doctrine is that it ignores the question: why are wages low in a foreign country and high in the United States? It starts with these wage rates as ultimate givens, and doesn't pursue the question why they are what they are. Basically, they are high in the United States because labor productivity is high — because workers here are aided by large amounts of technologically advanced capital equipment. Wage rates are low in many foreign countries because capital equipment is small and technologically primitive. Unaided by much capital, worker productivity is far lower than in the U.S. Wage rates in every country are determined by the productivity of the workers in that country. Hence, high wages in the United States are not a standing threat to American prosperity; they are the result of that prosperity.
But what of certain industries in the U.S. that complain loudly and chronically about the "unfair" competition of products from low-wage countries? Here, we must realize that wages in each country are interconnected from one industry and occupation and region to another. All workers compete with each other, and if wages in industry A are far lower than in other industries, workers — spearheaded by young workers starting their careers — would leave or refuse to enter industry A and move to other firms or industries where the wage rate is higher.
Wages in the complaining industries, then, are high because they have been bid high by all industries in the United States. If the steel or textile industries in the United States find it difficult to compete with their counterparts abroad, it is not because foreign firms are paying low wages, but because other American industries have bid up American wage rates to such a high level that steel and textile cannot afford to pay. In short, what's really happening is that steel, textile, and other such firms are using labor inefficiently as compared to other American industries. Tariffs or import quotas to keep inefficient firms or industries in operation hurt everyone, in every country, who is not in that industry. They injure all American consumers by keeping up prices, keeping down quality and competition, and distorting production. A tariff or an import quota is equivalent to chopping up a railroad or destroying an airline — for its point is to make international transportation artificially expensive.
Tariffs and import quotas also injure other, efficient American industries by tying up resources that would otherwise move to more efficient uses. And, in the long run, the tariffs and quotas, like any sort of monopoly privilege conferred by government, are no bonanza even for the firms being protected and subsidized. For, as we have seen in the cases of railroads and airlines, industries enjoying government monopoly (whether through tariffs or regulation) eventually become so inefficient that they lose money anyway, and can only call for more and more bailouts, for even more of a privileged shelter from free competition.

Originally published in The Free Market Special Issue (1984)
Hope is in short supply these days, while despair and hate are enjoying an enormous surplus.
To give an example, there are currently two types of stories that fill my news feed. The first are about politics and the perpetual horrors it unleashes on the world: there’s a new scandal every day, and war, protectionism, and nationalism are on the rise, with staggering human costs to pay as a result. Now, in a way, it makes sense that these stories dominate most people’s attention, as they represent widespread problems that deeply influence our lives. However, this attention has brought with it a kind of despair. Many people are falling silent or are ending long-time friendships simply because they want to avoid the onslaught of bad news—along with vicious fighting and personal conflict—that appears day after day.
But there’s a second kind of story I’ve seen lately: stories about amazing new technologies and enterprises that are, or soon will be, available to the public. Social media is full of stories that show how apps and drones and 3D printed devices can save our lives, or reinvent them, or simply make them a bit more convenient. These wonders are designed by young people: DIY geniuses, tech visionaries, social entrepreneurs, and many others with a passion for creating value for others.
The contrast between these two kinds of stories could not be starker. The first are assaults on justice as well as economic sense, while the second demonstrate the extraordinary benefits of social cooperation. They represent innovative ways not just to make money, but to make peace. But in a small way, the second group offers something more, something absolutely vital for our everyday lives: hope.
When faced with bad news week after week, it’s easy to despair of humanity’s future. But we can’t let the evils of politics convince us that change for the better is impossible. We have to hope.
That in turn means we have to make a conscious effort not to become victims of political events and of the news that surrounds them. It’s not just that focusing too narrowly on government distorts one’s view of the political process and of justice, although it certainly does that: politics also has a profound effect on our spirits, because it teaches us to believe that there is no life outside of it, while there’s simultaneously no hope to be found within it. We feel as if we’re all shackled to a sinking ship.
However, by resisting the forces that pull us into the black hole of politics, we can remind ourselves of the enormous and often wonderful world in which we’re fortunate to live. Letting go of politics and its many evils produces a fundamental change in our worldview. In fact, just taking a moment to watch a cheerful video can be a powerful remedy for the misery and destruction that we see in so much of the world. In this day and age, such simple resistance to political news is an almost revolutionary act.
We need to disengage from politics and the neurosis it causes, and reengage with the real world. We can’t change the nature of politics, but we can change our own lives and the lives of those around us through peaceful action, especially through commerce and entrepreneurship. Our hope doesn’t lie in politics or presidents or kings and the hate that they breed, but in the recognition of our mutual social interests.
Yes, things in the political world are bad, and are likely to get worse before they get better. Yet consider that just three centuries ago in Western Europe, it must have seemed as if there was no hope. In fact, economic development was so minimal, and people had so little exposure to the world of ideas, that the concept of hope must have made little sense: hope for what? A better life? The idea must have been alien to most ordinary people of the time. Hence it was easy for millions of individuals to think of themselves as a part (namely, the bottom) of a “natural” social hierarchy dictated to them from birth. The evils wrought by kings and other monarchs must have seemed inescapable. And yet, from this economic and social stasis grew the greatest flourishing of human life and prosperity in history.
In other words, even though things are bad now, human beings have survived worse. But it took the rise of classical liberalism and its values of liberty and commercial society to do it. Today, it’s likely that we’ll need another such revolution in ideas to overturn the rising tide of statism that threatens us from both the left and the right. Yet although winning this battle might seem impossible, there is hope, but only if we refuse to let evil, hatred, and melancholy conquer our lives, and set ourselves to the task of improving the world rather than passively accepting its decline. Mises’s personal motto and example come to mind.
I think the idea of hope is summed up beautifully in the great film The Lion in Winter:
Henry II: We’re in the cellar and you’re going back to prison and my life is wasted and we’ve lost each other… and you’re smiling.
Eleanor: It’s the way I register despair. There’s everything in life but hope.
Henry II: We’re both alive… and for all I know that’s what hope is.
In a time when Federal Reserve reforms are discussed more openly than ever before, it seems appropriate to also think about the more fundamental question of whether central banks are needed in the first place. In 1936, Vera C. Smith (later Lutz) published her doctoral dissertation The Rationale of Central Banking written under Friedrich A. von Hayek at the London School of Economics. Smith reviewed the economic controversies around central banking from the nineteenth to the early twentieth century in France, Belgium, Germany, England, Scotland, and the United States.
Smith made very clear that central banks are not the result of natural developments in the banking sector, but come into existence through government favors.
So what are the justifications for central banks? Smith identified five main arguments for central banks from an economic point of view. Although Smith wrote with a gold standard as the underlying monetary system in mind, it is interesting to look at these arguments with the benefit of hindsight more than 80 years later. Has any one of the arguments actually made a strong or even conclusive case for central banking?

One: Uniform Distribution of Risk
The first argument runs as follows. In a system of free banking, which might be stable as a whole, one can expect individual banks to fail from time to time, just as there are bankruptcies in other sectors of the economy. Now, if any bank issues notes over and above its own gold reserves, it runs the risk of bankruptcy. The notes, however, will not stay exclusively in the hands of the bank's immediate clients, who benefit from the fiduciary note issue, but will be passed on to other parties. Whoever holds the notes at the point of bankruptcy carries the loss. It seems reasonable to assume that the risk will not be evenly spread over the economy. In particular, those individuals who, for whatever reason, are least capable of bearing the additional cost of discriminating between notes from solvent and insolvent banks will be hit the hardest. Therefore, the argument goes, the government should introduce some uniformity in the note issue as well as in the distribution of risk over market participants. This can be done by the imposition of a legal monopoly, that is, by creating a central bank.
The respective rebuttal of the free bankers was to point out that, while a monopolist on the note issue would indeed spread the risk over all parties more evenly, it would also tend to increase the risk overall. This is definitely a point worth considering.

Two: Over-issue of Notes and Excessive Credit Expansion
Historically, the most important argument for central banks has been the one about the dangers of over-issuing notes and excessive credit expansion. This might strike a contemporary reader as somewhat paradoxical, but the argument goes like this:
In a free banking system there would be strong incentives for any individual bank to constantly lower their discount rates and thereby expand credit in order to gain market share. At a certain point of the process gold reserves would start to drain, and banks would have to refrain from further credit expansion in order to protect their own reserves. This would lead to an economic crisis.
It was argued that under free banking the fluctuations in the money and credit supply would thus be much more violent, which implies larger instability of the economy as a whole. A central bank would serve the purpose of preventing excessive note issue and credit expansion as well as the resulting interplay of inflationary and deflationary episodes.
The argument ultimately relies on a negative answer to the question of whether the mutual check of interbank clearing would be sufficient to prevent a critical number of commercial banks from trying to reap the short-run benefits of engaging in excessive expansion. Yet, as Smith pointed out, historical examples of competitive systems of note issue from Scotland, Canada, and Suffolk (Massachusetts) suggest that it can in fact be a sufficient deterrent to fiduciary expansion by individual banks.

Three: The Lender of Last Resort
Another well-known argument is that of the lender of last resort. The idea is that central banks could mitigate economic crises that might occur in any system, because of their legal privileges and the stronger confidence of the public in their notes. When commercial banks are forced to contract their lending in order to protect their reserves as clients increasingly want to redeem bank notes into specie, central banks could step in and prevent a deflationary spiral, because central bank notes would be accepted without question. In the worst case, redemption could be suspended. Central banks could thus prevent liquidity shortages and possibly more severe economic downturns.
The respective counter-argument has been made, among others, by Ludwig von Mises, who pointed out that the very existence of an active lender of last resort will be incorporated as a datum into the decisions of commercial bankers and will incentivize them to take higher risks and lower reserve ratios even further. Hence, it increases the fragility of the entire system.
Moreover, each of the three problems considered so far has a very simple root, namely fractional reserve banking. A full reserve system, as has been proposed by many economists, including Fisher, Friedman, some Austrians, as well as the modern advocates of Vollgeld in Switzerland and Germany, would solve virtually all of them. Yet there are two arguments remaining which are of special importance for our modern times.

Four: Central Banks as a Means to International Cooperation
The fourth argument holds that central monetary authorities in any currency area would be needed in order to make cooperation with regard to monetary policy decisions possible. The Bank for International Settlements (BIS), founded in 1930, is not least an outgrowth of the attempt at cooperation and harmonization. However, it is the very existence of central banks that renders monetary policy possible in the first place. The question is of course what kind of monetary policy should be implemented, which leads us to the final argument.

Five: Rational Monetary Policy
The last argument gained attention in the post-World War I era and is indeed the most relevant for us today. Traditionally it has been the aim of monetary reformers to introduce automatic mechanisms of adjustment into the financial system. According to the fifth argument, however, it would be beneficial to pursue an active and rational monetary policy of controlling the volume of cash reserves and credit guided by “scientific criteria.” A central bank would be indispensable for its implementation. The main policy tools would be discount rate setting and open market operations.
The abandonment of the classical gold standard and the introduction of a fiat standard have indeed placed more power in the hands of central bankers and fostered the notion that central banks should consciously manipulate the money stock. Any argument or scientific criterion that requires monetary expansion of a certain magnitude is implicitly at least also an argument for fiat money and, a fortiori, for central banks.
The first criterion that has been held up as scientific was price stability. The money stock should be expanded at the rate of real economic growth to keep the general price level constant. However, as Vera Smith pointed out, this criterion “has been suspect in theory and just as unfortunate in practice.”
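The arithmetic behind the price-stability criterion is the quantity-theory relation: holding the velocity of money constant, price inflation is roughly money growth minus real output growth. A stylized sketch of that textbook identity (an illustration of the criterion, not an endorsement of it):

```python
# Stylized quantity theory with constant velocity: the price level moves
# with the ratio of money growth to real output growth.
def price_inflation(money_growth, real_growth):
    return (1 + money_growth) / (1 + real_growth) - 1

# The criterion: expand money at the rate of real growth -> stable prices.
print(f"{price_inflation(0.03, 0.03):.2%}")  # 0.00%

# Expand money faster than output grows, and prices rise.
print(f"{price_inflation(0.05, 0.03):.2%}")
```

On this rule, any real growth "requires" matching monetary expansion, which is exactly the sense in which such criteria presuppose a managed fiat money.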
Modern macroeconomics, developed after the publication of Smith's work, has rationalized monetary expansion even further by arguing for a stable rate of price inflation instead of price stability. Yet again, no study has been presented so far that can be regarded as proof of the overall economic benefits of inflation. In particular, no study has shown that the problems of moral hazard, increased systemic risk, perverse redistribution of wealth from bottom to top, and unsustainable inflationary booms are in any way offset by potential benefits of expansionary central bank monetary policy.
That there are benefits of central banking for certain groups is pretty obvious. Leland Yeager, in his preface to the Liberty Fund edition of Smith’s book, pointed out that it is reasonable to suppose that central banks are valued today, among other things, for providing prestigious and comfortable job opportunities for economists. Nothing is so bad that it couldn’t get worse.

The Bottom Line
It is indeed desirable to have a rational monetary policy, if there has to be one at all. Who could object to rationality? However, no economist has presented a conclusive case for conscious political implementation of monetary adjustments and stimuli. Central banks remain a creature of power politics rather than economic reason.
The 2016 Summer Olympics in Brazil cost Brazilian taxpayers $4.6 billion, conservative estimates show. But once related expenses covered by the Brazilian government are factored in, the overall costs hit the $12 billion mark, which equates to about 0.72 percent of Brazil’s national budget.
Prior to the Olympics, however, the Brazilian government had already spent BR$39.5 billion on infrastructure, or about $12 billion. Stadiums and urban projects designed to ensure the country was ready for the sports event were built, but aside from the events scheduled for 2014 and 2016, there seemed to be little to no demand for such public investments, which prompted the country to wonder whether the expenses were worth the trouble.
Now, as these same structures are left to rot, the documented decay has become a symbol of government waste. But the fact that the investments were never built to stand the test of time, and the Brazilian government’s lack of concern for the taxpayer, are not even the main story. They are, in fact, just a footnote.
Like many others, the government ignored the economic realities of the country, betting on inflation and cronyism in order to throw an unforgettable party.
The abandoned aquatic center.

A Party Worth Going Broke For: How Brazil “Paid” the Olympic Bills
Due to backlash over former President Dilma Rousseff’s economic policies, a nationwide movement supporting impeachment targeted her for, among many other things, raising government spending without accounting for the increases.
Given the government’s lavish spending prior to the World Cup and Summer Olympics, Rousseff feared the consequences of increasing spending openly, since doing so without cutting other government projects would have forced her to be upfront about the expenses. This led to a maneuver that ultimately provoked chaos among consumers, because banks were made to put money into circulation that wasn’t backed by anything.
Instead of transferring the money to banks so they could then cover social projects, pensions, and welfare programs, Brazil’s Treasury Department simply promised the banks it would pay them back down the road. Thus, money meant for those programs remained in the treasury, allowing the federal government to spend it elsewhere, while financial institutions keeping an eye on the government’s budget could not see that the banks were advancing sums the Treasury had never sent. The banks kept distributing the money associated with welfare and other social programs without the government’s funds being depleted on paper. As more cash was put into circulation by the banks and the federal government to cover World Cup and Olympics-related expenses, the value of Brazilian money tanked. To the consumer, that translated into lower purchasing power, making it more difficult for the poor to stock their pantries.
With the government’s out-of-control spending prior to the World Cup and Olympics, the Brazilian people suffered the ultimate blow: the government robbed them of their money’s purchasing power, all because the president didn’t want to admit her government's spending had gotten out of hand.
Now that the structures built for the world to see are rotting away, the low-income Brazilian continues to suffer. The only solution is to take control of the currency away from the Brazilian central bank, removing monetary policy from the hands of the federal government. Only then will the federal government be powerless to create more debt and inflation, keeping it from playing with the Brazilian taxpayer’s hard-earned money.
Originally published by TheAntiMedia.
In the next issue of The Austrian, David Gordon reviews Sebastian Mallaby's new book, The Man Who Knew, about the career of Alan Greenspan. Mallaby points out that prior to his career at the Fed, Greenspan exhibited a keen understanding of the gold standard and of how free markets work. In spite of the contradiction between these early views and Greenspan's later record, Mallaby takes a rather benign view toward him.
However, in his review, Gordon asks the obvious question: If Greenspan knew all this so well, isn't it all the more worthy of condemnation that Greenspan then abandoned these ideas so readily to advance his career?
Perhaps not surprisingly, now that his career at the Fed has ended, Old Greenspan — the one who defends free markets — has now returned.
This reversion to his former self has been going on for several years, and Greenspan demonstrates it yet again in a recent interview with Gold Investor magazine. Greenspan is now a fount of sound information about the historical gold standard:
I view gold as the primary global currency. It is the only currency, along with silver, that does not require a counterparty signature. Gold, however, has always been far more valuable per ounce than silver. No one refuses gold as payment to discharge an obligation. Credit instruments and fiat currency depend on the credit worthiness of a counterparty. Gold, along with silver, is one of the only currencies that has an intrinsic value. It has always been that way. No one questions its value, and it has always been a valuable commodity, first coined in Asia Minor in 600 BC.
The gold standard was operating at its peak in the late 19th and early 20th centuries, a period of extraordinary global prosperity, characterised by firming productivity growth and very little inflation.
But today, there is a widespread view that the 19th century gold standard didn’t work. I think that’s like wearing the wrong size shoes and saying the shoes are uncomfortable! It wasn’t the gold standard that failed; it was politics. World War I disabled the fixed exchange rate parities and no country wanted to be exposed to the humiliation of having a lesser exchange rate against the US dollar than it enjoyed in 1913.
Britain, for example, chose to return to the gold standard in 1925 at the same exchange rate it had in 1913 relative to the US dollar (US$4.86 per pound sterling). That was a monumental error by Winston Churchill, then Chancellor of the Exchequer. It induced a severe deflation for Britain in the late 1920s, and the Bank of England had to default in 1931. It wasn’t the gold standard that wasn’t functioning; it was these pre-war parities that didn’t work. All wanted to return to pre-war exchange rate parities, which, given the different degree of war and economic destruction from country to country, rendered this desire, in general, wholly unrealistic.
Today, going back on to the gold standard would be perceived as an act of desperation. But if the gold standard were in place today we would not have reached the situation in which we now find ourselves.
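Greenspan's point about pre-war parities can be made concrete with some toy purchasing-power arithmetic. The sketch below is illustrative only; the 10% excess-inflation figure is an assumption for the example, not a number from the interview or from the historical record:

```python
# Toy purchasing-power-parity arithmetic (illustrative numbers, not
# from the article) for Churchill's 1925 return to gold at $4.86.

prewar_parity = 4.86        # dollars per pound, the 1913 rate

# Suppose wartime inflation raised British prices ~10% more than
# US prices (an assumed figure, purely for illustration).
uk_excess_inflation = 0.10

# A purchasing-power-consistent rate would sit below the pre-war parity:
ppp_rate = prewar_parity / (1 + uk_excess_inflation)

# Returning at $4.86 anyway overvalues the pound by that same margin,
# so British prices and wages must deflate to restore competitiveness:
required_deflation = 1 - ppp_rate / prewar_parity

print(f"PPP-consistent rate: ${ppp_rate:.2f} per pound")
print(f"required deflation:  {required_deflation:.1%}")
```

Even under this mild assumption, pegging at the old parity forces roughly a 9% fall in domestic prices, which is the deflationary squeeze Greenspan attributes to Churchill's decision.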
Greenspan then says nice things about Paul Volcker's high-interest-rate policy:
Paul Volcker was brought in as chairman of the Federal Reserve, and he raised the Federal Fund rate to 20% to stem the erosion [of the dollar's value during the inflationary 1970s]. It was a very destabilising period and by far the most effective monetary policy in the history of the Federal Reserve. I hope that we don’t have to repeat that exercise to stabilise the system. But it remains an open question.
Ultimately, though, Greenspan claims that central-bank policy can be employed to largely imitate a gold standard:
When I was Chair of the Federal Reserve I used to testify before US Congressman Ron Paul, who was a very strong advocate of gold. We had some interesting discussions. I told him that US monetary policy tried to follow signals that a gold standard would have created. That is sound monetary policy even with a fiat currency. In that regard, I told him that even if we had gone back to the gold standard, policy would not have changed all that much.
This is a rather strange claim, however. It is impossible to know what signals a gold standard "would have" created in the absence of the current system of fiat currencies. One cannot, of course, recreate the global economy as it would exist under a gold standard and then guess how the fiat system might imitate it in real life. This final explanation appears to be more the sort of thing Greenspan tells himself so he can reconcile his behavior at the Fed with what he knows about gold and markets.
Nor does this really address the concerns Ron Paul expressed for years toward Greenspan and his successors. Even if monetary policymakers were attempting to somehow replicate a gold-standard environment, Paul's criticism was always that the outcomes of the current monetary regime can be shown to be dangerous for a variety of reasons. Among these problems are enormous debt loads and stagnating real incomes due to inflation. Moreover, thanks to Cantillon effects, monetarily induced inflation hits lower-income households hardest.
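The Cantillon-effect mechanism can be sketched with a deliberately stark toy model (all numbers and the quantity-theory price adjustment are assumptions for illustration, not claims from the article): whoever receives new money first spends it at old prices, while later receivers only ever face the higher prices.

```python
# Toy illustration (not from the article) of a Cantillon effect:
# new money enters through an "early receiver" who spends it before
# prices adjust, while the "late receiver" faces higher prices
# before any new money reaches them.

def purchasing_power(cash: float, price_level: float) -> float:
    """Real goods a household can buy at the current price level."""
    return cash / price_level

# Initial state: two households with equal cash, price level 1.0.
early, late = 1000.0, 1000.0
money_supply = early + late
price = 1.0

# The central bank injects 400 units of new money, all of it
# reaching the early receiver first (a deliberately stark assumption).
injection = 400.0
early += injection

# The early receiver spends at the OLD price level.
early_real = purchasing_power(early, price)  # 1400.0

# Prices then adjust in proportion to the larger money supply
# (a crude quantity-theory assumption for this sketch).
price *= (money_supply + injection) / money_supply

# The late receiver only ever faces the NEW price level.
late_real = purchasing_power(late, price)    # about 833.3

print(f"early receiver's real purchasing power: {early_real:.1f}")
print(f"late receiver's real purchasing power:  {late_real:.1f}")
```

Both households started with the same real purchasing power; after the injection the early receiver gains and the late receiver loses, which is the redistributive effect Paul pointed to.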
Even Greenspan admits this is the case with debt: "We would never have reached this position of extreme indebtedness were we on the gold standard, because the gold standard is a way of ensuring that fiscal policy never gets out of line."
Certainly, debt loads have taken off since Nixon closed the gold window in 1971, breaking the last link with gold.
Some of our assumptions are so deeply embedded that we cannot perceive them ourselves.
Case in point: everyone takes for granted that it’s normal for a country of 320 million to be dictated to by a single central authority. The only debate we’re permitted to have is who should be selected to carry out this grotesque and inhumane function.
Here’s the debate we should be having instead: what if we simply abandoned this quixotic mission, and went our separate ways? It’s an idea that’s gaining traction — much too late, to be sure, but better late than never.
For a long time it seemed as if the idea of secession was unlikely to take hold in modern America. Schoolchildren, after all, are told to associate secession with slavery and treason. American journalists treat the idea as if it were self-evidently ridiculous and contemptible (an attitude they curiously do not adopt when faced with US war propaganda, I might add).
And yet all it took was the election of Donald Trump for the alleged toxicity of secession to vanish entirely. The left’s principled opposition to secession and devotion to the holy Union went promptly out the window on November 8, 2016. Today, about one in three Californians polled favors the Golden State’s secession from the Union.
In other words, some people seem to be coming to the conclusion that the whole system is rotten and should be abandoned.
It’s true that most leftists have not come around to this way of thinking. Many have adopted the creepy slogan “not my president” – in other words, I may not want this particular person having the power to intervene in all aspects of life and holding in his hands the ability to destroy the entire earth, but I most certainly do want someone else to have those powers.
Not exactly a head-on challenge to the system, in other words. (That’s what we libertarians are for.) The problem in their view is only that the wrong people are in charge.
Indeed, leftists who once said “small is beautiful” and “question authority” had little trouble embracing large federal bureaucracies in charge of education, health, housing, and pretty much every important thing. And these authorities, of course, you are not to question (unless they are headed by a Trump nominee, in which case they may be temporarily ignored).
Meanwhile, the right wing has been calling for the abolition of the Department of Education practically since its creation in 1979. That hasn’t happened, as you may have noticed. Having the agency in Republican hands became the more urgent task.
Each side pours tremendous resources into trying to take control of the federal apparatus and lord it over the whole country.
How about we call it quits?
No more federal fiefdoms, no more forcing 320 million people into a single mold, no more dictating to everyone from the central state.
Radical, yes, and surely not a perspective we were exposed to as schoolchildren. But is it so unreasonable? Is it not in fact the very height of reason and good sense? And some people, we may reasonably hope, may be prepared to consider these simple and humane questions for the very first time.
Now can we imagine the left actually growing so unhappy as to favor secession as a genuine solution?
Here’s what I know. On the one hand, the left made its long march through the institutions: universities, the media, popular culture. Their intention was to remake American society. The task involved an enormous amount of time and wealth. Secession would amount to abandoning this string of successes, and it’s hard to imagine them giving up in this way after sinking all those resources into the long march.
At the same time, it’s possible that the cultural elite have come to despise the American bourgeoisie so much that they’re willing to treat all of that as a sunk cost, and simply get out.
Whatever the case may be, what we can and should do is encourage all decentralization and secession talk, such that these heretofore forbidden options become live once again.
I can already hear the objections from Beltway libertarians, who are not known for supporting political decentralization. To the contrary, they long for the day when libertarian judges and lawmakers will impose liberty on the entire country. And on a more basic level, they find talk of states’ rights, nullification, and secession – about which they hold the most exquisitely conventional and p.c. views – to be sources of embarrassment.
How are they going to rub elbows with the Fed chairman if they’re associated with ideas like these?
Of course we would like to see liberty flourish everywhere. But it’s foolish not to accept more limited victories and finite goals when these are the only realistic options.
The great libertarians – from Felix Morley and Frank Chodorov to Murray Rothbard and Hans Hoppe — have always favored political decentralization; F.A. Hayek once said that in the future liberty was more likely to flourish in small states. This is surely the way forward for us today, if we want to see tangible changes in our lifetimes.
Thomas Sowell referred to two competing visions that lay at the heart of so much political debate: the constrained and the unconstrained. In the constrained vision, man’s nature is not really malleable, his existence contains an element of tragedy, and there is little that politics can do by way of grandiose schemes to perfect society. In the unconstrained vision, the only limitation to how much society can be remade in the image of its political rulers is how much the rubes are willing to stomach at a given moment.
These competing visions are reaching an endgame vis-à-vis one another. As Angelo Codevilla observes, the left has overplayed its hand. The regular folks have reached the limits of their toleration of leftist intimidation and thought control, and are hitting back.
We can fight it out, or we can go our separate ways.
When I say go our separate ways, I don’t mean “the left” goes one way and “the right” goes another. I mean the left goes one way and everyone else — rather a diverse group indeed — goes another. People who live for moral posturing, to broadcast their superiority over everyone else, and to steamroll differences in the name of “diversity,” should go one way, and everyone who rolls his eyes at all this should go another.
“No people and no part of a people,” said Ludwig von Mises nearly one hundred years ago, “shall be held against its will in a political association that it does not want.” So much wisdom in that simple sentiment. And so much conflict and anguish could be avoided if only we’d heed it.