Blogroll: Mises Institute
I read blogs, as well as write one. The 'blogroll' on this site reproduces some posts from some of the people I enjoy reading. There are currently 266 posts from the blog 'Mises Institute.'
Disclaimer: Reproducing an article here need not necessarily imply agreement or endorsement!
The Treasury Department released new budget deficit numbers this week, and with two months still to go in the fiscal year, 2019's budget deficit is the highest it's been since the US was still being flooded with fiscal stimulus dollars back in 2012.
As of July 2019, the year-to-date budget deficit was 866 billion dollars. The last time it was this high was the 2012 fiscal year when the deficit reached nearly 1.1 trillion dollars.
At the height of the recession-stimulus panic, the deficit had reached 1.4 trillion in 2009. (The 2019 figure is year-to-date.)
What is especially notable about the current deficit is that it has occurred during a time of economic expansion, when, presumably, deficits should be much smaller.
For example, after the 1990-91 recession, deficits generally got smaller, until growing again in the wake of the Dot-com bust. Deficits then shrank during the short expansion from 2002 to 2007. During the first part of the post Great Recession expansion, deficits shrank again. But since late 2015, deficits have only gotten larger, and are quickly heading toward some of the largest non-recession deficits we've ever seen.
The situation is a result of both growing federal spending and falling tax revenues. As of the second quarter of 2019, year-over-year growth in federal spending was nearly at a nine-year high: spending rose 7.5 percent, year over year, during the second quarter. The last time spending grew as fast was the first quarter of 2010, when it increased 13.8 percent, year over year.
Meanwhile, federal revenue growth has fallen, with only one quarter out of the last eight showing year-over-year growth.
Historically, a widening gap between tax revenue and government spending tends to indicate a recession or a period immediately following a recession. We saw this pattern during the 1990-91 recession, the Dot-com recession, and the Great Recession.
The Trump administration has bragged that it has increased revenues through tax hikes (i.e., tariff increases), and as Bloomberg reports, "tariffs imposed by the Trump administration helped almost double customs duties to $57 billion in the period."
But tariff hikes also cut into entrepreneurial activity and overall production, reducing earnings and hobbling economic growth. Not surprisingly, tax revenues have not kept up.
Of course, if tax revenues actually limited government spending, there wouldn't be much to complain about. Lower revenues really would mean fewer resources flowing into government coffers — and that can be a good thing.
[RELATED: "Deficits Do Matter: Debt Payments Will Consume Trillions of Dollars in Coming Years" by Ryan McMaken]
But, in a world where government borrowing allows spending to balloon even in times of falling revenue, we're setting the stage for future problems. The more the total national debt rises, the more serious debt service will become if interest rates increase even moderately. Given the prospects for higher interest rates, the Congressional Budget Office estimates that interest payments on the debt will increase substantially in coming years.
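The sensitivity of debt service to interest rates can be sketched with simple arithmetic. The figures below are purely illustrative and are not the CBO's projections; the $22 trillion figure is a rough approximation of the total 2019 US national debt, not a number taken from this article.

```python
# Back-of-the-envelope sketch: annual debt service = outstanding debt x average rate.
# Illustrative assumption: ~$22 trillion total debt (approximate 2019 level).

def annual_interest(debt: float, rate: float) -> float:
    """Annual interest cost on `debt` at an average interest rate `rate`."""
    return debt * rate

DEBT_2019 = 22e12  # assumed, not from the article

for rate in (0.02, 0.03, 0.05):
    cost = annual_interest(DEBT_2019, rate)
    print(f"At an average rate of {rate:.0%}: ${cost / 1e12:.2f} trillion per year")
```

Even under these rough assumptions, a move in the average rate from 2 percent to 5 percent more than doubles the annual interest bill, which is the mechanism the paragraph above describes.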
It is troubling that after a decade of economic expansion, the US government is still spending money as it does during and immediately after a recession. If deficits are this large now, when times are good, how big will they become when the US enters recession territory? Back in 2009, the recession and its aftermath (i.e., massive amounts of stimulus) drove deficits beyond the trillion-dollar mark four years in a row. With 2019's deficit total now pushing toward 900 billion, we should perhaps expect deficits to top two trillion when the next recession hits. And probably for several years.
The piper will then need to be paid when interest rates increase and substantial cuts must be made to Social Security, Medicare, and military budgets in order to service the debt and avoid default.
This reckoning can be put off, however, so long as the dollar remains the world's reserve currency, and the central bank can continue to monetize the debt. As long as the dollar reigns supreme, the central bank can keep this up without causing high levels of price inflation. But when the day comes that the dollar can no longer count on being stockpiled worldwide, things will look very different.
The central bank won't be able to simply buy up debt at will anymore, interest rates will rise, and Congress will have to make choices about how many government amenities will be cut in order to pay the interest bill. Americans who live off federal programs will feel the pinch. State governments will have to scale back as federal grants dry up. The US will have to scale back its overstretched foreign policy. Not all of this is a problem, of course. But lower-income households and the elderly will suffer the most. Everything may seem fine now, but by running headlong into massive deficits even during a boom, the feds are setting up the economy for failure in the future.
But that's in the future, and few lawmakers in Washington are worried about much of anything beyond the next election cycle.
[Excerpted from "Voluntaryism: The Political Thought of Auberon Herbert," from the Journal of Libertarian Studies 2, no 4 (1978): 303–04.]
Against what types of actions do a person's rights provide moral immunity? Since person A's having a right to something involves his moral freedom and prerogative to do with that thing as he chooses (provided that in so doing A does not prevent person B from exercising his rights), A's rights are violated whenever he is prevented from doing as he chooses with what is rightfully his. Violations of rights consist in subverting a person's choice about and disposal of what he owns. Since physical force (and the threat thereof) is the great subverter of choice, since this is the essential vehicle for the non-consensual use of persons, their faculties, and their properties, it is against force (and the threat thereof) that all persons have rights. In addition, persons have rights against being subjected to fraud. For fraud is simply a surrogate for, and the moral equivalent of, force. Fraud is the "twin-brother of force … which by cunning sets aside the consent of the individual, as force sets it aside openly and violently".25
In The Right and Wrong of Compulsion by the State, Herbert is anxious to point out that there is a potentially dangerous confusion between "… two meanings which belong to the word force".26 Direct force is employed when person A, without his consent, is deprived of, or threatened with the deprivation of, something to which he has a right — e.g. some portion of his life, liberty, or property. Anyone subject to such a deprivation or threat is, in his own eyes, the worse for it. His interaction with the wielder of force (or fraud) is something to be regretted, something to which he does not consent. This is the case, e.g. when A pays B to stave off being beaten or murdered by B. In contrast, B might get A to pay B a certain sum or do B a particular service, by indicating that B will only do something which A values if A pays that sum or renders that service. By so indicating the conditions for A's receiving from B what A values, person B may get person A to do something which, in itself, A had rather not do. If B does induce A to act by threatening (so-called) to withhold what A values, then, according to Herbert, we can say that B has used "indirect force" upon A. But "indirect force" is radically different from "direct force". In the case of indirect force, person A does not act under a genuine threat. For he is not faced with being deprived of something rightfully his (e.g. his arm or his life). Instead he is bribed, coaxed, induced, into acting by the lure of B's offer of something which is rightfully B's. No rights-endangering act plays any role in motivating A. A may, of course, wish that B had offered even more. But in accepting B's offer, whatever it may be, A indicates that on the whole he consents to the exchange with B. He indicates that he values this interchange with B over the status quo. He indicates that he sees it as beneficial — unlike all interactions involving direct force.
The employer may be indirectly forced to accept the workman's offer, or the workman may be indirectly forced to accept the employer's offer; but before either does so, it is necessary that they should consent, as far as their own selves are concerned, to the act that is in question. And this distinction is of the most vital kind, since the world can and will get rid of direct compulsion; but it can never of indirect compulsion…27
Besides, Herbert argues, any attempt to rid the world of indirect force must proceed by expanding the role of direct force. And, "… when you do so you at once destroy the immense safeguard that exists so long as [each man] must give his consent to every action that he does".28 The believer in strong governments cannot claim, says Herbert, that in proposing to regulate the terms by which individuals may associate, he is merely seeking to diminish the use of force in the world.
What, then, may be done when the violation of rights threatens? So strong is Herbert's critique of force that, especially in his early writings, he is uncomfortable about affirming the propriety of even defensive force. Thus, in "A Politician in Sight of Haven", the emphasis is on the fact that the initiator of force places his victim "outside the moral-relation" and into "the force-relation". Force, even by a defender, is not "moral". The defender's only justification is the necessity of dealing with the aggressor as one would with "a wild beast". Indeed, so pressed is Herbert in his search for some justification that he says, in justification of his defense of himself, that "The act on my part was so far a moral one, inasmuch as I obeyed the derived moral command to help my neighbor".29 In The Right and Wrong of Compulsion by the State, Herbert starts by identifying the task of finding moral authority for any use of force with the task of finding moral authority for any government. He declares that no "perfect" foundation for such authority can be found, that all such authority is an usurpation — though "when confined within certain exact limits … a justifiable usurpation".30 Herbert also asserts the inalienability of each person's rights — including, presumably, the rights of each aggressor. This seems to confirm the status of even defensive force as an usurpation. But then Herbert seems to reverse himself — arguing that those who use force (or fraud), having disallowed, "this universal law … therefore lose the rights which they themselves possess under it".31 Finally, Herbert arrives at the considered judgment that, within special contexts, self-preservation does justify self-defense. 
Self-preservation "… justifies an action wrong in itself (as the employment of force) only because of the wrong which has been already committed in the first instance by some other person".32 Ten years later, Herbert was, if anything, more hesitant about defensive force when he wrote,
If the self is the real property of the individual, we may, I think assume (it is however at best an assumption) that force may be employed to repel the force that would take from an individual this special bit of property in himself…33
Finally, however, Herbert seems to have fully overcome his hesitancy about defensive force. Possibly his most forceful statement appears in the essay, "A Voluntaryist Appeal".
If you ask us why force should be used to defend the rights of Self-ownership, and not for any other purpose, we reply by reminding you that the rights of Self-ownership are … supreme moral rights, of higher rank than all other human interests or institutions; and therefore force may be employed on behalf of these rights, but not in opposition to them. All social and political arrangements, all employments of force, are subordinate to these universal rights, and must receive just such character and form as are required in the interest of these rights.34
- 25. Auberon Herbert, "A Plea for Voluntaryism", p. 329.
- 26. Herbert, Right and Wrong of Compulsion by the State and Other Essays (Indianapolis, Ind.: Liberty Classics, 1978), p. 144.
- 27. Ibid., pp. 144–145.
- 28. Ibid., pp. 145–146.
- 29. Herbert, "A Politician in Sight of Haven", p. 101. (Italics added)
- 30. Herbert, Right and Wrong, p. 141
- 31. Ibid.
- 32. Ibid., p. 142.
- 33. Herbert, "State Socialism in the Court of Reason", p. 29. (Italics added.)
- 34. Herbert, "A Voluntaryist Appeal", p. 317.
[Adapted from an interview with His Serene Highness Prince Michael of Liechtenstein. H.S.H. Prince Michael of Liechtenstein is the Founder and Chairman of Geopolitical Intelligence Services AG, as well as president of the Think Tank ECAEF (European Centre of Austrian Economics Foundation). He is Chairman of Industrie- und Finanzkontor in Vaduz (Liechtenstein).]
Claudio Grass (CG): The spirit of governance, as well as the local culture of Liechtenstein, seem to support and work in harmony with the ideas of personal freedom, independence and especially respect for private property. To what extent do you think this was influenced by the heritage and the history of your family and of past generations?
H.S.H. Prince Michael of Liechtenstein (PML): We have a very well-balanced governance system here in Liechtenstein, which results in cohesion and prosperity. It is proof that the combination of monarchy, direct democracy, and the high autonomy of the municipalities works well. This combination forces all parts of government to apply credible politics. The monarchy’s reputation and strength rest on a balanced family constitution and on the discipline the Princely family demands of its own members, which instills a high sense of responsibility. It is widely agreed that a democracy can only function on the ideals of personal freedom, independence, subsidiarity, personal responsibility, and respect for private property.
CG: What about today? Do you see these values and individual rights as being under threat in recent years and how are they defended in Liechtenstein?
PML: Unfortunately, even in Western democracies, the values of freedom and independence and the respect for private property are being steadily eroded. A flood of laws limits freedom of choice, and regulations violate property rights. In today’s European societies, many are tempted to happily exchange freedom for an illusion of security. Unfortunately, we see such an attitude in Liechtenstein as well, though it is much less pronounced. However, our systems are robust enough to protect individual freedom and property rights.
CG: The European Center for Austrian Economics Foundation (ECAEF) has played a key role in researching and promoting sound ideas and advancing arguments for self-responsibility and limited government. It can be argued, however, that the political trend seems to be headed in the opposite direction for quite some time, with some even claiming that WWI marked the end of civilization. What is your view on this and do you think we should still remain optimistic about a possible reversal toward more individual freedom?
PML: World War I might not have marked the end of civilization, but it marked the start of the phase where Europe’s influence in the world and its combination of Christianity and Liberalism (a very successful model) started to decline. Liberalism, which includes values such as personal freedom and property rights, is based on Christianity. Personal responsibility is a basic factor in Christianity.
This system has also served the Western economy and its prosperity very well. But Europe became very saturated. After seventy years of peace following World War II, Europe left the path of a drive to achievement and turned to a drive of self-protection, anxiety, and redistribution. This saturation will necessarily lead to a crisis, and I believe, as unfortunate as it is, that a big disruption will be necessary for a return toward more individual freedom. If this does not happen, Europe will again fall into poverty and a loss of freedom. I am optimistic in the long term, but I see quite some trouble in the near future.
CG: The last few years have seen a sharp decline in the quality of public debate in Europe and in the US, as deep divisions and political polarization has turned civilized dialogue into name-calling and shouting matches. Freedom of speech and its limits have also come under scrutiny and many attempts to curb it have backfired. How important a role do you think freedom of expression will play if we are ever to return to a higher level of public discourse?
PML: Political correctness has degraded the public debate in Europe and the US to a high degree of mediocrity. The essence of democracy and a free society is an open debate of sometimes clashing opinions. Under the term “polarization,” differing opinions are decried, and ideas that do not correspond to the accepted mediocrity are marginalized as radical, populist, right-wing, etc. As a result, freedom of expression is limited.
Therefore, all of a sudden, as soon as there are differences, name-calling and shouting matches are replacing a sound public debate. In order to get to a higher level of public discourse, we have to come back to the real freedom of expression, which unfortunately is more and more limited. Sometimes polarization is a necessary ingredient of a functioning and healthy democracy.
CG: What are the key challenges and opportunities you can see emerging from this ongoing technological push toward decentralization and digitalization? As new ideas and systems give power back to the individual, do you expect to see a social impact in addition to an economic one?
PML: All new positive technologies make men more efficient and a society more prosperous. The fear that there will be fewer jobs due to new technologies such as robotics, artificial intelligence, etc., is unjustified. In fact, new technologies will create new types of jobs. The challenge will be to manage the transformation.
Blockchain with its system of decentralization has the big advantage that the individual becomes much more independent from centralized institutions such as state agencies or some private providers such as banks, notaries, etc. This will have a very positive social impact, as the grip of government on individuals will become weaker. And the economic advantage will be a considerable reduction of transaction costs. Blockchain will be successfully applied in many areas, but it will need time for the benefits to ripen.
CG: It can be argued that we are going through strange times geopolitically, with the US shifting away from its traditional leadership role in many global issues and with rising trade tensions threatening to rupture or redefine key alliances. At the same time, we see a lot of political undercurrents in Europe rise to the surface, with key electoral victories of anti-establishment parties and movements. In your position as the Founder and Chairman of Geopolitical Intelligence Services (GIS) in Vaduz and from your own extensive experiences, do you believe this period to be unique or do you see historical parallels and patterns that might guide our expectations and outlook?
PML: The world is in a time of extreme political disruption. But we have had similar times before, especially the age of the Renaissance in Europe, which finally shaped today’s situation on the Continent. It is very difficult to apply historical parallels and patterns. World War I was an incident not as disruptive as the Renaissance, but it started a new period of European and Western decline. The period leading up to it, however, offers a good lens for today’s lingering conflict between the US and China. The mistakes the European powers made in the decades before the outbreak of World War I should be a warning for today’s politics.
CG: The global economy also seems to stand at a crossroads. After a decade of heavy-handed interventions by central banks in all major economies, combined with an explosion in debt levels, it would seem that unsupported, “organic” growth is arguably dead, while financial markets are addicted to low rates and central bank accommodations. How do you evaluate the impact of these policies and what are, in your estimation, the biggest economic risks moving forward?
PML: The biggest problem, not only economically but also politically, is the debt problem. It is unimaginable how this madness of creating more and more debt and pushing the economy by inflating the money supply will end. The only outcome one can imagine now is that the resulting catastrophe will be big. A small group of people already believes that the only solution, as terrible as it is, will be a major war. I cannot really disagree with that assumption, because the resulting crisis might lead to more and more political tensions, which could unload like a thunderstorm in a war. How such a war would unfold is unclear; it might be limited to the cyber sphere, or traditional military forces might be deployed.
CG: In this context, what do you think the role of gold will be in the coming years? What do you make of the fact that we’re seeing key central banks, e.g., in Russia and China, racing to increase their reserves in recent years?
PML: I think gold will always play an important role. People simply trust in it, although it is not always very practical. I believe that the central banks in Russia and China have seen the possibility to increase trust in their currencies by holding larger gold reserves. This is important, because we must not forget that the value of money is based on the trust of the people who use it. Gold is a good hedge against the inflated supply of currencies, which will finally destroy the trust that people still have.
One of the most fundamental things about economics — which many people who are passionate about politics do not understand — is that the economy is not like a chess board where you can move one piece with deterministic and predictable consequences. On the contrary, an economy is an intricate fabric of interrelated institutions and actors all of whom act relative to one another. Any one move creates a cascade of domino effects. If the price of milk changes dramatically then orange juice sales might be affected — and with them the prices of other fruits as well. It's impossible to predict.
The role of a good economist is to be able to follow the threads of consequences likely to result from a certain policy. If done properly, this can help to minimize the damage done by the short-sightedness of policymakers (and would-be policymakers) seeking some immediate and favorable end.
Many policies can end up having the opposite effect from what is intended.
Beauty Salon Economics
For example, suppose some of the fancy hair salons are getting irked because cheap salons are popping up everywhere and giving people poor-quality haircuts. As far as the high-end shops are concerned, those "other" salons are giving the whole industry a bad name. So a coalition goes to the government to pass standards and licensing laws for the hairdressing industry (in some places you currently need a license to braid hair). That’s going to improve the quality of haircuts, right?
Not necessarily. Now all the hair salons have to send their employees to college for two years to get a license, and when they graduate they expect much higher pay because they just sank two years into an education during which they earned no money. They went out drinking with their student loans, paid rent, and accrued debts. What’s more, the salons need to consult special accountants or lawyers to prove they are adhering to the new regulations, even the ones who are already far ahead of the law, providing better conditions and services than the new standards require. Such professionals can charge upwards of $100 an hour. Many independent salons simply can’t afford the increase in costs and have to close down entirely; others have to jack prices up to pay for the extra costs of compliance and staff. In some areas only one salon is left standing, and since people have less choice it can afford to let standards slip.
With the price of haircuts going up, lots of people decide to go without. They cut their friends’ hair at home. Badly. Or they get pretty good at it and no longer go to the hairdresser’s, but they take longer to prepare for going out and miss out on the chat and gossip. What’s more, everyone who does still go for a professional haircut has less left over to spend on a manicure or something else nice, so other industries also suffer. Add to that the marginal increase in taxes to pay the civil servants in the new public body that regulates the hairdressing industry. Those people are now occupied with busywork instead of making goods and providing services that improve people’s living standards in real terms, and rather than paying into the public purse they are a net drain on it.
I chose a relatively trivial example because it perfectly illustrates how a seemingly simple and innocuous policy suggestion, mandatory hairdressing licenses, can generate more than its fair share of consequences. (Yes, haircuts are not a life-or-death issue. But neither are many other activities that are similarly regulated.) An alternative is for a series of private watchdogs to certify only hairdressers who meet their standards and give the ones who do an official number and sticker to put in their window; because the watchdogs are competing, they have to keep the costs of certification to a minimum (no $100-an-hour fees), and people who don't care to pay extra for a certified cut can take a risk on somewhere cheaper or go by word of mouth.
Occupational licensing makes for an interesting case because it is almost universally considered to be in the public interest and even necessary to prevent catastrophe. And yet there is actually zero evidence that it leads to a higher quality of service provision. Zilch!
After compiling a meta-analysis entitled, "Rule of Experts," S. David Young concluded “…most of the evidence suggests that licensing has, at best, a neutral effect on quality and may even cause harm to the consumers. ... The higher entry standards imposed by licensing laws reduce the supply of professional services. … The poor are net losers because the availability of low-cost service has been reduced.”
Stanley Gross of Indiana State University, had to concur, “…mainly the research refutes the claim that licensing protects the public.”
More recently, economist Morris Kleiner released two publications (2006, 2013) for the Upjohn Institute for Employment Research demonstrating that licensing occupations does more to restrict competition than to ensure quality.
However, until this is more broadly understood we may continue seeing mandatory licensing, not just for hairdressers and manicurists, but for tour guides, librarians, locksmiths, dry cleaners, auctioneers, fruit ripeners, plumbers, private investigators, Christmas tree vendors, florists, interior designers, funeral directors, cab drivers, shampoo specialists, glass installers, cat groomers, tree groomers, hunting guides, kick boxers, real estate agents, tattoo artists, nutritionists, acupuncturists, music therapists, yoga instructors, and morticians, with all the attendant consequences that remain all but invisible to the untrained eye.
There are two fundamental arguments most commonly made against gun control.
The Anti-Crime Argument
The first one is based on the idea that persons have a fundamental right to self-defense against ordinary criminals. That is, in a world where criminals have access to either legal or illegal weapons, ordinary people ought to be able to arm themselves for purposes of self defense.
The benefits of private gun ownership in this regard can be illustrated in a variety of ways. Mexico's strict gun-control regime, for instance, ensures ordinary Mexicans are at the mercy of the cartels and ordinary street criminals. Mexico's astoundingly high homicide rates illustrate the unfortunate reality.
Moreover, within the United States, some of the worst regions for homicides are areas with some of the strictest gun control laws. Baltimore, for example, has a homicide rate ten times that of the United States overall, while the state of Maryland heavily restricts gun ownership.
Studies that assert "more guns means more crime," meanwhile, have never been able to demonstrate a causal relationship here. Not only is there no reliable data on where exactly all the guns are, but the direction of causality can go either way. We would expect people living in a high crime area to be more likely to purchase a gun for protection. In other words, the proper conclusion may just as likely be "more crime means more guns."
The gun-for-self-defense argument is the easier one to make. For the most part, one need only argue that people need to be at least as well armed as ordinary criminals. Shotguns and rifles for home defense, or conceal-carry of handguns, for instance, would arguably be sufficient.1
The Defense-Against-Tyranny Argument
The other argument for private gun ownership is the argument that weapons ought to be owned by a sizable portion of the population as a defense against an abusive government.
In the current ideological environment, this is the harder argument to make. And, as we shall see, this argument depends heavily on making the case that a standing army controlled by the federal government is a threat to freedom. As it now stands, this argument isn't exactly popular.
The Origins of the Second Amendment
In the late eighteenth century, however, the idea that a standing army was a grave danger to any society was far more common. The eighteenth-century arguments behind the Second Amendment, of course, always centered on providing a check on the central government's military power. Those arguments go back well beyond the Declaration of Independence, at least to the days of the English Civil War. In the 1660s it was agreed that troops were necessary to maintain order, but few trusted the central government with the task. Thus arose "a nationwide militia, composed of civilians who would — as in earlier days — be summoned in time of need."
In practice, this meant the militia members would have access to their own arms, and be skilled in their use. Much was made of the idea that a national standing army, under the command of the central government was a significant danger to the liberties of the resident population. This idea persisted in Britain even into the nineteenth century.
These ideas eventually made their way to the United States, where state and local militias were commonly used during the Revolutionary War and afterward, such as when the Massachusetts Militia was successfully used to put down Shays' Rebellion. Formal and independent militias — some controlled by cities and states, and some semi-private — continued to exist throughout the nineteenth century. But these were also supplemented by the idea of the "unorganized" militia, which in many state constitutions was defined as "all able-bodied male residents of the State, between the ages of eighteen and forty-five years." As noted by Jeffrey Rogers Hummel, these militias were successfully employed in defense against Indian raids, and as part of the invasion force of the Mexican War.
Undergirding the idea of the militia was always the belief that a sizable federal standing army was a threat to American freedoms, and that, outside the Navy, military force ought to be decentralized and subject to state and local control.
These militias — once called into service — were usually subject to control by government officials, whether local or at the state level. But it was often assumed that the ranks of the militia would be filled by residents skilled in the use of their own private arms. This, of course, also assumed private gun ownership. Moreover, it assumed ownership of arms — and proficiency with them — at the level of a military unit.

So What Does This Have to Do with the Gun Control Argument Today?
The idea of the militias as a check on standing armies remains important because gun-control advocates are now specifically targeting the defense-against-tyranny argument in their drive to further criminalize gun ownership.
For example, last week, Democratic Presidential candidate Joe Biden was asked about his position on gun control:
COOPER: So, to gun owners out there who say, well, a Biden administration means they're going to come for my guns?
BIDEN: Bingo. You're right if you have an assault weapon. The fact of the matter is, they should be illegal, period. Look, the Second Amendment doesn't say you can't restrict the kinds of weapons people can own. You can't buy a bazooka. You can't have a flame thrower.
The guys who make these arguments are the people who say the tree of liberty is watered with the blood of patriots, we need the protection against the government. We need an F-15 for that. We need something well beyond whether or not you're going to have an assault weapon.
Biden is not claiming that no gun ownership is to be allowed. Instead, he's just saying people don't need guns beyond what's necessary for personal defense. In his mind, that means it's practical to eliminate private ownership of so-called "assault weapons."
But the tactic here is clear: Biden is attacking the guns-against-tyranny argument because he knows if he can win that, he can make a case for abolishing legal ownership of semi-automatic rifles like AR-15s.
And the claim he makes will strike many listeners as quite sensible. Here are the key components:
- It is silly to assert an AR-15 will defend against tyranny.
- Why is it silly? Because a tyrannical US government would be armed with F-15 fighter jets and flame throwers and bazookas. Thus, the idea that people could defend against this with some AR-15s is absurd.
- Should people be allowed to own military hardware then? Of course not! Everyone knows we can't just let anybody own a bazooka or a machine gun.
The conclusion is this: let's move on from this fantasy about how your AR-15 fights tyranny, and let's get down to making people safe by getting these "weapons of war" out of the hands of people who will use them to slaughter children.
After all, very few people are willing to go on CNN and say "yes, I think flame throwers and armored vehicles with belt-fed machine guns on top should be perfectly legal for everyone at any time." Few are willing to say this for good reason. They'd be mocked and dismissed as utterly irrelevant.
Moreover, the pro-gun-control side benefits from the fact that the US military is incredibly popular, and we are regularly told that all American liberties only exist because the US military makes freedom possible. Why would anyone need a gun to shoot at those American heroes?

Rebuilding the Dike: Yes, Standing Armies Are a Problem, and Yes, We Must Decentralize Military Power
Unfortunately, anyone who wants to really defend the guns-against-tyranny argument has his work cut out for him.
For more than a century now, Americans have been told there is no longer any role for any sort of military force that is beyond the direct control of the federal government. In other words, a huge standing federal army is fine, and we don't need military hardware — privately owned or otherwise — to defend against it.
This idea was made a legal reality with the Militia Act of 1903. With the new legislation, the federal government created the so-called National Guard which would spell the doom of the unorganized militia in the US, and serve to completely undermine the Second Amendment and its defense of decentralized military power in the US.
After 1903, the federalization of the state militias only accelerated until, as historian David Yassky concludes, "Today's National Guard is thus a far cry from what the Founders understood a militia to be," and the result of these changes has brought about "the disappearance of anything the Founders would have recognized as a militia." Far from acting as a bulwark against abuse of federal power, today's National Guard is something the authors of the Second Amendment "would have seen as little better than a standing army."
In terms of the gun-control debate, this change has removed the idea of the unorganized militia from public debate, and has also solidified the idea that all military power in the United States ought to be controlled by generals in Washington, DC. Thus, the idea that anyone outside federally-controlled military units ought to have military-level weapons strikes most as bizarre.
Yes, some vestiges of the old system did persist later than 1903. As late as 1990, it was still possible — at least in theory — for state governors to legally veto deployment of National Guard units ordered by US presidents. But even that independence is gone now.
Historically, however, state governments could — and did — refuse to deploy troops at the request of presidents.
Today, the idea that standing armies are a danger — or that local and state militias have a role in defending against them — is profoundly unpopular among policymakers and average voters alike.
Thus, from this point, if one is going to assert that guns (and other weapons) are needed to protect against tyranny, one has to rebuild the entire Second Amendment edifice against standing armies: an edifice which imagines both an organized and unorganized militia outside the control of the central government.
After all, even many people who fancy themselves big defenders of the Second Amendment certainly don't act like it. Often, the same people who clamor to thank soldiers "for their service" say in the next breath that guns are necessary to defend themselves against the military's soldiers. Moreover, the destruction of the militia's independence was historically cheered by conservatives. It was gun-owning social conservatives who supported legislation like the Montgomery Amendment, which put the final nail in the coffin of state control of National Guard units. Needless to say, the bill's sponsor, conservative Congressman Sonny Montgomery of Mississippi, was not exactly punished by the voters for his assault on the Second Amendment at election time.
- 1. I don't agree with this claim, but I suspect many would find it convincing.
[Excerpt from Man, Economy, and State with Power and Market, chapter 6: Antimarket Ethics: A Praxeological Critique (2009), pp. 1303–06.]
Some writers are astute enough to realize that the market economy is simply a resultant of individual valuations, and thus they see that, if they do not like the results, the fault lies with the valuations, not the economic system. Yet they proceed to advocate government intervention to correct the immorality of individual choices. If people are immoral enough to choose whiskey rather than milk, cosmetics rather than educational matter, then the State, they say, should step in and correct these choices. Much of the rebuttal parallels the refutation of the knowledge-of-interests argument; i.e., it is self-contradictory to contend that people cannot be trusted to make moral decisions in their daily lives but can be trusted to vote for or accept leaders who are morally wiser than they.
Mises states, quite rightly, that anyone who advocates governmental dictation over one area of individual consumption must logically come to advocate complete totalitarian dictation over all choices. This follows if the dictators have any set of valuational principles whatever. Thus, if the members of the ruling group like Bach and hate Mozart, and they believe strongly that Mozartian music is immoral, they are just as right in prohibiting the playing of Mozart as they are in prohibiting drug use or liquor consumption.5 Many statists, however, would not balk at this conclusion and would be willing to take over this congenial task.
The utilitarian position — that government dictation is bad because no rational ethics exists, and therefore no person has a right to impose his arbitrary values on someone else — is, we believe, an inadequate one. In the first place, it will not convince those who believe in a rational ethics, who believe that there is a scientific basis for moral judgments and that they are not pure whim. And furthermore, the position involves a hidden moral assumption of its own — that A has no right to impose any arbitrary values on B. But if ends are arbitrary, is not the end “that arbitrary whims not be imposed by coercion” just as arbitrary? And suppose, further, that ranking high on A's value scale is the arbitrary whim of imposing his other values on B. Then the utilitarians cannot object and must abandon their attempt to defend individual liberty in a value-free manner. In fact, the utilitarians are helpless against the man who wants to impose his values by coercion and who persists in doing so even after the various economic consequences are pointed out to him.6
The would-be dictator can be logically refuted in a completely different way, even while remaining within Wertfrei praxeological bounds. For what is the complaint of the would-be dictator against free individuals? That they act immorally in various ways. The dictator's aim, therefore, is to advance morality and combat immorality. Let us grant, for the sake of argument, that an objective morality can be arrived at. The question that must be faced, then, is: Can force advance morality? Suppose we arrive at the demonstrable conclusion that actions A, B, and C are immoral, and actions X, Y, and Z are moral. And suppose we find that Mr. Jones shows a distressing propensity to value A, B, and C highly and adopts these courses of action time and again. We are interested in transforming Mr. Jones from being an immoral person to being a moral person. How can we go about it? The statists answer: by force. We must prohibit at gunpoint Mr. Jones from doing A, B, and C. Then, at last, he will be moral. But will he? Is Jones moral because he chooses X when he is forcibly deprived of the opportunity to choose A? When Smith is confined to a prison, is he being moral because he doesn't spend his time in saloons getting drunk?
There is no sense to any concept of morality, regardless of the particular moral action one favors, if a man is not free to do the immoral as well as the moral thing. If a man is not free to choose, if he is compelled by force to do the moral thing, then, on the contrary, he is being deprived of the opportunity of being moral. He has not been permitted to weigh the alternatives, to arrive at his own conclusions, and to take his stand. If he is deprived of free choice, he is acting under the dictator's will rather than his own. (Of course, he could choose to be shot, but this is hardly an intelligible conception of free choice of alternatives. In fact, he then has only one free choice: the hegemonic one — to be shot or to obey the dictator in all things.)
Dictatorship over consumers’ choices, then, can only atrophy morality rather than promote it. There is but one way that morality can spread from the enlightened to the unenlightened — and that is by rational persuasion. If A convinces B through the use of reason that his moral values are correct and B's are wrong, then B will change and adopt the moral course of his own free will. To say that this method is a slower procedure is beside the point. The point is that morality can spread only through peaceful persuasion and that the use of force can only erode and impair morality.
We have not even mentioned other facts that strengthen our argument, such as the great difficulty in enforcing dictatorial rules against people whose values clash with them. The man who prefers the immoral course and is prevented by the bayonet from acting on his preference will do his best to find ways to circumvent the prohibition — perhaps by bribing the bayoneteer. And, because this is not a treatise on ethics, we have not mentioned the libertarian ethical theory which holds that the use of coercion is itself the highest form of immorality.
Thus, we have shown that would-be dictators must necessarily fail to achieve their professed goal of advancing morality because the consequences will be precisely the opposite. It is possible, of course, that the dictators are not really sincere in stating their goal; perhaps their true purpose is to wield power over others and to prevent others from being happy. In that case, of course, praxeology can say no more about the matter, although ethics may find a good deal to say.7
- 5. Mises, Human Action, pp. 728–29. The same total dictatorship over consumer choice is also implied by the knowledge-of-interest argument discussed above. As Thomas Barber astutely says:
It is illegal for pleasure-boaters to fail to carry a life preserver for every person on board. A great number of young men are publicly employed to go about and look for violators of this law. Pleasant for the young men, of course. But is it really any more the government's business that a man goes canoeing without a life preserver than that he goes out in the rain without his rubbers? ... The law is irritating to the individual concerned, costly to the taxpayers, and turns a lot of potential producers into economic parasites. Perhaps the manufacturers of life preservers engineered its passage. (Barber, Where We Are At, p. 89)
- 6. It is true that we do not advocate ends in this volume, and in that sense praxeology is “utilitarian.” But the difference is that utilitarianism would extend this Wertfrei injunction from its proper place in economics and praxeology to embrace all of rational discourse.
- 7. Mises often states that interventionary measures in the market, e.g., price controls, will have consequences that even the government officials administering the plans would consider bad. But the problem is that we do not know what the government officials’ ends are — except that they demonstrably do like the power they have acquired and the wealth they have extracted from the public. Surely these considerations may often prove paramount in their minds, and we therefore cannot say that government officials would invariably concede, after learning all the consequences, that their actions were mistaken.
Shannon O’Toole, according to the author's biography on Amazon, “worked extensively to identify fraud in multiple government programs. She received countless accolades and honors for her achievements and finally the prestigious HUD Secretary’s Award for her work.”
The talented Ms. O’Toole was a single mother needing a job when she showed up at the FDIC, which she describes as a “disorganized, flying-by-the-seat-of-your-pants atmosphere that seemed to permeate the place.” While she bore the brunt of untangling complicated real estate assets and readying them for sales, O’Toole was reminded again and again that government is not efficient or fair, as she was passed over for promotions by friends of one misogynist boss after another.
Her account, Washington Siren: A Woman’s Journey Through Scathing Scandals, Lies, and Secrets Inside the FDIC, HUD, IRS and Other Agencies, with a Love Story that Survives it All, written, for some reason, in the third person, chronicles O’Toole’s long career of Sisyphean frustration with government bureaucracy.
As of the book’s 2017 publication date, her National Government Lien Recovery Program hadn’t gone anywhere, despite having the potential, if implemented, to raise trillions of dollars for the US Treasury.
However, as Ludwig von Mises explained in his 1944 book Bureaucracy,
The bureaucrat is not free to aim at improvement. He is bound to obey rules and regulations established by a superior body. He has no right to embark upon innovations if his superiors do not approve of them. His duty and his virtue is to be obedient.
As she works for the RTC (Resolution Trust Corporation), she comes face-to-face with competing government and private contractor factions fighting to sell the foreclosed properties from failed S&Ls. “There just seem to be layers and layers of oversight personnel, all watching but doing nothing.”
One of her co-workers explained that the RTC’s property management division was desperately keeping SAMDA contractors in place. “They’re trying to build an empire out of managing the assets and don’t want the sales department to dispose of anything. If Thomas can make a proposal to Washington DC for hiring more staff, his grade and pay go up because his empire increases.”
In the for-profit world this wouldn’t make sense, but, this is government. Mises wrote in Omnipotent Government,
Only to bureaucrats can the idea occur that establishing new offices, promulgating new decrees, and increasing the number of government employees alone can be described as positive and beneficial measures.
Because contractors were paid a fee based upon the value of the assets they managed, they wanted to hold them as long as they could. And, the contractors’ overseers at the RTC didn’t want the assets sold because that would put them out of their jobs.
Of course, the result was billions in government waste. “I’ve seen laziness at the FDIC and some waste but nothing like this,” O’Toole told a colleague. “This is beyond reason. It’s just plain crazy.”
His reply was one the author would hear often. “It’s just plain politics.”
“Bureaucratic management is management of affairs which cannot be checked by economic calculation,” wrote Mises.
If O’Toole pushed back, she was met with rules and regulations. “We must all accept that FIRREA is the founding legislation for the RTC, and that FIRREA instructs us to use outside fee contractors to manage all the properties from the failed savings and loans,” one of her superiors announced at a meeting.
Again, Mises from Bureaucracy,
They are no longer eager to deal with each case to the best of their abilities; they are no longer anxious to find the most appropriate solution for every problem. Their main concern is to comply with the rules and regulations, no matter whether they are reasonable or contrary to what was intended. The first virtue of an administrator is to abide by the codes and decrees.
O’Toole writes evocatively about union-protected government employees disappearing from their jobs for weeks and sometimes years at a time. At one point, she was ready to retire, but the OPM (Office of Personnel Management) couldn’t tell her what her monthly check would be, due to outdated computer systems.
Working for HUD and FHA during the Obama administration, the author couldn’t believe that the mandate to soften mortgage underwriting standards came down from DC, despite the nation still working through the rubble of the 2008 mortgage crisis.
She writes that government work is “a world of processes, not profits,” and that some of her friends call federal employment “White Collar Welfare.”
Through research, O’Toole determined the government places liens on properties for unpaid taxes. However, the government, she found, never followed up after being notified the properties in question were being transferred.
Thus, the liens fall by the wayside, and the government is left to chase tax avoiders through garnishments and court actions. O’Toole tried desperately to obtain data on the number and total amount of liens placed by the IRS on a state-by-state basis. The tax collector demurred, citing privacy concerns, despite O’Toole not asking for individual taxpayer information, just the aggregate numbers.
Libertarians might knowingly chuckle about all of this government nonsense, except, the Democratic presidential candidates are advocating that this same government completely take over the healthcare system.
A chilling thought after reading O’Toole's memoir.
The Brennan Center for Justice recently published a collection of essays, all written by far-left politicians, about how the United States might solve the problem of mass incarceration. Bernie Sanders contributed an essay titled “Abolish For-Profit Prisons.” His essay should come as no surprise; during the 2016 election, he made headlines after proposing the Justice Is Not For Sale Act. Sanders is hardly alone in targeting private prisons as the culprit in over-imprisonment. In the first primary debate of 2019, Elizabeth Warren (falsely) claimed that private prison stocks have grown rapidly since President Trump took office. The specter of private prisons is a popular talking point among Democrats, but the facts about the prison system do not support their diagnosis of the mass incarceration problem.
At the outset, it is always worth clarifying that private prisons are “private” in only the loosest sense of the word. It is true that the profits from these facilities are privatized, but as with any crony enterprise, the costs are socialized. State governments use taxpayer dollars to fund the contracts, and they stipulate the terms of operation. Many people see private prisons as an indictment of capitalism, failing to recognize that these prisons are wholly dependent on the beneficence of the government — the very entity that people like Bernie Sanders want to take control over the prisons as a solution to the problem.
But even if we grant this delusional view of capitalism, which treats any organization that is not fully nationalized as a capitalist enterprise, private prisons would still fail to explain mass incarceration. This has hardly softened the demagoguery about for-profit prisons, of course. In 2018, the Equal Justice Initiative ran the headline “Private Prison Populations Skyrocket.” The article reports that private prisons hold 128,063 prisoners. This, apparently, is “skyrocketing” growth from the 2008 numbers, in which private prisons held 128,525 prisoners.
The alleged “increase” is only derived from the percentage change from 8 percent of the total prison population in 2008 to a whopping 8.5 percent in 2016. Apparently, skyrocketing growth entails a proportional growth rate of less than a tenth of a percent per year, and even this only occurred because the total population of prisoners decreased, fully undermining the claim that for-profit prisoning is the cause of mass incarceration.
The reality is that exclusively attacking private prisons does more to preserve mass incarceration. The concern is that private prison lobbyists will spend money to ensure a constant flow of prisoners. This is not an unreasonable concern; incentives matter, and the prison industrial complex is rife with perverse incentives. What the focus on private prisons obscures, though, is not only that these same incentives also apply to public institutions, but they are far more significant in the public sector. As Stanford Law Professor John Pfaff points out:
While private prison groups spent $13 million on lobbying efforts between 1986 and 2014, educational groups (mostly primary and secondary education, like the American Federation of Teachers and the National Education Association) spent over $256 million, medical groups over $360 million, and — perhaps most importantly — public employee groups (which include, but certainly are not limited to, prison guard unions) over $132 million.1
In fact, during these years, total lobbying at the state level only amounted to $36 billion, making the $13 million spent by private prisons amount to 0.03 percent.
Public unions also carry more leverage than merely the lobbying dollars they have to spend. When Florida tried to transfer 14,000 prisoners to private facilities in 2012, the Republican-controlled state government supported the plan, which would have meant the estimated loss of 3,000 guard jobs. Thanks to pressure from public corrections unions, enough Republicans crossed the aisle to help the Democrats vote the bill down. A similar defeat took place in 1998, when Tennessee attempted to contract out its entire prison system.
The interests of public unions are not exclusive to preventing private contractors from taking over their industry. The fear — a valid one — with private lobbying is that they will spend money to increase arrests, convictions, and prison terms. In the “Kids for Cash” scandal, two Pennsylvania judges were convicted of accepting bribes from for-profit detention centers to impose harsher sentences on juvenile delinquents. This is a genuine problem created by the perverse incentives built into the criminal justice system.
But public unions have the same interests, as fewer prisoners means fewer jobs. Not only do they spend more money lobbying to preserve and expand the prison industry, but they use political leverage to help elect politicians with “tough-on-crime” positions. This includes publicly elected sheriffs, prosecutors, and magistrates, in addition to legislators. The difference between public and private lobbying, aside from scope, is that private organizations and their beneficiaries actually face legal consequences for their actions. Following the Kids for Cash trial, both judges were given lengthy prison sentences, and the owner of the detention facility was imprisoned and forced to pay restitution to the victims. We may argue that the penalties and awards did not go far enough (the restitution the individual victims received was meager), but punishment and restitution in fully state-run enterprises is practically non-existent, despite a far more severe track record of abuses.
None of this is meant to defend private prisons or suggest that they are a solution to mass incarceration or the injustices of our legal system (they clearly are not). There are plenty of reasons to oppose private prisons. However, it is important to recognize why people like Bernie Sanders oppose them. Sanders hardly makes any objection to the prison system per se, except to call for abolishing for-profit organizations. He is against private prisons because he is against private companies; he and many other pseudo-reformers show little concern for the prison system as such, evident from their conspicuous denial that public facilities face the same incentives, with more political leverage and virtually no consequences for misbehavior. Mass incarceration is, indeed, a problem, and the Democrats (and Republicans and Libertarians) who want to address this problem are right to do so. But if their only solution is to abolish private prisons, they are not combating mass incarceration, they’re working to preserve it.
- 1. John F. Pfaff, Locked In: The True Causes of Mass Incarceration — and How to Achieve Real Reform (New York: Basic Books, 2017), 86.
The precapitalistic system of production was restrictive. Its historical basis was military conquest. The victorious kings had given the land to their paladins. These aristocrats were lords in the literal meaning of the word, as they did not depend on the patronage of consumers buying or abstaining from buying on a market.
On the other hand, they themselves were the main customers of the processing industries, which, under the guild system, were organized on a corporative scheme. This scheme was opposed to innovation. It forbade deviation from the traditional methods of production. The number of people for whom there were jobs even in agriculture or in the arts and crafts was limited. Under these conditions, many a man, to use the words of Malthus, had to discover that "at nature's mighty feast there is no vacant cover for him" and that "she tells him to be gone."1 But some of these outcasts nevertheless managed to survive, begot children, and made the number of destitute grow hopelessly more and more.
But then came capitalism. It is customary to see the radical innovations that capitalism brought about in the substitution of the mechanical factory for the more primitive and less efficient methods of the artisans' shops. This is a rather superficial view. The characteristic feature of capitalism that distinguished it from precapitalist methods of production was its new principle of marketing.
Capitalism is not simply mass production, but mass production to satisfy the needs of the masses. The arts and crafts of the good old days had catered almost exclusively to the wants of the well-to-do. But the factories produced cheap goods for the many. All the early factories turned out was designed to serve the masses, the same strata that worked in the factories. They served them either by supplying them directly or indirectly by exporting and thus providing for them foreign food and raw materials. This principle of marketing was the signature of early capitalism as it is of present-day capitalism.
The employees themselves are the customers consuming the much greater part of all goods produced. They are the sovereign customers who are "always right." Their buying or abstention from buying determines what has to be produced, in what quantity, and of what quality. In buying what suits them best they make some enterprises profit and expand and make other enterprises lose money and shrink. Thereby they are continually shifting control of the factors of production into the hands of those businessmen who are most successful in filling their wants.
Under capitalism private property of the factors of production is a social function. The entrepreneurs, capitalists, and land owners are mandataries, as it were, of the consumers, and their mandate is revocable. In order to be rich, it is not sufficient to have once saved and accumulated capital. It is necessary to invest it again and again in those lines in which it best fills the wants of the consumers. The market process is a daily repeated plebiscite, and it ejects inevitably from the ranks of profitable people those who do not employ their property according to the orders given by the public. But business, the target of fanatical hatred on the part of all contemporary governments and self-styled intellectuals, acquires and preserves bigness only because it works for the masses. The plants that cater to the luxuries of the few never attain big size.
The shortcoming of 19th-century historians and politicians was that they failed to realize that the workers were the main consumers of the products of industry. In their view, the wage earner was a man toiling for the sole benefit of a parasitic leisure class. They labored under the delusion that the factories had impaired the lot of the manual workers. If they had paid any attention to statistics they would easily have discovered the fallaciousness of their opinion. Infant mortality dropped, the average length of life was prolonged, the population multiplied, and the average common man enjoyed amenities of which even the well-to-do of earlier ages did not dream.
However, this unprecedented enrichment of the masses was merely a by-product of the Industrial Revolution. Its main achievement was the transfer of economic supremacy from the owners of land to the totality of the population. The common man was no longer a drudge who had to be satisfied with the crumbs that fell from the tables of the rich. The three pariah castes that were characteristic of the precapitalistic ages — the slaves, the serfs, and those people whom patristic and scholastic authors as well as British legislation from the 16th to the 19th centuries referred to as the poor — disappeared. Their scions became, in this new setting of business, not only free workers, but also customers.
This radical change was reflected in the emphasis laid by business on markets. What business needs first of all is markets and again markets. This was the watchword of capitalistic enterprise. Markets — that means patrons, buyers, consumers. There is under capitalism one way to wealth: to serve the consumers better and cheaper than other people do.
Within the shop and factory the owner — or in the corporations, the representative of the shareholders, the president — is the boss. But this mastership is merely apparent and conditional. It is subject to the supremacy of the consumers. The consumer is king, is the real boss, and the manufacturer is done for if he does not outstrip his competitors in best serving consumers.
It was this great economic transformation that changed the face of the world. It very soon transferred political power from the hands of a privileged minority into the hands of the people. Adult franchise followed in the wake of industrial enfranchisement. The common man, to whom the market process had given the power to choose the entrepreneur and capitalists, acquired the analogous power in the field of government. He became a voter.
It has been observed by eminent economists, I think first by the late Frank A. Fetter, that the market is a democracy in which every penny gives a right to vote. It would be more correct to say that representative government by the people is an attempt to arrange constitutional affairs according to the model of the market, but this design can never be fully achieved. In the political field it is always the will of the majority that prevails, and the minorities must yield to it. The market, however, serves also minorities, provided they are not so insignificant in number as to become negligible. The garment industry produces clothes not only for normal people, but also for the stout, and the publishing trade publishes not only westerns and detective stories for the crowd, but also books for discriminating readers.
There is a second important difference. In the political sphere, there is no means for an individual or a small group of individuals to disobey the will of the majority. But in the intellectual field private property makes rebellion possible. The rebel has to pay a price for his independence; there are in this universe no prizes that can be won without sacrifices. But if a man is willing to pay the price, he is free to deviate from the ruling orthodoxy or neo-orthodoxy.
What would conditions have been in the socialist commonwealth for heretics like Kierkegaard, Schopenhauer, Veblen, or Freud? For Monet, Courbet, Walt Whitman, Rilke, or Kafka? In all ages, pioneers of new ways of thinking and acting could work only because private property made contempt of the majority's ways possible. Only a few of these separatists were themselves economically independent enough to defy the government and the opinions of the majority. But they found, in the climate of the free economy, people among the public prepared to aid and support them. What would Marx have done without his patron, the manufacturer Friedrich Engels?
This article is excerpted from Liberty & Property, part 2 (2009).
On July 28, in London, Ontario, a police pursuit of two bank robbers resulted in three collisions. The suspects’ car hit another car, and police cars hit two other cars, the second of which was a taxicab, which appears to have had the right-of-way in an intersection. The taxi driver has been released from the hospital, but his passengers, 27-year-old Porsche Clark and her 9-year-old daughter Skyla, as of August 1st, remained in hospital in critical condition.
The London Free Press reports: “Under Ontario law, police may pursue a fleeing vehicle if a crime has been committed or is about to be committed. Police must determine whether the need to protect public safety by stopping the vehicle outweighs any possible public risks from the pursuit.” Indeed, that is the law, under the Police Services Act (PSA).
Simply put, this gives the police an immense amount of leeway. Essentially, the police will determine whether the need to keep the public safe requires them to put the public at risk. This contradicts the first of six principles of the PSA under which “police services shall be provided throughout Ontario,” namely “The need to ensure the safety and security of all persons and property in Ontario [emphasis added].” This principle should preclude police pursuits which might put anyone in danger. That is, the police ought not be allowed to trade your safety for my safety, regardless of whether the danger arises from the reckless driving of the police, or the reckless driving of the criminal as a result of the police pursuit.
Unfortunately, these conflicting rules are typical of government policy. This is how bureaucrats, and especially courts, are given wide latitude to interpret the law, often protecting those who are supposed to protect us.

Responsibility of Police Officers
The PSA principle stated above, “The need to ensure the safety and security of all persons and property in Ontario” is again emphasized in PSA Section 43 “Criteria for hiring”: “No person shall be appointed as a police officer unless he or she … is physically and mentally able to perform the duties of the position, having regard to his or her own safety and the safety of members of the public.”
Taken together, that principle and Section 43 are not ambiguous. And if the crazy police pursuit policy did not exist, this should be enough to hold police accountable for actions which endanger the public, right? Unfortunately, no, because they protect themselves with Section 50 “Liability for torts.”
Section 50 stipulates that a municipal police services board — not police officers themselves — “is liable in respect of torts committed by members of the police force in the course of their employment.”
Additionally, “The board may … indemnify a member of the police force for reasonable legal costs incurred … in respect of any other proceeding in which the member’s manner of execution of the duties of his or her employment was an issue, if the member is found to have acted in good faith.” However, “acting in good faith” is a broad concept which, in Canada, as in the US, usually produces immunity for police officers.
As it turns out, the board is not actually liable for any of this because Section 50 further stipulates that “The council is responsible for the liabilities incurred by the board. …” Council means City Council, and we know what that means. Taxpayers are ultimately liable, and the reality is that taxpayers pay a lot of money to settle lawsuits resulting from the indiscretions of police officers.
Even if most officers consistently exercise what rational civilians would consider to be “good judgment,” the law nevertheless punishes taxpayers in order to protect officers who make bad decisions. And nine-year-old Skyla Clark is old enough to understand how this can create perverse incentives.

Government’s Rationale
The coercive nature of government fosters a culture where little incentive exists to control the behavior of individuals operating within government bureaucracies, whose budgets are renewed, and often increased, in spite of taxpayers’ dissatisfaction.
At the same time, we are supposed to accept at face value the assertion that society can grow and prosper only if government agents are granted legal immunity for actions they undertake in the performance of their public duties. For if they fear the personal consequences of their own mistakes, they may hesitate to take actions which they sincerely believe to be in the public interest. With this fear running through their minds, they would be frozen in a state of inaction, and citizens would suffer the effects of a rapidly decaying society. However, when granted the privilege of externalizing the costs of their actions on to the backs of taxpayers, then, and only then, are the government’s angels able to function. This is the gospel according to government.
The truth is that those who are tasked with serving the public interest, while protected by legal immunity, are well positioned to serve whatever interests they choose. The doctrine of immunity is a ruse, a license to abuse, and a not too subtle confirmation that some of us will always be above the law. It encourages people to do bad things. It promotes a sense of invincibility, superiority, entitlement, and outrage toward others who forget to bow in the presence of their masters.
In 2008, young Garett Rollins learned this the hard way as he celebrated his 19th birthday. He and his friends were complying with a police request to leave the party, and Rollins questioned the rough treatment of a girl by Constable Benjamin Tomiuck, who responded, “We can do whatever the f_ _ _ we want.” “You don’t have to be such dicks about it,” Rollins replied. Upon hearing those words, Constable Matt Pouli viciously assaulted the unaggressive Rollins. Nine years later, Rollins was awarded $28,500 in damages. It is unclear whether any of this money was paid by Pouli, but it is clear that he remained on active duty, with a salary of $134,031.98 last year.

Conclusion
In 2016, Ombudsman Ontario arrogantly claimed that “The government has demonstrated that it is willing and able to respond to urgent public concerns about policing and police culture, and to set provincewide rules in the public interest. It did so in 1992 with its original guideline for the use of force. It did so in 1999 to end dangerous high-speed police pursuits.”
However, as Porsche Clark, Skyla Clark, Garett Rollins, and many others like them know, those new rules were just another example of the government paying lip service to the concerns of the public.
Police officers — and other government agents — are routinely excused for actions for which ordinary civilians are prosecuted, convicted, and imprisoned. It is mystifying and frightening that we support politicians and bureaucrats who constantly moralize about the virtues of equality, while they shamelessly use their power to avoid equality before the law.
Centuries ago, those who believed themselves to be harmed by "slanderous" words may have had to take matters into their own hands, perhaps through dueling, or even through just a drunken fist fight with one's accuser down at the local tavern. Sometimes, these confrontations led to either intentional or accidental death.
Over time, however, courts were tasked with addressing harms allegedly done through this sort of defamation.
Crime historian Randolph Roth notes how matters changed in this regard from the seventeenth century to the eighteenth century. Roth recounts how Alexander Stuart, a wealthy Virginia planter, concluded he had been defamed when laborer John Thompson and Thomas Paxton (another wealthy planter) spread a story in which it was alleged that Stuart had engaged in sexual acts with a "negro wench." Stuart sued Thompson and Paxton for slander, but:
In the mid-seventeenth century a gentleman like Stuart would have horsewhipped Thompson and challenged Paxton, who was his peer, to a duel. In the eighteenth century, the desire for revenge was more often satisfied in court, even though half of all slander suits... were settled or dropped before trial, and those that ended in guilty verdicts usually resulted in small damage awards. Most suits were intended merely to demonstrate that the plaintiff was a man who would stand up for his rights. They were not meant to bankrupt the defendant.

Guilt and Damages Are Very Difficult to Prove
Roth does not say how Stuart's case turned out. But from a moral point of view, it would seem that we must consider a variety of factors before any pronouncements about guilt or damages can be made. This has also become increasingly important as legal penalties for defamation have increased since the seventeenth century:
- Were the events in the story told by Paxton and Thompson true?
- Did Paxton and Thompson believe the story to be true?
- Did Stuart actually suffer harm?
- Did Paxton and Thompson intend to do harm to Stuart?
When it comes to establishing these facts, things are much easier said than done.

Is It True?
It may be relatively easy to determine whether or not Stuart actually did what Paxton and Thompson said he did. But were Paxton and Thompson merely repeating what they believed to be facts? If so, that would suggest less malice in what they did. Or none at all.
Moreover, if Stuart did what was related in Paxton's and Thompson's stories, does he really have a "right" to be immune from the effects of things he actually did?
Some might say Stuart has a right to privacy, but as Murray Rothbard asks: "How can there be a right to prevent Smith by force from disseminating knowledge which he possesses? Surely there can be no such right."
In other words, if Stuart is seeking a legal judgment against Paxton and Thompson — and if Paxton and Thompson believed the story to be true — what Stuart is really saying is that it is good for the state to use violence against people who simply relate facts.

Was There Really Harm?
A second important factor is determining if Stuart really suffered harm as a result of Paxton's and Thompson's actions.
Again, this is easier said than done.
To show that he has been significantly harmed, Stuart ought to have to show:
1. People believed the stories related by Paxton and Thompson.
2. People cared enough to act on the new information.
3. These actions brought real and significant harm to Stuart.
All too often, those who support government sanctions against alleged slanderers and libelers assume that people merely believe everything they are told, form a negative judgment against the alleged victim, and then act out against that victim.
This, of course, is not at all necessarily the case. For example, even after years of being dogged by accusations of child molestation, Michael Jackson's performances were still very much in demand. At the time of his death, he was about to pocket at least 60 million dollars for shows planned in London. Jackson's album sales were also increasing at the time. Did some people refuse to purchase Jackson's products and services because of the allegations? Possibly. Or it may have been that the allegations were readily believed by those who already didn't like him, while his fans refused to believe them. Moreover, the allegations might even have led some fans to support Jackson more strongly in a show of solidarity.
More recently, Johnny Depp has sued his ex-wife for $50 million for defamation. His ex-wife claims he abused her. Could Depp prove that she has hurt his income? To be sure, it may not be difficult to show that his income has suffered in recent years. Depp has starred in a string of box office mediocrities and bombs, including The Lone Ranger, which lost $190 million for Disney. But that all happened before his ex-wife's allegations came out.
So, if Depp is now claiming he has lost income as a result of his ex-wife's comments, how do we know that his drop in income was not really due to his lack of success at the box office?
The notion that unflattering information about a person is automatically believed, and then translated into loss of income, is not at all empirically certain.

Is It Defamatory to Call Someone a Homosexual?
One especially damaging and questionable concept within defamation law is the concept of "defamation per se." In these cases, the alleged victim does not even need to demonstrate harm. The defamatory comments are simply assumed to have caused harm.
Yet, the assumptions behind defamation per se are often completely divorced from reality.
For example, in many jurisdictions in the United States, it is considered defamation per se to accuse someone of being a homosexual.
But is this really defamatory?
Legal scholars are increasingly noting that it cannot at all be assumed that an alleged victim suffers economic loss due to an accusation of homosexuality. Whether or not it is damaging depends entirely on the details of a person's community and social environment.
Similarly, "unchastity" has long been considered a type of defamation per se. In some places and cultures, noting that a woman has been raped may have aroused revulsion directed at the rape victim. But in most modern and Western contexts, one can certainly argue that knowledge that a woman has been raped is more likely to garner sympathy for the victim than anything else. Moreover, just calling a woman a "slut" in public can hardly be assumed to lead to her social exile. There is, quite frankly, approximately zero evidence of this outside small ultra-conservative pockets in the modern West.
The arbitrariness of these declarations of defamation per se demonstrates some of the many dangerous assumptions behind defamation law.
Thus, as a bare minimum, any legal discussion of defamation must be closely tied to an alleged victim's ability to demonstrate that real damage has resulted from supposed defamatory comments. Fortunately, in the US at least, most defamation cases are based on “libel per quod,” which, as Matthew Bunker et al. note, "requires proof of special damages—actual economic or pecuniary loss. These damages can be difficult to prove, and their absence creates a barrier to recovery."

The First Amendment and Defamation
Bunker et al. also conclude:
Proving defamation in United States courts has become an increasingly complicated undertaking. Along with a substratum of common law requirements, the U.S. Supreme Court has imposed a number of additional layers of First Amendment firmament, beginning with the landmark case New York Times Co. v. Sullivan. Additional requirements flowing from state constitutional free speech and press protections have also made their way into the defamation laws of individual states.
In other words, thanks to the First Amendment, it has become quite difficult to prove defamatory material ought to result in court ordered restitution.
While the use of the common law against defamation was spreading in America's eighteenth-century British colonies, the adoption of the First Amendment at the end of that century added some significant barriers, especially at the federal level.
The benefits of these barriers can be seen when US defamation law is compared to the law elsewhere.Using Defamation Law to Silence Critics
Consider, for example, the case of Rachel Ehrenfeld. NPR reports:
In 2003, she [Ehrenfeld] wrote a book called Funding Evil: How Terrorism is Financed, and How to Stop It. The book accused a wealthy Saudi businessman of funding al-Qaida. The businessman, Khalid bin Mahfouz, sued Ehrenfeld in a British court.
Although Ehrenfeld is an American writing in the US, bin Mahfouz sued her in British court because British legal requirements for defamation are more lax. Consequently:
"Crooks and brigands from around the world come [to the UK] to launder their reputations, where they couldn't get exculpation in either their home country or indeed in the United States of America," says Mark Stephens, a London lawyer who often represents media companies in these cases. ... In American courts, the burden of proof rests with the person who brings a claim of libel. In British courts, the author or journalist has the burden of proof, and typically loses. "So you've got the rich and powerful shutting down and chilling speech which is critical of them," says Stephens.
Not surprisingly, Ehrenfeld lost in court, and prior to 2010, an American court might have enforced the British court's $250,000 fine against her. But thanks to the "Speech Act" passed by Congress that year, US courts are now instructed to not enforce international defamation rulings unless they conform to US standards under the First Amendment.
In other words, the lax defamation standards employed by much of the world no longer have standing in the US.
Not surprisingly, many British jurists still think they have struck the right "balance" between the interests of the allegedly defamed and those who are accused of defamation.
But they're wrong.
The correct balance is to be lopsidedly in favor of those accused of defamation.
After all, the bin Mahfouz case illustrated just how prone to abuse defamation cases can be when involving wealthy and powerful people. Few ordinary people can afford to defend themselves against billionaires like bin Mahfouz, or even foreign regimes who are known to sue their critics in various courts.
The result is a situation in which the powerless are less likely to criticize the powerful. Murray Rothbard notes:
[T]he current system [which allows for defamation suits] discriminates against poorer people in another way; for their own speech is restricted, since they are less likely to disseminate true but derogatory knowledge about the wealthy for fear of having costly libel suits filed against them.

Other Dangers Loom on the Horizon
It is possible to conceive of future cases in which defamation law could be used to enforce modern notions of political correctness.
For example, accusing another person of mental illness is often considered to be a type of defamation per se. So, what happens when a person states that transgender people suffer from a type of mental illness? Potentially, those who express this opinion could be sued for defamation in court by those who claim they were harmed by being cast as mentally ill.
Indeed, in Italy, a physician became embroiled in a defamation lawsuit when she stated that homosexuality is "a disease." She was eventually exonerated of defamation, but only after a long legal battle.
Fortunately, respect for freedom of speech makes this less likely in the US. But it's not unthinkable.

The Answer: Combat Speech with More Speech
None of this is to say that ordinary people cannot suffer true loss as a result of defamatory information being released. But the costs of defamation laws are also significant, both in terms of abuse by the powerful, and in cases where people merely said things they thought to be true, without any malicious intent or even negligence.
The answer, however, is suggested by Rothbard, who notes that in a system of unrestricted free speech, "everyone would know that false stories are legal, there would be far more skepticism on the part of the reading or listening public, who would insist on far more proof and believe fewer derogatory stories than they do now."
This, of course, is already the reality for people of ordinary means. In an age of social media especially, where anyone can be publicly accused of heinous acts at any time, the non-wealthy must rely on the public's skepticism as a defense against potentially costly and defamatory statements. After all, if we live in a society where people automatically believe anyone who accuses a third party of being a rapist, then our society has problems far beyond insufficiently robust defamation laws.
Bob Murphy analyzes John Carpenter’s 1988 cult classic They Live, starring Roddy Piper. After making some general observations about the major thematic elements, Bob critiques the main character’s strategy for dealing with an alien takeover of the media and government.
According to our guest, American health care is stuck in a fortress mentality that stifles innovation, constrains medical advances, and yields low-quality care. That fortress was erected more than one hundred years ago but, in many ways, is being circumvented by creative actors who are seizing opportunities to make changes outside of the political process.
Bob Graboyes is Senior Research Fellow at the Mercatus Center at George Mason University. He holds a PhD in Economics from Columbia University, and has held a number of academic positions in higher education in Virginia. He is the author of “Fortress and Frontier in American Health Care,” a booklet which offers many examples of individuals adopting a risk-tolerant frontier attitude to compete with insiders and pave the way to the future without having to rely on political reform. Prior to focusing his career on healthcare, Bob Graboyes was regional economist and director of education at the Federal Reserve Bank of Richmond.
[Originally published November 11, 2009.]
Do you think ideas don't matter, that what people believe about themselves and their world has no real consequence? If so, the following will not bug you in the slightest.
A new BBC poll [reported November 2009] finds that only 11 percent of people questioned around the world — and 29,000 people were asked their opinions — think that free-market capitalism is a good thing. The rest believe in more government regulation. Only a small percentage of the world's population believes that capitalism works well and that more regulation will reduce efficiency.
One quarter of those asked said that capitalism is "fatally flawed." In France, 43% believe this. In Mexico, it is 38%. A majority believes that government should rob the rich to give money to poor countries. In only one country, Turkey, did a majority say that less government is better.
It gets even worse. While most Europeans and Americans think it was a good thing for the Soviet Union to disintegrate, people in India, Indonesia, Ukraine, Pakistan, Russia, and Egypt mostly think it was a bad thing. Yes, you read that right: millions freed from socialist slavery — bad thing.
That news must lift the heart of every would-be despot the world over. And it comes as something of a shock twenty years after the collapse of socialism in Russia and Eastern Europe revealed what this system had created: backward societies with citizens who lived short and miserable lives. Then there is the China case, a country rescued from bloody barbarism under communism and transformed into a modern and prosperous country by capitalism.
What can we learn? Far from not having learned anything, people have largely forgotten the experience and have developed a love for the ancient fairy tale that all things can be fixed through collectivism and central planning.
As to those who would despair at this poll, consider that it might have been much worse were it not for the efforts of a relative handful of intellectuals who have fought against socialist theory for more than a century. It might have been 99% in support of socialist tyranny. So there is no sense in saying that these intellectual efforts are wasted.
Ideas also have a life of their own. They can lie in wait for decades or centuries and then one day, the whole of history turns on a dime. Especially these days, no effort goes to waste. Publications, essays, and every other form of education are immortalized, ready for the taking by a desperate world.
As for the opinion poll, we have no idea just how intensely these views are held or even what they mean. What, for example, is capitalism? Do people even know? Michael Moore doesn't know, else he wouldn't be calling bailouts for elite, Fed-connected financial firms a form of capitalism. Many other people reduce the term capitalism to "the system of economics in the United States." It is no more complicated than that. This is despite the reality that the United States has a comprehensive planning apparatus in place that is directly responsible for all our current economic troubles.
Now, let's take this further. Among the many people around the world who do not like the US empire, many believe they don't like capitalism either. If the US economy drags the world down into recession, that is a prime example of capitalism's failure. Even more preposterous, if you didn't like George W. Bush, his ways, and his cronies, and Obama is something of a relief, then you don't like capitalism and you do like socialism.
Another point of view misunderstands the idea of capitalism itself. It is not about creating economic structures that benefit capital at the expense of labor or culture or religion. It is about a system that protects the rights of everyone and serves the common good. Capitalism is just the name that happened to be identified with this system. If you want to call freedom a banana, fine, what matters is not words but ideas.
I do know that none of these messed-up definitions of capitalism holds up. You know this too. But for the world at large, serious ideological analytics are not the animating force of daily life. Many people attach themselves to vague slogans.
Further, as Rothbard has forcefully argued, free-market capitalism serves no more than a symbolic purpose for the Republican Party and for conservatives. Economic liberty is the utopia that they keep promising to bring us, pending the higher priority of blowing up foreign peoples, jailing political dissidents, crushing the left wing on campus, and routing the Democrats.
Once all of this is done, they say, then they will get to the instituting of a free-market economic system. Of course, that day never arrives, and it is not supposed to. Capitalism serves the Republicans the way Communism served Stalin: a symbolic distraction to keep you hoping, voting, and coughing up money.
All of which leaves true capitalism — a product of the voluntary society and the sum total of all the exchanges and cooperative acts of people all over the world — with few actual intellectual defenders. They are growing, but the educational work we need to do is daunting, and we are facing the most powerful forces in the world.
There is nothing new in this. In the history of the world, freedom is the exception, not the rule. It must be fought for anew in every generation. Its enemies are everywhere, but the leading enemy is ignorance. For this reason, the main weapon we have at our disposal is education.
Education includes explaining that socialism is an unworkable idea. There is nothing better than Ludwig von Mises's 1922 book Socialism, a comprehensive presentation of the fallacy of the socialist idea. Another essential work is the Black Book of Communism. Here we have a wake-up call that shows that the dream of socialism is actually a bloody nightmare.
Then there is the issue of the positive case for capitalism. One can do no better than Mises's own Human Action, which is not likely to ever be surpassed as a treatise on the free economy. True, it is not for everyone. And that's fine. There are many primers out there too.
The fashion for socialism and the opposition to capitalism should alarm every lover of freedom the world over. We have our jobs cut out for us, but with numbers this bad, it is not difficult to make a difference. Every blow you can land for free markets helps protect freedom from its enemies.
After almost twenty years without an execution, the Federal penal system has decided to proceed with a number of executions. NPR reported last month:
U.S. Attorney General William Barr has instructed the Federal Bureau of Prisons to change the federal execution protocol to include capital punishment, the Justice Department said.
Barr also asked the prisons bureau to schedule the executions of five inmates who have been found guilty of murder. According to the DOJ, the victims in each case included children and the elderly. In some of the cases, the convicted murderers also tortured and raped their victims.

Is the Death Penalty Ever Warranted?
I am not an anti-death-penalty absolutist. That is, in some cases where the testimony and physical evidence is overwhelming — and the crimes are particularly heinous — the death penalty could be warranted, at least in theory.
But given police corruption, incompetent prosecutors, and an over-reliance on circumstantial evidence in court, a great many death-penalty cases are built on a pretty shaky foundation. Moreover, it is extremely likely that innocent people have been executed in the United States whether through errors, or through outright fraud on the part of government officials.
In other words, the death penalty is serious business, and given that government bureaucrats can't even run the DMV or the VA competently, there's no reason to assume their criminal-justice skills are anything deserving of our unconditional trust.
So, it is conceivable that the death penalty could be justly applied in some cases.
There's No Need for a Federal Death Penalty
When examining the federal death penalty, however, it quickly becomes apparent that it is simply unnecessary — and should be completely abolished.
State laws already address the need to prosecute violent criminals. Murder, rape, assault, and other violent crimes are already illegal in every state of the Union. If Smith murders Wilson in, say, Pennsylvania, Smith can be tried for murder under Pennsylvania law. This is true even if Smith employs bombs, airplanes, or other tools associated with international terrorism.
There is no need for an extra layer of federal criminal justice. For example, Timothy McVeigh, who was convicted of the Oklahoma City bombing, was certainly eligible to be tried for murder under Oklahoma law. Those who perpetrated 9/11 were certainly eligible to be tried for murder under New York and Virginia laws. But McVeigh was tried for the federal crime of killing a federal agent. Zacarias Moussaoui was prosecuted in federal court for his role in the 9/11 attacks, specifically "conspiracy to murder United States employees," among other crimes.
Although these sorts of killings are certainly illegal in the states where they occur, the federal government insists on having prerogatives to prosecute defendants under federal law as well. This is often done to add an additional layer of possible prosecution, and so that defendants can be prosecuted more than once for the same crime. This is a violation of the Bill of Rights, of course (as explained by Justice Neil Gorsuch), but federal courts have looked the other way on this loophole for years.
Besides, cases of terrorism or international crime rings are hardly what's behind most capital cases in federal court. We're not talking about Russian crime bosses or domestic supervillains. On the contrary, nearly all defendants in capital cases in federal court are brought to trial for run-of-the-mill crimes involving drug deals, bank robberies, or other acts that are already violations of state criminal statutes.
Federal involvement simply isn't essential.
Moreover, in some cases, federal prosecutors deliberately go against the wishes of local prosecutors.
Lezmond Mitchell, for example, is a Navajo Indian who was convicted of murdering a Navajo woman and her granddaughter on Navajo land. He is now awaiting execution in a federal prison.
But note the murders took place on Navajo land, and Navajo law does not allow the death penalty. Nonetheless, the federal government inserted itself into the case. According to an analysis by The Intercept:
the U.S. government had forced itself onto the case. For one, because the murder alone was not punishable by death under tribal law, seeking the death penalty was “possible only by virtue of the fact that Mitchell and a fellow Navajo, aged 16, stole a car in connection with the murders they committed,” [Judge Stephen Reinhardt wrote in a legal dissent on the case]. The Anti Car Theft Act of 1992 had made carjacking a federal crime — and the 1994 crime bill had made carjacking resulting in death a crime punishable by death. “In the absence of the carjacking, Mitchell would not have been eligible for the death penalty.”
“Equally important,” Reinhardt went on, “none of the people closely connected to the case wanted Mitchell to be subjected to the death penalty: not the victims’ family, not the Navajo Nation — of which the victims and perpetrators were all members and on whose land the crime occurred — and not the United States attorney whose job it was to prosecute Mitchell.”
No one directly involved with the case who lived within 500 miles of the reservation demanded the death penalty. That demand came from a government bureaucrat in Washington, DC, and only after US Attorney General Ashcroft intervened to ensure the death penalty was on the table.
Expanding Federal Powers
The fact that a car theft had allowed the federal government to demand jurisdiction in the Mitchell case illustrates a longtime strategy used by federal lawmakers to expand federal jurisdiction over time.
The US Constitution, after all, only mentions three federal crimes: treason, piracy, and counterfeiting. Only piracy involves crimes that necessarily occur beyond the jurisdiction of state laws against violent crime. Counterfeiting, on the other hand, is merely a type of fraud. And fraud is already illegal in every state. Treason is only a real problem if it involves violent acts against others — in which case it is already covered by state laws against violent crime.
Meanwhile, all other federal crimes beyond these three are based on tortured legal reasoning designed to do an end run around the Tenth Amendment. They're justified under the "necessary and proper" clause or the commerce clause. They are redundant and largely function to greatly expand federal intervention into each and every American community. Beyond piracy, the entire federal apparatus for criminal prosecutions ought to be abolished. But the federal death penalty is a good place to start.
Dr. Bylund observes that students, when selecting entrepreneurial projects for his course, lean heavily towards consumer products and services. Does this represent smart entrepreneurial thinking, or not? Is it biased by (lack of) marketplace experience? Is it biased by media reporting and “buzz”? And what can practicing entrepreneurs learn from a reasoned analysis of the profit opportunities in Business-to-Business ventures compared to Business-to-Consumer ventures?
Key Takeaways and Actionable Insights
The economy — measured by Gross Output — is 70% production. That means that 70% of entrepreneurial opportunities arise in the supply chain stages that are prior to the final consumer purchase.
Keynesian economists believe that the economy is defined by consumption. Hence all their policies are justified as supporting or boosting consumption. Austrian economists think differently, and recognize that production is the health of the economy. People produce so that they can then exchange with others — that’s a simple way to invoke Say’s Law. Keynesians use the metric of GDP to indicate economic growth or decline, and that metric is 70-75% composed of consumption. Economist Mark Skousen led the charge for an alternative metric, Gross Output, or GO, to track the size of the economy. GO measures the value of all production at every stage of the supply chain, i.e. every transaction where one entrepreneur or firm sells to another. GO identifies pre-consumption transactions as 75% of the economy. As Dr. Bylund says, it’s where the money is for entrepreneurs.
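To make the GDP-vs-GO contrast concrete, here is a minimal sketch with invented stage values — the stages and dollar figures below are hypothetical, chosen only so the pre-consumer share lands near the roughly 70% figure cited above:

```python
# Hypothetical six-stage supply chain (illustrative numbers only, in billions).
# Each value is the sale price at that stage; later stages embed earlier costs.
stages = {
    "mining":          10,
    "refining":        20,
    "components":      30,
    "assembly":        40,
    "wholesale":       50,
    "retail (final)":  60,   # the only transaction consumption-style accounting captures
}

# Gross Output (GO) counts every transaction at every stage of production
gross_output = sum(stages.values())        # 210

# GDP-style final-sales accounting sees only the retail transaction
final_sales = stages["retail (final)"]     # 60

pre_consumer_share = 1 - final_sales / gross_output
print(f"GO = {gross_output}, pre-consumer share = {pre_consumer_share:.0%}")
# GO = 210, pre-consumer share = 71%
```

Consumption looks dominant in GDP only because GDP nets out every transaction before the retail counter; GO counts them all, which is why most entrepreneurial opportunity sits upstream of the consumer.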
For the entrepreneur, B2B — producing input for other firms — offers advantages of structure, standardization and scale.
Structure: When an entrepreneur sells inputs for another firm’s production, the customer provides structured guidance on measurements, quality, delivery methods and timing — a blueprint for what they want to receive and how they want to receive it. Demand is codified. If the supplying entrepreneur can meet these codes, and a bid and a supply contract are approved, then a great deal of certainty is created around the business relationship.
This does not mean that there is no room for innovation. That comes in the elements of the business relationship that are not contracted. The creative entrepreneur can innovate in speed, responsiveness, ideation, and spotting new opportunities for efficiency. Innovation occurs at the edges of the structure, while the structure itself provides stability.
Standardization: Once the structured relationship is defined and agreed and the production interchange is established, the supplier-entrepreneur benefits from maintenance of the standard. There is precise knowledge of the ingredients to use, the production process to follow, the production rate and delivery specifications. This adds to certainty, and allows for the negotiation of lower costs.
Scale: Obviously the scale opportunity for the supplier is dependent on the size of the buyer and the size of the contract — it’s in the buyer’s hands. Nevertheless, contract reliability represents scale over time, and future volume assumes some (although not complete) predictability. The supplier can concentrate on efficiency measures to lower costs when there is no need to concern themselves with throughput variability.
These advantages are reversed in B2C businesses, where the trend is towards the opposite of structure, standardization and scale: personalization. Dr. Bylund called the B2C market ephemeral and flimsy. He was referring to the changeability of the consumer. Austrians understand that value is the subjective perception of the consumer. And the consumer is emotional, idiosyncratic and inconsistent in their continual rearrangement of value scales — what they prefer today is often different than what they prefer tomorrow, even if it is not obvious to the entrepreneur what change in conditions has brought this about. Consumers’ moods change and their choices change. Our free pdf points out the techniques required to manage in this context — tight targeting, deep empathy, and micro-segmentation.
An entrepreneur’s production cycle may be 5 months or 5 weeks, but the consumer can change their mind in 5 minutes. They are on a different cycle. Their demand cannot be relied upon. Continuous change is required of the entrepreneur competing for the consumer’s dollar, and continuous change is a tough business model. (Listen to our previous podcast on Austrian Capital Theory for the best tips on how to manage for continuous change.)
There are business channels where both B2B and B2C models are required. Some entrepreneurs find themselves moving their consumer goods to their end-consumer through distribution channels owned and operated by big businesses, such as CPG manufacturers of foods and beverages that sell on the shelves of Whole Foods or Walmart. The Walmart and Whole Foods relationships are B2B, even though the entrepreneur is in the B2C space. It is necessary to focus on producing value for the consumer, and educating the retailer about their benefit in passing on that value, as well as their role in communicating it to the consumer. At the same time, it is necessary to comply with the structure, standardization and scale rules set by the big business. We might call this a B2B2C business. It requires skills for both B2B and B2C.
Competing in B2B remains challenging, of course, but entrepreneurs should consider the size of the opportunity and the reduced uncertainty that are potentially available. In B2B, the entrepreneur is required to compete with other suppliers, to get costs and prices right to meet the customer’s needs, to work hard to meet supply chain standards and specifications, and to negotiate contracts. Those requirements may be preferable and less uncertain than the ephemerality and flimsiness of consumer markets.
Additional Resource
President Donald Trump wants a lower US dollar. He complains about the over-valuation of the American currency. Yet, is he right to accuse other countries of “currency manipulation”? Is the position of the US dollar in the international monetary arena not a manipulation in its own right? How much has the United States benefitted from the global role of the dollar, and is this “exorbitant privilege” coming to an end? In order to find an answer to these questions, we must take a look at the monetary side of the rise of the American Empire.
Trump is right. The American dollar is overvalued. According to the latest version of the Economist’s “Big Mac Index,” for example, only three currencies rank higher than the US dollar. Yet the main reason for this is not currency manipulation but the fact that the US dollar serves as the main international reserve currency.
This is both a boon and a curse. It is a boon because the country that emits the leading international reserve currency can have trade deficits without worrying about a growing foreign debt. Because the American foreign debt is in the country’s own currency, the government can always honor its foreign obligations as it can produce any amount of money that it wants in its own currency.
Yet the international reserve status also comes with a curse: the persistent trade deficits weaken the country’s industrial base. Instead of paying for the import of foreign goods with the export of domestic production, the United States can simply export money.
American Supremacy
The performance of the US economy in the 20th century owes much to the predominant role of the US dollar in the international monetary system. A large part of attaining this role was the result of the political and military supremacy that the United States had gained after World War I. Still today, the position of the US dollar in the world of finance represents a major underpinning of the prosperity at home and provides the basis for the expansion of the US military presence around the globe.
After each of the two world wars of the 20th century, the United States emerged as the largest creditor country, while war had ruined the economies of its wartime enemies along with those of its major allies. After the end of the Cold War, this pattern repeated itself. Since then, it seems, the United States has been the only remaining superpower.
In the 1990s, the dollar experienced a new flourishing, and the US economy went through a magical rejuvenation. However, this time the economic and political fundamentals gave much less support to the assumed role of the dollar in the world. In contrast to the time after World War II, the basis for the dollar's global expansion in the 1990s was not economic strength, but debt creation. The public debt ratio, which had been falling since the end of the war, began to turn around in 1982 and has been rising ever since (Figure 1).
Figure 1: US Gross Federal Debt Ratio (Debt in percent of gross domestic product), 1940–2018. Source: US Bureau of Public Debt, tradingeconomics.com
With this debt creation came a new phase of global expansion of the dollar. The spread of the dollar provided the basis for the economic performance and the military position of the United States. Yet this time, the new structure that has emerged is outwardly powerful but inherently fragile. It is not economic strength that provides the foundation of the role of the US dollar in the international monetary system, but it is the US dollar's financial role that provides the basis for the United States to maintain and extend its global activities.
While after 1919 and after 1945 the United States emerged not only as the largest international creditor but also as the major industrial power, the US has become an international debtor since the 1980s and is confronted with a weakening industrial base. Also, in contrast to the earlier world wars and the other conflicts, the economies of Russia, Western Europe, and Southeast Asia did not lie in ruins when the Cold War ended. As to their productive capacity and financial resources, these regions are now on an even footing with the United States.
For a while it appeared as if the international monetary system that emerged in the 1990s could be interpreted as a new version of the older Bretton Woods System, whose structure foresaw a central role for the US dollar in the post–World War II era. While the parallels fit insofar as the current system provides similar benefits to the participants, the present structure is even more flawed than the older scheme, which broke down due to its inner contradictions.
Bretton Woods
Like the earlier Bretton Woods System (BW1), the current system (BW2) is characterized by the pegging of foreign currencies to the US dollar or using the dollar as the currency of reference. This time, it is mainly Southeast Asian countries, particularly China, that practice this policy in an informal way. Through this arrangement, these economies in Southeast Asia receive a similar advantage as was once enjoyed by the Western European countries when their undervalued currencies gave them a competitive advantage that helped to rebuild their industrial base after the Second World War. Once this reconstruction stage was completed, the BW1 system fell apart, and the Europeans began to build their own currency system. The decoupling of the European currencies from the dollar progressed step by step and finally led to the introduction of the euro in 1999. As of now, the euro is equal to the US dollar in the size of its internal use, yet as a global currency, and particularly as an international reserve currency, the US dollar still dominates majestically (Figure 2).
Figure 2: Shares of the major currencies in the international monetary system. Source: European Central Bank
In the recent past, it has mainly been the central banks of Southeast Asia, foremost China’s, that have accumulated US dollars as their international reserves. However, there is little doubt that their willingness to finance US deficits and to hold on to a weakening currency will not last forever. As happened in Europe before, once the prime goal of these countries is fulfilled — industrial development based on exports with the help of undervalued currencies — Southeast Asia will move out of the dollar linkage.
The Bretton Woods System as it was established by the end of World War II bestowed an “exorbitant privilege” to the United States when the dollar became the point of reference for the international currency system following the Bretton Woods Accord. With the other member countries fixing their currencies to the US dollar, and the US dollar officially fixed to gold at $35 per troy fine ounce, it seemed as if an ideal construction was found in order to avoid international monetary disruptions and to provide the framework for global economic expansion.
The gold anchor was aimed at preventing an excessive production of US dollars by the US government. When foreign countries had a trade surplus, they were formally allowed, according to the Bretton Woods Accord, to exchange the excess dollars for gold from the American Treasury. With a stable parity between dollar and gold, this would have restricted dollar creation. France took the agreement literally and demanded gold from the United States instead of accumulating dollars as international reserves. Yet other surplus countries such as Japan and West Germany refrained from that option. With their exchange rates kept competitive, Japan and West Germany embarked upon an export-led growth strategy that sped up their economic recovery and made them industrial powers again.
For the United States, the BW1 system provided a special privilege and it did not take long for the United States to abuse it. Pursuing the goal of expanding the welfare state along with ever-more-active foreign military involvements, the United States expanded the money supply drastically. The discrepancy began to widen between the stock of gold in the vaults of the Federal Reserve and the dollars in circulation in the world. It became obvious that the US government no longer had the means to fulfill the original agreement of making foreign currencies exchangeable into gold. By the late 1960s, the dollar shortage of the 1950s had turned into a dollar glut. World price inflation began its rise.
Originally in the BW1 treaty, it was stipulated that the modification of currency parities should be an exception rather than a rule. But in the course of the 1960s, the international monetary system entered into a phase of high instability when fixing and re-fixing of foreign currencies to the dollar became a huge concern. The perverse monetary system that emerged created a bonanza for currency speculators. The candidates for exchange rate revaluation — such as Germany or Japan — were easy to identify. By taking out a dollar loan, changing the money at the fixed rate into German marks or Japanese yen, and then depositing the amount, leverage could be applied, and profits were guaranteed when the revaluation of the foreign currencies occurred — as was not hard to foresee. The risk was minimal and largely confined to bearing the cost of the interest rate differential between the rate of the dollar loan and the deposit rate in the German or Japanese money markets.
Long Live the Dollar
In the late 1960s, the international monetary system had transmogrified into a source of global liquidity creation that originated from the United States but forced also other nations to import this inflation. Inflation-fighting central banks, such as the German Bundesbank, could not effectively apply restrictive instruments. Given that the interest rate differential was the prime risk factor for currency speculators, a restrictive monetary policy with higher interest rates in the revaluation candidate country would attract even more hot money and would have made the speculation even less risky. Central banks abroad, particularly the German Bundesbank and the Bank of Japan, massively accumulated US dollars as international reserves when they held their exchange rates fixed to the dollar at the undervalued parity. Yet by buying up the excess offer of US dollars with their own currency, these countries expanded their own monetary base and laid the foundation for inflation at home.
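The one-way bet available to speculators under the fixed parities can be sketched with assumed numbers — the parities, interest rates, and loan size below are hypothetical, not historical figures:

```python
# Stylized revaluation bet under fixed parities (all numbers assumed).
loan_usd        = 1_000_000   # dollar loan taken out by the speculator
rate_old        = 4.00        # DM per USD at the old fixed parity (hypothetical)
rate_new        = 3.60        # DM per USD after a 10% revaluation of the mark
usd_loan_rate   = 0.06        # annual interest on the dollar loan (assumed)
dm_deposit_rate = 0.03        # annual interest on the DM deposit (assumed)

# Convert the borrowed dollars into marks and hold them on deposit for a year
dm_held = loan_usd * rate_old * (1 + dm_deposit_rate)

# Repayment owed on the dollar loan after a year
usd_owed = loan_usd * (1 + usd_loan_rate)

# Case 1: the mark is revalued, as widely expected
profit_if_revalued = dm_held / rate_new - usd_owed

# Case 2: no revaluation; the loss is only the interest rate differential
loss_if_unchanged = dm_held / rate_old - usd_owed

print(round(profit_if_revalued), round(loss_if_unchanged))  # 84444 -30000
```

The asymmetry is the point: a bounded loss (the interest differential) against a large gain whenever the widely anticipated revaluation finally arrived.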
In 1971, with the so-called "Smithsonian agreement," a final attempt was made to save the old system when the United States devalued its currency against gold and a series of other currencies. However, soon thereafter it became obvious that there was no chance of revival for the old regime. In 1973, with the adoption of the new rule that each country could choose its own currency arrangement, the Bretton Woods System was officially declared dead.
Since then, the US dollar has entered into a long decline, interrupted by two episodes. Under the Reagan presidency, the Cold War entered into its final period, and the dollar became the currency of refuge for some time. The US victory in this battle appeared as a replay of the endings of World War I and World War II with the United States emerging for a third time on top of the world. In the 1990s, the triad of global dominance seemed well in place for the United States: unrivalled military might, a booming and innovative economy, and the status of undisputed issuer of the global currency. The US dollar experienced another period of strength. Since 2002, however, the long-term trend toward a weaker dollar is back in place, interrupted by lower peaks of the waves of strength (see Figure 3).
Figure 3: US Dollar Index, 1965–2019. Source: tradingeconomics.com
The Dollar and US Foreign Policy
In the 1990s, the monetary policy of the United States became an instrument of a grand geostrategic enterprise. The neoconservative movement took the constellation that emerged in the 1990s for granted and implemented a policy based on a philosophy that assumed, with almost religious confidence, that it was the duty and right of the United States to be the hegemon of the 21st century. In contrast to the time after the two world wars, however, the rest of the world did not lie in ruins. After the two world wars it was the US industrial base that laid the foundation for the role of the US dollar; now the basis for the global role of the United States is not industrial superiority but its insatiable appetite for private and public consumption. The current underpinning of the geostrategic supremacy play of the United States is the US dollar itself, in its role as the major international reserve and trade currency. It is a system without a proper foundation, similar to traditions that live on for a prolonged period even when the reasons for their existence have vanished.
Changing of the Guard
The easy monetary policy of the United States has accelerated the de-industrialization at home and has fostered industrialization abroad (predominantly in China and in the rest of Southeast Asia); it has produced a situation that stands in sharp contrast to the end of World War I and World War II. Under the new BW2 system, the United States is no longer the largest creditor with the largest industrial base, but instead has become the largest international debtor. Imperial politics requires expansive monetary policy, and the consequence of it shows up in persistently high trade deficits and a deteriorating external investment position (Figure 4).
Figure 4: United States net international investment position, 1976–2019 (in million US-dollars). Source: tradingeconomics.com
Being the issuer of a global currency provides huge benefits that come with a curse. Increased private and public consumption possibilities come from the privilege of getting goods from abroad without the necessity of producing an equivalent amount of tradable export goods. While other countries have to export in order to pay for their imports, the sovereign who emits a global currency is exempt from adhering to the most fundamental law of economic exchange. This sets domestic resources free for the expansion of the state, particularly military power. The more such an imperial power extends its military presence, the more its currency becomes a global currency, and thereby new expansionary steps can be financed. Expansion becomes a necessity.
Over time, however, the divergence widens between the weakening industrial base at home and the extended global role. With goods coming from abroad for which there is no immediate need to pay with sweat and effort, the domestic culture changes from an ethics of production to hedonism. Creeping corruption and cronyism undermine the political system. With resources set free because of imports, the production of goods at home shifts to fancy activities. The cycle of "panem et circenses" has been the fate of all empires.
The current global position of the United States is similar to that of Spain in the period of its decline. Already economically hollow, Spain tried desperately to hang on to its outposts and "possessions" around the globe while the domestic economy transmogrified into a public-service and militarized economy. In the end, the United States gave the coup de grace to the Spanish Empire by taking away Cuba, Puerto Rico, and the Philippines. A new phase of US geographic expansion and dominance began in 1898, and the stage was set for the United States to become the imperial power of the 20th century.
History, and in particular economic history, always shows both common features and differences, and indeed, the American Empire is different from some of the former empires. Yet what the United States has in common with former imperial states is that at some point the military extension becomes too complex to be handled efficiently and thus becomes too costly.
The discrepancy between the relative position of the US economy in the world on the one hand and the relative position of the United States as to its military presence and the role of the US dollar on the other hand is moving toward a cracking point. This leads to the conclusion that in a world where the economic strength of the United States is diminishing relative to other countries and regions, there will be less and less of a place for the US dollar privilege.
Despite the factors that in 2007 justified the expectation of a coming demise of the dollar, the American currency experienced a new spring due to the financial crisis of 2008. With little else in place for shelter, the US dollar served as a safe haven. It remains to be seen whether this will also be the case when the next financial disaster happens.
Economist Patrick Newman relays his adventurous tale of deciphering Murray Rothbard’s handwritten manuscript on early American history. Needless to say, Rothbard’s take is not what you learned in school.
When Donald Trump was running for President he accused China’s trade policies and the resulting U.S. trade deficit of being “the greatest theft in the history of the world.”
His position on China has remained unchanged. Trump recently, again, denounced China for manipulating its currency in order to weaken the yuan. Trump argues that a weaker yuan, in other words a stronger dollar, will increase the U.S. trade deficit and harm the U.S. economy.
Both sides on this exchange rate issue want to have a weaker currency. First of all, notice the hypocrisy in Trump’s statement. He denounces China for wanting a weaker yuan because Trump wants a weaker dollar. It’s bad for China to implement policies that weaken its currency, but it’s good for the U.S. federal government to have the same goal of weakening its currency. Trump is criticizing China for doing the exact thing that Trump wants to do.
But beyond the hypocrisy, note that we cannot even be certain that a weaker dollar will lead to a smaller U.S. trade deficit. The trade deficit is the amount of imports, in dollar terms, minus the amount of exports, again in dollar terms. A weaker dollar leads to higher prices for U.S. imports (and thus fewer imports) and lower prices for U.S. exports (and thus more exports). But if we pay higher prices for imports while buying fewer of them, the dollar amount of imports may not decrease. If the effect of higher prices outweighs the effect of a lower quantity of imports, then the weaker dollar would lead to more imports in dollar terms.
Similarly, if the price effect of lower prices for our exports outweighs the effects of selling more exports, then the dollar amount of exports would decrease. In other words, the relationship between the dollar exchange rate and the trade deficit is determined by the price elasticities of demand for exports and imports. (In international trade theory this is called the Marshall-Lerner condition.)
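A minimal numeric sketch of this elasticity argument, with assumed figures (the base import value, depreciation size, and elasticities are invented for illustration):

```python
# Stylized Marshall-Lerner check with assumed numbers.
# A 10% dollar depreciation raises import prices in dollars; the quantity
# bought falls in proportion to the price elasticity of import demand.
depreciation = 0.10
imports_base = 100.0  # billions of dollars, assumed

def dollar_value_of_imports(base_value, elasticity):
    price_factor = 1 + depreciation                 # dollar prices rise 10%
    quantity_factor = 1 - elasticity * depreciation  # quantity falls per elasticity
    return base_value * price_factor * quantity_factor

# Inelastic demand (elasticity 0.4): the dollar value of imports RISES
print(round(dollar_value_of_imports(imports_base, 0.4), 1))  # 105.6

# Elastic demand (elasticity 1.5): the dollar value of imports falls
print(round(dollar_value_of_imports(imports_base, 1.5), 1))  # 93.5
```

Only in the elastic case does depreciation shrink the import bill; the same logic applies symmetrically to export revenue, which is the substance of the Marshall-Lerner condition.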
Next, let’s assume that a weaker dollar does lead to the Trumpian goal of reducing the trade deficit. Here we see a contradiction between this policy and Trump’s other trade policy goals. Trump wants U.S. and foreign companies to build businesses and employ workers here in the U.S. instead of in other countries and he believes that his tariff policies will lead to this result. That is to say he wants to increase the amount of capital investment in the U.S. Trump believes that tariffs will increase the amount of foreign investment in the U.S. and/or decrease the amount of U.S. investment overseas.
In the government’s balance of payments, the net capital flow is the amount of capital flowing into the U.S. minus the amount flowing out. Trump wants to increase this net capital flow.
The problem here is that the amount of net capital flows is correlated with the trade deficit. Dollars that flow out of the U.S. due to the trade deficit tend to flow back into the country as foreign investment. If the trade deficit decreases, say due to a weaker dollar, then foreigners have fewer dollars to invest in the U.S. Net capital flows will decrease.
Trump seems to want (I say “seems to want” because his position on these matters is often unclear) a smaller trade deficit and increased net capital flows into the U.S. He can’t have both. If the trade deficit decreases, then foreign investment will decrease.
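The identity behind this trade-off can be shown in a toy calculation. The deficit figures below are assumed, and real balance-of-payments accounting also includes income flows and official reserves, which are ignored here:

```python
# Toy balance-of-payments identity (income flows and reserves ignored).
# Dollars spent abroad on net imports must come back as purchases of US
# assets, so in this simplified accounting the trade deficit is financed
# one-for-one by a net capital inflow.
def net_capital_inflow(trade_deficit):
    return trade_deficit

before = net_capital_inflow(600.0)  # billions; assumed current trade deficit
after = net_capital_inflow(400.0)   # after a hypothetical deficit reduction

print(before - after)  # 200.0 fewer billions of foreign investment
```

Shrinking the deficit necessarily shrinks the inflow, which is why the two goals cannot be achieved at once.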
Trump’s position on exchange rates is hypocritical, it’s uncertain whether or not weakening the dollar relative to the yuan will accomplish his goal of reducing the trade deficit, and a smaller trade deficit will lead to less capital investment in our economy.