To Infinity and Beyond with the Iterative Conception

By Brandon Mitchell

In the previous two articles, we covered why incoherence and the unintuitive nature of set self-membership led to the search for another method of grounding the Zermelo-Fraenkel axioms of set theory. We discussed how the iterative conception is perhaps a more straightforward way of understanding the universe of sets, and how the Pairing and Null Set Axioms follow from the axioms which constitute the iterative conception.

We now turn to the final three axioms of Zermelo-Fraenkel: Union, Power Set and Infinity.

Axiom of Union – ∀A∃B∀x (x∈B ↔ ∃D(x∈D ∧ D∈A))

The broad gist of the axiom of union is that for any collection of sets A there is another set B which contains exactly the members of the sets in A.
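The idea can be sketched in a few lines of Python (a toy illustration only, with sets modelled as Python sets; the name `union_of` is mine, not standard notation):

```python
# Toy illustration of the Axiom of Union: given a collection A of sets,
# build the set B whose members are exactly the members of the members of A.
def union_of(A):
    B = set()
    for D in A:   # D ranges over the sets that are members of A
        B |= D    # add every x with x in D
    return B

print(union_of([{1, 2}, {2, 3}, {4}]))  # {1, 2, 3, 4}
```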

So let’s assume the negation of the axiom of union giving us:

∃A∀B∃x¬(x∈B ↔ ∃D(x∈D ∧ D∈A))

This is to say that there is a collection of sets for which there is no union set.

Let’s call this collection H, so:

∀B∃x¬(x∈B ↔ ∃D(x∈D ∧ D∈H)).

By axiom 7 in the iterative conception, all sets are formed at some stage. And so H was formed at some stage. Let’s call that stage S.

By axiom 9, all possible collections of elements formed at previous stages are formed at a given stage.

So, at least one member of H was formed at S-1. Let’s call that member G.

By axiom 9, at least one member of G was formed at S-2. Let’s call that member F.

So, to recap: F ∈ G ∈ H.

Changing tack, by axiom schema 10, for all stages s: ∃y∀z(z∈y ↔ (∃D(z∈D ∧ D∈H) ∧ ∃t(tEs ∧ zFt)))

As a reminder, the point of the axiom schema is to capture the notion that at every stage every possible collection of items formed at previous stages appears at that stage.

The statement above says that there is a collection y which contains all z’s where z is a member of a member of H and was formed before s. Remember that we said that H was formed at the arbitrary stage S, and that every member of a member of H was formed by S-2. So we see that the union of H appears by S-1, contradicting our assumption that H has no union set.

Power Set Axiom – ∀A∃P∀B(B ∈ P ↔ ∀C(C ∈ B → C ∈ A))

The power set axiom says that for any set A there is a set P of all the subsets of A.

Remember the definition of subset: A ⊆ B ↔ ∀C(C ∈ A → C ∈ B). This is to say that every member of A is also a member of B. Note that under this definition, every set is a subset of itself.
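Both notions are easy to play with in Python (a toy sketch; `is_subset` and `power_set` are my own illustrative names):

```python
from itertools import chain, combinations

# Toy sketch: the subset test from the definition above, and the power set of A.
def is_subset(A, B):
    return all(c in B for c in A)   # every member of A is a member of B

def power_set(A):
    elems = list(A)
    subsets = chain.from_iterable(combinations(elems, r)
                                  for r in range(len(elems) + 1))
    return {frozenset(s) for s in subsets}

A = frozenset({1, 2})
print(is_subset(A, A))     # True: every set is a subset of itself
print(len(power_set(A)))   # 4: {}, {1}, {2}, {1, 2}
```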

So now let’s address how the power set axiom follows from the axioms of the iterative conception.

Again the straightforward thought is that at each stage all possible collections of sets formed at the previous stages are formed. Let’s say there is a set A formed at some stage s. As A ⊆ A for all A, we expect A ∈ P. Axiom 10 then tells us that P will be formed at stage s+1.

And now, more formally consider the negation of the power set axiom:

∃A∀P∃B¬ (B ∈ P ↔ ∀C(C ∈ B → C ∈ A)).

This says that there is a set A which doesn’t have a power set P.

Let’s call such a set D.

So: ∀P∃B¬ (B ∈ P ↔ ∀C(C ∈ B → C ∈ D)).

So every set P either has a member B which has members not in D, and so is not a subset of D, or there is some B which is a subset of D but is not in P.

But by axiom 10, for all stages s: ∃y∀z(z∈y ↔ (∀C(C ∈ z → C ∈ D) ∧ ∃t(tEs ∧ zFt)))

This says that there is a set y of all sets z which contain only members of D, where z was formed before a given stage s. This means that the set P above exists if we can find the stage at which it is formed.

Remember that D ⊆ D and so D ∈ P. We said that D was formed at some stage S, so by our version of Axiom 10, if s is S+1 then y is P.

More formally:

∃y∀z(z∈y ↔ (∀C(C ∈ z → C ∈ D) ∧ ∃t(tEs ∧ zFt))). Let’s call such a y, V.

So: ∀z(z∈V ↔ (∀C(C ∈ z → C ∈ D) ∧ ∃t(tEs ∧ zFt))).

And, instantiating P as V in the negated axiom: ∃B¬(B ∈ V ↔ ∀C(C ∈ B → C ∈ D)).

Let’s call such a B, K. So: ¬ (K ∈ V ↔ ∀C(C ∈ K → C ∈ D)).

So: K∈V ↔ (∀C(C ∈ K → C ∈ D) ∧ ∃t(tEs ∧ KFt)). And here we arrive at the contradiction. Our axiom says that at any stage s there is a set V of the subsets of D formed before s, but our negated axiom denies that.

Now the stage at which V is formed is determined by the stage at which K is formed. As we’ve said above, if D is formed at stage s, then K is formed by stage s and V is formed at stage s+1.

Infinity

There is a set I which contains the null set and for all x in I the von Neumann successor of x is in I.

The von Neumann successor of x is the union of x and {x}. As such, the von Neumann successor of {a, b} would be the union of {a,b} and {{a,b}} which is {a, b, {a, b}}.
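The successor operation can be sketched in Python (a toy illustration with sets modelled as frozensets; `succ` is my own name for it):

```python
# Toy illustration of the von Neumann successor: succ(x) = x ∪ {x}.
def succ(x):
    return x | frozenset({x})

ab = frozenset({'a', 'b'})
print(succ(ab) == frozenset({'a', 'b', ab}))  # True: {a, b, {a, b}}

zero = frozenset()   # the null set
one = succ(zero)     # {0}
two = succ(one)      # {0, {0}}
print(len(zero), len(one), len(two))  # 0 1 2
```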

The axiom of infinity first claims that there is a set which contains the null set.

We established that the null set appears at stage 1.

It also says that this set contains the von Neumann successor to all its members.

If x appears at a given stage s, then {x} appears at s+1 by Axiom 10. At stage s+2 the union of x and {x}, in other words, the von Neumann successor of x appears.

Set → Von Neumann successor

{0} → {0, {0}}
{0, {0}} → {0, {0}, {0, {0}}}

At any stage s, by axiom 10, ∃y∀z(z∈y ↔ (z=z ∧ ∃t(tEs ∧ zFt))), which is to say that at any stage there is a set of all sets formed at earlier stages. And so at any stage there is a set which contains the null set and the von Neumann successor to every member, so long as that member was formed two stages earlier.

And here seems to be the purpose of axiom 6: there is a stage, not the first one, which is not immediately after any other stage, and which therefore, presumably, doesn’t just include members formed at an immediately previous stage. Essentially, axiom 6 allows us to think of what would exist in light of an endless succession of stages.

Boolos, with whose article we began this series, puts it this way, referring to the version of Axiom 10 above: ‘And if y contains x, y contains all successors of x (and there are some), for all these are formed at stages immediately after stages before s and, hence, at stages themselves before s.’

And this kind of locution is precisely why writing on the iterative conception myself, with perhaps a bit more clarity, felt worthwhile.

Iterating a Concept of Sets – Part II

By Brandon Mitchell

Picking up where we left off in part one, we came to the conclusion that the naïve conception is not a viable starting point for a theory of sets. That conception was born of the idea that the universe of sets can be defined by simply stating, for any predicate, that there is a set of things which do and a set of things which do not satisfy that predicate.

We arrived at this conclusion, in part, because of the Russell set (∃y(Sy ˄ ∀x(x ∈ y ↔ x ∉ x))) and the contradiction which it produces and in part because of the confusing or inelegant nature of set self-membership.

So we are left with a question as to how we are going to define and limit a universe (or universes) of sets.

Boolos thinks it important to consider the best alternative to the naïve theory as one which achieves maximum elegance and is not developed with a consciousness of the Russell set.

To achieve these virtues, he turns to what we’ll call the ‘iterative conception’, attributed to Ernst Zermelo and Abraham Fraenkel, both German logicians of the late 19th and early 20th centuries.

Broadly, the iterative conception begins with a collection of individuals which are not sets. At the first stage, it forms every possible collection (set) of those individuals. It then proceeds to subsequent stages, and repeating this process it forms every possible collection of those sets and individuals which appear at previous stages.

Though it may sound complex, the idea is instead very intuitive.

Let’s give an example.

Begin with some (non set) individuals. We could call them a and b. Then at the beginning stage, form all possible collections of the individuals. In this case {a}, {b} and {a,b} along with the collection of no individuals (being a possible collection of individuals), the null set. I’ll denote the null set with {0}. At the subsequent stage, form all possible collections of items (sets or non-sets) which appear at previous stages.

Note that stage 2 contains every possible collection of the things which appear at stage 1. Similarly, when the process moves to stage three, that stage contains every possible collection of things which appear at earlier stages. This will include collections of sets whose members are only those which first appeared at stage 1. Here we introduce the concept of ‘forming’: a set is ‘formed’ at stage three iff it does not appear at any preceding stage.
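This forming process can be sketched in Python (a toy model over the individuals a and b; `all_collections` is my own illustrative helper, not part of the formal theory):

```python
from itertools import chain, combinations

# Toy model of the iterative conception: at each stage, form every
# possible collection of the items available from earlier stages.
def all_collections(items):
    items = list(items)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))}

available = {'a', 'b'}               # the non-set individuals
stage1 = all_collections(available)  # {}, {a}, {b}, {a, b}
available |= stage1

# A set is 'formed' at stage 2 iff it does not already appear at stage 1.
stage2 = all_collections(available) - stage1
print(len(stage1), len(stage2))  # 4 and 60 (2**6 collections, minus the 4 old ones)
```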

[Diagram: Iterative Theory 2]

And so, taking this tack, that of creating a universe of collections based upon a definite group of fundamental objects, yields the iterative conception of sets. The conception can be put forward in a series of axioms which can in turn, along with extensionality, be used to ground the Zermelo axioms of pairing, union, power set and infinity.

Axioms of the Iterative Conception

For the axioms, we establish a binary relation ‘earlier than’ (aEb: a is earlier than b) for stages and a binary relation ‘formed at’ (aFb: a is formed at b) for sets and a unary relation ‘is a stage’ (Sx: x is a stage).

The axioms are:

  1. ∀x(Sx → ¬(xEx)) – No stage is earlier than itself, i.e. stage 1 is not earlier than stage 1, etc. This axiom is defining a property of the ‘earlier than’ relation.
  2. ∀x ∀y ∀z(Sx ∧ Sy ∧ Sz → ((xEy ∧ yEz) → xEz)) – ‘Earlier than’ is transitive. Stage 2 is earlier than stage 3. Stage 1 is earlier than stage 2 so stage 1 is earlier than stage 3.
  3. ∀x ∀y(Sx ∧ Sy→(xEy ∨ x=y ∨ yEx)) – ‘Earlier than’ is connected; that is, every stage stands in some order relation to every other stage, i.e. either stage 3 is earlier than stage 1, or stage 3 equals stage 1, or stage 1 is earlier than stage 3.
  4. ∃x(Sx ∧ ∀y((Sy ∧ ¬(x=y)) → xEy)) – There is an earliest stage. In the example, stage 1 was that stage.
  5. ∀x(Sx → ∃y(Sy ∧ xEy ∧ ∀z(Sz → (zEy → (zEx ∨ z=x))))) – Every stage has an immediate successor, i.e. stage 2 in the example followed immediately after stage 1, etc.
  6. ∃x(Sx ∧ ∃y(Sy ∧ yEx ∧ ∀z(Sz → ((yEz ∧ zEx) → ∃a(Sa ∧ zEa ∧ aEx))))) – There is a stage, let’s call it X, and there is another stage, let’s call it Y, earlier than X. To say it again, there are two stages, X and Y, and Y is earlier than X. For any other stage Z later than Y and earlier than X, there is another stage A which is later than Z and earlier than X. This is to say that there is a stage, not the earliest one, that is not immediately after any other stage.
  7. ∀x ∃s(Ss ∧ xFs ∧ ∀t(xFt → t=s)) – Every set is formed at some unique stage
  8. ∀x∀y∀s∀t((Ss ∧ St ∧ xFs ∧ yFt ∧ y∈x) → tEs) – Every member of a given set was formed at an earlier stage than that set
  9. Where s, t and r are stages: ∀x∀s∀t((xFs ∧ tEs) → ∃y∃r(y∈x ∧ yFr ∧ (t=r ∨ tEr))) – For any set x and stages s and t, if x was formed at s and t is earlier than s, then x has a member y formed at some stage r which either is t or comes after t. In particular, every set formed at a given stage has at least one member which was formed at the immediately previous stage.
  10. This is a variation of the axiom schema of separation adapted to the iterative conception. Boolos calls them ‘specification axioms.’ They are of the form

∀x(Sx → ∃y∀z(z∈y ↔ (Z ∧ ∃t(tEx ∧ zFt)))), saying that at any given stage there is a set of just those sets of which Z is true and which were formed before that stage. This holds for all formulas Z. So if stage 1, as in the diagram, holds every possible collection of sets formed at stage 0, then stage 2 could only introduce new collections in so far as they contain sets formed at stage 1.
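An instance of the schema can be sketched as a filter in Python (a toy illustration; `specify` is my own name for it, and the stage-1 sets below come from the earlier a-and-b example):

```python
# Toy sketch of a specification axiom: at a given stage there is a set
# of just those sets formed earlier which satisfy a chosen formula Z.
def specify(formed_earlier, Z):
    return frozenset(z for z in formed_earlier if Z(z))

# Sets formed at the first stage over the individuals a and b:
stage1 = [frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})]

# Take Z(z) to be "a is a member of z"; the resulting set appears at the next stage.
y = specify(stage1, lambda z: 'a' in z)
print(y == frozenset({frozenset({'a'}), frozenset({'a', 'b'})}))  # True
```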

So how do we arrive at the Zermelo axioms from the characteristics of the iterative conception? In this article, we’ll discuss the Null Set and the Pairing Axiom. In the next article, we’ll pick up Union, the Power Set and Infinity.

Null Set

Take z=z for Z in the specification axiom, giving ∀x(Sx → ∃y∀z(z∈y ↔ (z=z ∧ ∃t(tEx ∧ zFt)))). At every stage there is a set of all sets formed prior to that stage. Since this holds for all stages, it holds for the earliest stage, before which no sets were formed. So there is a set of no sets: the null set.

It was helpful for me to review our naive or alternative proof for the null set, that of creating a separation axiom replacing Fx with x ≠ x, saying that there is a set of just those things which do not equal themselves. We could create a specification axiom with x ≠ x and it would prove that there is a null set. But it would not show at what stage the null set was formed, for in using x ≠ x, both sides of the conjunction (x ≠ x ∧ ∃t(tEx ∧ xFt)) would be false, leaving no z in y. With Boolos’ proof, he shows that y becomes the null set at the first stage because ¬∃t(tEx ∧ zFt).[1]

Pairing

Zermelo’s pairing axiom [∀x ∀y ∃z ∀a (a ∈z ↔ (a=x ∨a=y))] said that for any two sets, there was another set that contained just those two. We see that the pairing axiom follows from the iterative conception as well.

Suppose the pairing axiom were false, so [∃x ∃y ∀z ∃a ¬(a ∈ z ↔ (a=x ∨ a=y))]: for some x and y, every set z either contains something other than x and y, or omits one of them; no set contains exactly those two. Let x and y be g and h respectively.

So ∀z ∃a ¬(a ∈ z ↔ (a=g ∨ a=h)): no set contains exactly g and h. Since g and h exist, each appeared at some stage; let ‘s’ be the later of those stages. By axiom 5 there is a stage s+1. Let ‘t’ be that stage.

At stage t our specification axiom holds: ∃n∀p(p∈n ↔ ((p=g ∨ p=h) ∧ ∃m(mEt ∧ pFm))).

As both g and h were formed prior to t, there is a set n which contains only g and h. This contradicts the negated pairing axiom, so the pairing axiom holds.
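The proof’s shape can be mirrored in the toy stage model from earlier (illustrative only; `all_collections` is my own helper, not Boolos’s notation):

```python
from itertools import chain, combinations

# Toy check of pairing: if g and h are both available before a stage t,
# then the pair {g, h} is among the collections formed at t.
def all_collections(items):
    items = list(items)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1))}

g = frozenset()               # say, the null set
h = frozenset({frozenset()})  # and its singleton
stage_t = all_collections({g, h})
print(frozenset({g, h}) in stage_t)  # True
```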

Conclusion

With these two proofs, we begin to see some of the elegance of the iterative conception take shape as it gets us where we have wanted to go under the naïve notion but in a way which makes much better sense of the notion of ‘set’.

In the next (and final) article in this series, we’ll take a look at three final axioms of Zermelo-Fraenkel, Union, Power Set and Infinity.



[1] What is now interesting to consider is whether or not there is a null set at every stage, given the specification axiom with x ≠x.

Iterating a Concept of Sets

By Brandon Mitchell

If you:

A)     Recognized the (admittedly belaboured) pun in the title and

B)      Then opened the article to read it

then this article is likely beneath you.

If however, you were just interested in an article about sets, then this very well may prove both interesting and helpful!

Maybe you too, like me, had the naive concept of sets presented at some point in your academic career. The lecturer quickly proved it to be incoherent and with equal rapidity moved on to the briefest possible introduction of the ‘iterative conception’. This was followed shortly thereafter by a presentation of the standard Zermelo axioms.

If so, then it’s likely that you too, like me, felt as if you were the fat kid in an academic game of dodge-ball, suffering lasting wounds to your most sensitive parts, as axioms, lemmas and proofs were thrown at you while all your classmates seemed well able to deal with this dizzying and exhausting experience.

Maybe you too, like me, found that introduction both perplexing and disconcerting, leaving one wondering where we began and why…immediately followed by wondering where precisely we ended up…and why…

Well, that was my relationship with the foundational aspects of set theory until recently coming across a paper by George Boolos appearing in a 1971 edition of the Journal of Philosophy[1]. In it, he helpfully takes the reader through what has been called the ‘naive’ conception, then introduces the iterative conception and most importantly grounds the Zermelo axioms in the characteristics which make up the iterative conception.

My hope here is to reiterate these steps, taken at dizzying speed in the lecture hall, with perhaps a bit more clarity.

Boolos begins, as do I, with the notion of a set itself and Georg Cantor’s definition that a set is ‘a totality of definite elements that can be combined into a whole by a law’[2]. Putting aside the relative obscurity of many of the concepts employed by Cantor, a few things about sets remain. They are meant to be a definite collection of things. Being a definite collection of things and only a definite collection of things, sets are to be identified by their members. Should two sets contain all and only the very same things, then they are the same set.

Axiom of Extensionality: ∀A ∀B ∀x ((x ∈ A ↔ x ∈ B) → A = B)

Combine the extensionality concept with a collection of predicates which lack vagueness and it seems natural, given the law of excluded middle, to suppose that for any given predicate there are two sets, one of things to which the predicate applies and the other those things to which the predicate does not apply. Boolos sums the thought up as ‘Every predicate has an extension.’

Naive ‘Separation’: For all predicates F, ∃B ∀x (x ∈ B ↔ Fx)

The thought here is that the totality of predicates then defines the universe of sets. That is, all the sets that we can talk about, describe and otherwise play with. Extensionality and Naive Separation combine to form what we’ll call the ‘Naive Theory’.

By specifying certain specific formulas for Fx, we get some of the familiar axioms:

Sy = y is a set

∃y(Sy ˄ ∀x(x ∈ y ↔ x ≠ x)) – Null Set

∃y(Sy ˄ ∀x(x ∈ y ↔ (x = z ˅ x = w))) – Pairing

∃y(Sy ˄ ∀x(x ∈ y ↔ ∃w(x ∈ w ˄ w ∈ z))) – Union

∃y(Sy ˄ ∀x(x ∈ y ↔ (Sx ˄ x=x))) – Universal Set

Unfortunately there is a predicate that we can put in for Fx which leads to a logical contradiction.

Consider:

∃y(Sy ˄ ∀x(x ∈ y ↔ x ∉ x))

This says that there is a set of things which are not members of themselves. Let A be such a set.

Then:

∀x(x ∈ A ↔ x ∉ x)

Since the thought is that there are things which are F and things which are not, anything that we may talk about either is or is not in the extension of F. So the universal quantifier ranges over our set A itself.

So:

A ∈ A ↔ A ∉ A

As a result, we can’t say that there is a set corresponding to the extension of any predicate.

And while the above is the most decisive objection to the naive concept of sets, there are yet others.

Above, we arrived at the notion of a universal set by placing x = x for Fx. Since everything is self-identical, the universal set contains everything, including itself. And as you’ll remember, two sets are the same when they have the same members; sets are defined in terms of their members. But the universal set has itself as a member.

If x = {a,b,c,x} and y = {a,b,c,y} does x = y? It would seem that it does. But this is still an awkward situation.

Similarly awkward is:

x = {a,b,c,y} and y = {d,e,f,x}. So x = {a,b,c,{d,e,f,x}} which equals {a,b,c,{d,e,f,{a,b,c,y}}} and so on and so forth.

Where a ∈ b, b ∈ c and c ∈ a, we can call these sets ungrounded: a seemingly circular definition holds between them.

And while these issues of ungroundedness and self-membership may not be logically problematic like the contradiction, they might be things we’d like to avoid in our iteration of sets.

So we look for another way for imagining the universe of sets.

In part II, I’ll present this new way, the Iterative Conception.



[1] George Boolos. ‘The Iterative Conception of Set’. The Journal of Philosophy, Vol. 68, No. 8 (April 1971), pp. 215–231.

[2] Georg Cantor. Gesammelte Abhandlungen. Ernst Zermelo (ed.). Berlin, 1932.

On Lombard Street – Part II

‘Political economists say that capital sets towards the most profitable trades, and that it rapidly leaves the less profitable and non-paying trades. But in ordinary countries this is a slow process, and some persons who want to have an ocular demonstration of abstract truths have been inclined to doubt it because they could not see it.’ 

By Brandon Mitchell

What is capitalism?

To be clear, the question is one of definition. That shouldn’t, however, make it a wholly uninteresting question to both ask and answer.

I’ll endeavour to present one possible answer, an answer which you might find interesting or informative.

First, let’s consider the term ‘capital’, as it would seem straightforward that we should want the term in question, ‘capitalism’, to be in some way derivative of ‘capital’. I am going to propose for my own use a wider notion of ‘capital’ than one might commonly encounter. Without arguing at length, I find problematic the traditional connotation of capital as ‘human-produced means of production’, a kind of productive artefact. Such a narrow term has outlived its usefulness in describing how value is created in the economy. Rather, I propose to use the term ‘capital’ to refer to any factor of production: any element necessary for creating value.

And so my definition: (A) Any arrangement is capitalist wherein enterprises compete for the voluntary allocation of capital amongst the diverse owners of capital. (B) This could be restated as any individual having a right to their labour, such a right being twofold. Firstly, they have the right to allocate their labour how they see fit. Secondly, they have a right to the fruits of their labour.

I propose each condition is necessary and sufficient for a system being capitalistic. Though they are not equivalent, I think where we find the one instantiated, we find the other. The tie is maybe neither straightforward nor obvious but nonetheless I maintain that it exists.

If B then A

As labour is a factor of production, and we have defined capital to be any factor of production, A can be restated as ‘Any arrangement is capitalist wherein enterprises compete for the allocation of labour amongst the diverse owners of labour.’

If each individual has the right of allocation of his labour and the right to its fruits, he then owns his labour. When each individual owns his labour, there are diverse owners of labour and enterprises which would make use of labour must attract that labour.

If A then B

To begin, let’s consider the question, ‘what are the conditions necessary for A to persist? At what point has A failed to be the case?’ It has failed to be the case when either the nominal owners of capital are not permitted the voluntary allocation of that capital, or when the nominal owners of all capital are few. In the first alternative B fails to be the case as the owners of labor do not have the right of allocation of that labor. In the latter alternative B fails as individuals would not have the right to their labor at all. The first is serfdom, the second, slavery. This however, simply evidences what we have already shown.

To show that A implies B, let’s focus on the latter alternative. All physical capital could be concentrated in the hands of a few but as long as each individual has both a right to his labor and the fruits of said labor, capital as a whole has not concentrated.

And so what, you might say, of a situation in which there are diverse owners of physical capital but not labor? Such a situation may be capitalist for a moment and a moment only. Every poor investment of physical capital would make its owner both poorer and more the complete slave. For every poor allocation of physical capital, a person has lost forever his ownership of capital of any kind. As those who possess physical capital make poor investments one by one the diverse ownership of capital becomes less and less diverse and, like the game of Monopoly, a small set of people come to control all physical resources. Without the right to one’s own labor there ceases to be diverse ownership of capital.

And so, in order for A to endure, B must be true.

Nonetheless, it is worthwhile to investigate why the ‘right’ to one’s labour is expressed as such. It may not be obvious how the right to allocation and fruits are jointly necessary and sufficient for the endurance of A.

Constituent in A is a concept of ownership which is robust enough to create the market operation upon which the definition is based. With the right to allocate but not the incentive, the notion of capitalism that results is rather hollow.

Naturally the same considerations which apply so clearly in the case of physical capital apply when considering labour as capital. If you take away the right to fruits, you’ve diluted the sense of ‘ownership’ in A, a sense of ownership that would hold for all material capital.

So a free market in labor is both necessary and sufficient for an enduring capitalism[1].



[1] Another un-dealt with objection might run something like this: We can have A with a subset of people not having a right to their labour. This is true. The ante-bellum south might be such an example. Whether or not we admit such a counter argument is a matter of the extent of the ‘diverse’ condition. Where that diversity must be sufficiently great, B only may be satisfactory. Otherwise, we might have some alternate B’ where everyone but a subset of some size has a right to their labour.

Of Friends and Foils

By Oliver Ray

I’m looking after a dog at the moment. He’s a rather fine West Highland White called Woolly. Though I’m not sure he knows he is, since he doesn’t come when he’s called. He has an expressive, almost human face and big bushy white eyebrows. He is fed twice a day and walked twice a day. He also poos twice a day. Ideally the walk and poo are scheduled together. He has a fondness for Europop, with the bass turned up a little higher than usual.

Though quiet, he is a sociable creature and if there is only one person in the house he will seek them out, gesture with his eyebrows and curl up at their feet. If that person is on a different floor to himself, he will methodically check all the rooms on his own floor to make sure they are empty before having to tackle the stairs. He is better at going up stairs than down.

He renders the stoniest of hearts gooey and sentimental. I also like to think that he is trained to kill on command. I yell, “ROSEBUD!” when we have visitors just to make sure. So far, this has only been met with quizzical expressions by both dog and guests. I then have to take them on a tour of the flowerbeds to avoid questions.

The balance of power between us is not always clear. I’m bipedal, can use chopsticks, run a bath etc–by rights Woolly’s natural superior. But I think any creature that has you scrabbling around picking up their shit has the better of you.

Concerns of the canine bowel aside, I have noticed that when one walks a dog, one gets considerably more attention. Prima facie, you are immediately assumed to belong to the dog owning coterie, that exclusive group familiar with the warm sensation of faeces through polythene. This gives you license to exchange words as your hounds initiate mutual proctologic analysis. At first I thought this was friendly conversation, but now I realise it is also to diffuse something like voyeurism.

One also seems to get fonder looks from the opposite sex. I took Woolly for a walk down the King’s Road the other day. I forget why we were there. I think he wanted to pop into Anthropology. He looked very fine indeed and he damn well knew it. Some of his loveliness must have been projected onto me. Many admiring looks were cast in his direction, but also unaccountably in mine. I cannot fathom why this should have been. Could the presence of the pooch engender the assumption that I am a sensitive, caring man and enjoy stroking things? Perhaps, but no more so than that I am a man who keeps bags for poo, handles tins of jellified horse spine and has hair on my clothes. (Though for the record, Woolly doesn’t moult and has a preference for dry mix, but how would they know that?)

On another occasion, Woolly was walking me in the park. A woman came jogging across the grass toward us. She was one of those rare folk that glow when exercising instead of sweating in odd places and coughing up a lung like most of us. Bounding at her dainty heels was some glossy spaniel. Woolly—normally the first to say hello to a fellow quadruped—turned away and nosed vaguely at a pile of leaves. He has a sixth sense for these things. The woman stopped by me as her dog went over to Woolly and we began to talk. She was very lovely. At what one might have optimistically described as the critical moment, Woolly gave me a look that anyone else would have called adorable, but one that I knew better as downright sly. He feigned a polite sneeze and crapped all over the path. For timing, he really is without peer.

Do I control him, or am I his? Is he a furry producer of poop or a cuddly playmate? Is he an attracter of women or does he foil flirtation? How can he be a creature who can’t recognise his own name and yet play a devilish game of backgammon? Perhaps there is no definitive answer to these questions. He is, simply, Woolly.

On Lombard Street – Part I

‘But I maintain that the Money Market is as concrete and real as anything else; that it can be described in plain words; that it is the writer’s fault if what he says is not clear.’

‘The briefest and truest way of describing Lombard Street is to say that it is by far the greatest combination of economic power and economic delicacy that the world has ever seen. Of the greatness of the power there can be no doubt. Money is economical power.’

By Brandon Mitchell

In the introduction to Lombard Street, Bagehot waxes on the economic virtues of a banking society, particularly in relation to his own, English, society and the heart of its banking, Lombard Street, in the City of London. From these opening pages, we can identify three economic advantages he claims accrue to those societies who have a tendency to hold their capital in bank deposits.

Firstly, money deposited in banks is available for investment. If it be deposited payable on demand or for a term, people are able to hold all or a portion of their liquid wealth in a way that makes that wealth available for investment. When that wealth is invested, it begins economic work either in enterprises producing goods and services, enterprises which, without the capital, would not have been undertaken or in capital investment, as in a home loan, where the cost of capital is happily borne out in return for the benefit.

Secondly, the pooling nature of banking lowers the transaction cost of collective investment. He writes, ‘A million in the hands of a single banker is a great power; he can at once lend it where he will and borrowers can come to him because they know he has it. But the same sum scattered in tens and fifties throughout the whole nation is no power at all: no one knows where to find it or who to ask for it.’ Banks serve as an apparatus for gathering wealth. Banks, because they are set up to do that, bear the associated costs relatively efficiently. If each entity which sought investment had to muster the pool of investors themselves, the costs would be much higher.

Thirdly, bankers provide value as investment managers. Bagehot comments on how bankers assess the risk and determine what kind of reward they would need to take that risk. He emphasizes how any plausible venture or good security will result in a loan at some price. Bankers make it their business to make good investments out of wise loans.

On Lombard Street

We can see the synergistic effect of these three benefits of banking when we regard what the situation would be without them. A businessman with a capital intensive venture would seek that capital from individuals, individuals who may not be interested in loaning their capital. He would have to go from person to person, trying to entice them with the opportunity to loan their savings to him. To merely undertake such a process would be daunting for someone who is, say, in the dam building business. Then, he would have to count on a preponderance of individuals to appropriately price the risk in loaning him money. When a cobbler regards his life’s savings, he would likely find it difficult to fully and appropriately appreciate the potential in the hydro-electric market. On demand and short term deposits, however, provide the individual with the liquidity that is their reason for keeping more than modest amounts of money. That money is efficiently pooled in the bank. Large, centralized institutions hold it rather than vast swathes of the population. And, because their business is in making good investments, bankers are in a position to appropriately price risk. All of this combines to allow economic cooperation otherwise unseen.

An organization is created between a bank’s depositors, the bankers, and the bank’s borrowers that allows capital to be deployed in new and innovative ways. Bagehot writes, ‘A citizen of London in Queen Elizabeth’s time could not have imagined our state of mind. He would have thought that it was no use inventing railways (if he could have understood what a railway meant), for you would not have been able to collect the capital with which to make them…Taking the world as a whole-either now or in the past-it is certain that in poor states there is no spare money for new undertakings and that in most rich states the money is too scattered, and clings too close to the hands of the owners, to be often obtainable in large quantities for new purposes.’ Capital resources in a banking society, by contrast, are distributed amongst those who can make profit from them, facilitating innovation and productivity not otherwise possible.

In the next set of ruminations on Lombard Street, I’ll examine some of Bagehot’s introductory comments on the commercial effects of robust capital markets.

Capping Taxes or Capping Revenue?

By Brandon Mitchell

Alex Estorick, an intern at ConservativeHome, has commented on top-end marginal tax rates and an old New Zealand Labour Party proposal to cap the amount of income tax any given individual was expected to pay. It was difficult not to notice a slight mismatch between the rationale for supporting such a policy and that policy’s likely effect.

The rationale is straightforward. Laffer curves are real, and the phenomena which produce them can be observed amongst small groups, like the very wealthy, just as they can across large sections of the economy. Let’s begin by outlining what I consider the two principal mechanisms which lead to the Laffer curve. We can call the first the ‘diminishing marginal utility of additional earnings.’ There comes a point at which the value of the compensation I receive for working that extra day or putting in that extra effort is less than the opportunity cost. I could spend the day with my family or forget the stress of that additional venture. In all sorts of ways we weigh the likely costs and benefits of our efforts and actions, and for large enterprises or very wealthy and talented people, the situation is the same. Should I spend the day at the office or sailing the family to Spain in the yacht? How much an entrepreneur expects to benefit from a new venture, or a talented executive from another promotion, is largely determined by raw economic realities, but taxation is a factor in direct proportion to the size of the top marginal income tax rate. When that rate is 50%, as it is in the UK, it becomes a factor of no small importance.

The second, we can call ‘the return on investment of tax avoidance.’ Individuals and businesses do many things to manage and structure their tax liability. Possibly the best-known example of this is evidenced by the long and diverse list of well-known businesses that are incorporated in Luxembourg or Ireland, but the number and variety of actions taken to limit tax liability are immense. People and businesses alike would prefer to have their money when and where they need it, but to a point they will tolerate these elaborate schemes in order to shelter their wealth. The amount of money saved in tax outweighs both the cost of hiring lawyers and accountants to manage elaborate systems and the inconvenience of often not being able to do what you’d like with your money. The important point here is that tax avoidance measures cost something. Corporations and individuals invest time, effort and resources into managing their tax liability to the extent that there is a return on that investment; that there is more money saved on the tax bill than the hassle and expenditure involved in making those savings.
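For the numerically minded, that cost-benefit reasoning can be sketched in a few lines of Python. The sheltered income, tax rates and avoidance cost below are purely illustrative assumptions, not figures from any real tax code:

```python
# Hypothetical sketch: the return on tax avoidance falls with the tax rate.
# All figures are illustrative assumptions.
def worth_avoiding(income_sheltered, tax_rate, avoidance_cost):
    """Avoidance pays only if the tax saved exceeds the cost of avoiding."""
    return tax_rate * income_sheltered > avoidance_cost

cost = 40_000.0       # assumed cost of lawyers, accountants and inconvenience
sheltered = 100_000.0  # assumed income that could be sheltered

print(worth_avoiding(sheltered, 0.50, cost))  # True: 50,000 saved vs 40,000 spent
print(worth_avoiding(sheltered, 0.30, cost))  # False: 30,000 saved vs 40,000 spent
```

At the lower rate, the same scheme no longer pays for itself, and the rational taxpayer simply pays the tax.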

What both of these mechanisms have in common is that their effect on people’s behaviour grows in proportion to the rates at which tax is levied. The lower the top marginal income tax rate, the more the entrepreneur is compensated for starting that business, making it more likely that he will start the business rather than do something else with his time and money. The lower the taxes on individuals and businesses, the less they will find it worthwhile to do to avoid those taxes. In both instances, we move to the left-hand portion of the curve when people decide to pay taxes and more money is gathered in by the government. When rates fall, people stop spending on lawyers and accountants and instead pay the tax, or they set up the business, make more money and pay more tax, in each instance keeping more for themselves in the process.

And so we arrive at the recommendation from ConservativeHome. Estorick writes, ‘How fair is it to cap the taxes of the highest income earners when so many other families are facing difficult times? The answer is, as optimal tax theory has shown, lower tax rates at the very top of the income distribution can potentially lead to higher income earners paying more, not less, tax. Given this, perhaps this is exactly the sort of approach that a fair tax system should take?’ He’s concerned that lowering the taxes paid by the wealthiest is not a politically popular thing to do, but if more revenue is raised, then do we really care? He sees an equitable solution in the 2001 McLeod Review, commissioned by the Labour government, which recommended a cap, in nominal terms, on individual income tax liabilities. Estorick sees this proposal as lowering the rate of tax paid by many top earners, thereby moving us off the downward slope of the Laffer curve and back towards the top.

But it’s worthwhile to examine just what kind of mitigating effects we can expect from a nominal tax cap, given the phenomena which cause the Laffer curve, and whether those effects fall in line with the rationale for such a cap: that it will raise more revenue for HM Treasury. In the case of ‘diminishing marginal utility’, when someone’s tax liabilities have reached the nominal cap, taxes will no longer be a consideration in their decision to work to increase their income. However, the Treasury will receive precisely no benefit from the person’s decision to do additional work. In lowering the ROI of tax avoidance, the policy is more successful. Those whose liabilities will inevitably hit the cap will forgo the cost of their tax avoidance measures. When liabilities may come in under the cap, the cost-benefit calculus remains much as it would be without one. But in either case, does the avoidance of tax avoidance on the part of these taxpayers create additional tax revenue? No. The cap is a cap on how much the government will raise from any given individual.
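The revenue arithmetic behind this point can be sketched directly. The cap level, the flat 40% top rate, and the incomes below are all illustrative assumptions used only to show the mechanism:

```python
# Hypothetical sketch: a nominal cap on an individual's income tax liability.
# The cap, the 40% flat-rate approximation, and the incomes are assumptions.
cap = 1_000_000.0
rate = 0.40

def liability(income, capped):
    tax = rate * income  # crude flat top-rate approximation, for illustration
    return min(tax, cap) if capped else tax

# A taxpayer already past the cap decides to do extra work worth 500,000.
base_income = 3_000_000.0
extra = 500_000.0

gain_to_treasury = (liability(base_income + extra, capped=True)
                    - liability(base_income, capped=True))
print(gain_to_treasury)  # 0.0: the additional work yields the Treasury nothing
```

The taxpayer now faces no tax on the marginal effort, so the incentive works, but every pound of the resulting income stays with the taxpayer rather than flowing to the Treasury.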

There exists a single tax avoidance measure that the policy addresses: avoiding UK income tax by not living in the UK. Capping income tax will tend to reduce the ROI of relocating to a lower tax jurisdiction. The policy is designed to sustain or increase the number of taxpayers. It fails, however, to capitalize on any of the other alterations of behaviour which result from a lowering of taxes: increased taxable incomes and an increased willingness to pay tax on current income.

Turbulent Priests and the Rule of Man

By Adam Bates

In 1170, Thomas Becket, the Archbishop of Canterbury, was cut to pieces by four supporters of King Henry II after they overheard the king cursing the archbishop’s name.  “Will no one,” legend records the king as saying, “rid me of this turbulent priest?”  History suggests that this was not an order from Henry, but merely an exasperated utterance.  His supporters, however, did not take it that way and subsequently rid the king of the priest by chopping off the priest’s head.  There was no charge, no trial, just a declaration and an execution.

One would expect Americans to consider such a story an egregious example of the evils of absolutism and the rule of man.  No legitimate system of government, they might say, can invest in one man the authority of judge, jury, and executioner.  The ruler does not simply get to kill those who annoy him.  Our entire American system and the liberal thought out of which it sprang insist that such a concentration of powers in one person is anathema to a free and moral society.  Such concentrations exist only in despotic and tyrannical societies.

Becket Assassinated

Americans can be forgiven for thinking that way.  Not until last fall did it become so painfully obvious that our society is not so far removed from Henry’s.  On September 30th, 2011, another ruler (to wit, ours) rid himself of another turbulent priest, and he did it purely by executive fiat, without regard for the Constitution, the principles that begat it, or the rule of law.

Witness reports suggest that Anwar al-Awlaki, an American citizen raised in New Mexico, was killed in a U.S. drone strike in Yemen.  al-Awlaki has long been considered an “enemy combatant” by the executive branch, and is accused of providing moral support and spiritual guidance to men who committed terrorist attacks against the United States.  He is not alleged to be a combatant in the conventional sense of the term; he didn’t fire guns or make bombs, he talked.  He told people things that the U.S. government considered threatening, and today they killed him for it.

How does one become an enemy combatant?  There is no hearing, no burden of proof, no judge or jury.  The president’s administration simply declares you an enemy and then you are so.  There is no appeal once the designation has been made.

How is an enemy combatant sentenced to death?  There is no hearing, no burden of proof, no judge or jury.  The president’s administration simply declares that you should die, adds your name to a list, and then sets about killing you.  There is no appeal.

How is the sentence carried out?  There is no hearing, no burden of proof, no judge or jury.  The president’s administration simply finds you and fires missiles at you until you are dead.  There is, as you might imagine, no appeal (not temporally anyway).

The Obama administration declared al-Awlaki an enemy combatant, the Obama administration sentenced him to die, and the Obama administration effected his death.  There was no process, there was no evidence presented, there was no judge to whom to present, and there was no appeal (repeated attempts by al-Awlaki’s father to prevent the assassination of his son were dismissed on the grounds that to respond to the allegation the government would have to reveal state secrets and jeopardize national security).

The authority the president asserts when exercising this immense group of powers is the Authorization for Use of Military Force that followed the 9/11 attacks.  The AUMF purports to give the president the authority to use “all necessary means” in pursuit of those who were responsible for 9/11.

Even granting the staggering assumption that such a broad interpretation of this authority is Constitutional, what are the limits on this power?  If the AUMF gives the president the power to kill American citizens in Yemen, doesn’t it also give the president the power to kill Americans in America?  Certainly there is no limiting language in the law itself.

The government will claim that it would never do such a thing within the borders of the country.  But is that the extent of my protection?  Is the only reason the president is not allowed to declare me an enemy and have me summarily killed, the fact that it would be unpopular?  Doesn’t the Constitution do more to protect the lives of its citizens than that?  Surely it must, or else it protects nothing at all.

Well, the argument goes, Anwar al-Awlaki was an enemy, and he deserved to die.  That may well be the case, but it is a case that must be made rather than assumed.  The Constitution explicitly provides more protections to Americans accused of making war against us, not fewer (and certainly not none).  Our Constitution, our principles, and the rule of law demand that the president not be vested with the power ours exercised today.

The fact that we didn’t care for this turbulent priest is no justification for ceding to the king the power to kill all the priests he pleases.

On Corporate Financing

Annette Poulson has written a survey of the literature on how corporations choose to finance themselves. The following are some comments on that essay.

We are all familiar with the variety of motivations for taking on personal debt, whether it be to facilitate consumption (credit cards) or to leverage cash flow for investment (home loans), but for what reasons do companies choose to take on debt, especially when they have the ability to raise capital through further equity issues?

From the investor’s perspective, debt level has long been a metric to pay attention to. Luminaries such as Benjamin Graham, Peter Lynch and Warren Buffett have all suggested that, in some ways, the level of debt a company operates under can be an indicator of company health. So why debt finance, and what, if any, effect does debt structure have on company value?

From the ‘cash-flow’ perspective of company valuation, debt financing may be said to have no effect on the value of a company. Some call this the ‘irrelevance proposition’: that debt-to-equity levels do not have an effect on the overall value of the company. The reasoning goes something like this. The same cash flows bought at the same price should be equally valued. Say you own a 10% stake in a firm. The firm holds zero debt. You are entitled to 10% of the profits of the firm, revenues less expenses. Now imagine an identical firm with identical cash flows. You have a 10% equity stake in that firm as well, but this firm decides to raise capital through a bond issue. In creating this liability, the value of your stake is lessened: the profits of the company shrink because the interest payments on the debt are entered on the expense side of the ledger. Instead of registering as profits, that cash flow will go to debt holders. But if you were to own 10% of this debt issue as well, then ex hypothesi you would own the same cash flow as you do with the company with no debt and, by our stated valuation method, the debt plus equity in the latter case would be valued the same as the equity in the former case. The point here is that if the same cash flow can be bought with the same capital, then the debt vs. equity structure shouldn’t be important. Investors that have the same stakes (10% equity in one case, 10% equity and 10% debt in the other) see the same profits, enjoy the same payouts, and so value the two companies the same.
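The bookkeeping behind the irrelevance proposition can be checked with a toy example. The operating profit and interest figures below are invented purely for illustration; exact fractions are used so the comparison is free of rounding noise:

```python
from fractions import Fraction

# Toy check of the 'irrelevance proposition'; all figures are invented.
share = Fraction(1, 10)   # a 10% stake
operating_profit = 1000   # identical cash flow for both firms

# Firm 1: all equity. The stake simply receives 10% of profits.
stake_firm1 = share * operating_profit

# Firm 2: identical cash flows, but partly financed by a bond issue
# paying, say, 300 in interest. Equity holders split what remains,
# so holding 10% of the equity AND 10% of the debt recreates the
# same claim on cash flow as the 10% equity stake in firm 1.
interest = 300
stake_firm2 = share * (operating_profit - interest) + share * interest

print(stake_firm1)  # 100
print(stake_firm2)  # 100
```

Same capital, same cash flow, same value, regardless of how the firm splits its financing between debt and equity.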

But things are never so simple. When taxes are considered, we see that in the real world the cash flows of the companies won’t be identical, and clear incentives are introduced for certain behaviours. The first consideration is that corporate income taxes apply to profits, while interest payments on debt are considered expenses. Corporations can therefore afford to reward investors in their debt at a higher rate than equity investors, as payments to equity holders are taxed at the corporate level and payments to bondholders are not. In effect, there exists a subsidy in the amount of the corporate tax rate for financing via debt.
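The size of that subsidy, often called the ‘tax shield’, can be illustrated with hypothetical figures. The 30% corporate rate here is an assumption for the sake of the arithmetic:

```python
# Hypothetical illustration of the debt-financing subsidy (the 'tax shield').
# Integer percentages keep the arithmetic exact; the 30% rate is assumed.
CORPORATE_TAX_PCT = 30
pre_tax_cash = 100  # cash available to reward an investor

# Paid out as interest: it is an expense, so it escapes corporate income tax.
to_bondholder = pre_tax_cash

# Paid out to equity: profits are taxed first at the corporate level.
to_shareholder = pre_tax_cash * (100 - CORPORATE_TAX_PCT) // 100

subsidy = to_bondholder - to_shareholder
print(to_bondholder)   # 100
print(to_shareholder)  # 70
print(subsidy)         # 30, the corporate tax rate applied to the payout
```

Every unit of cash routed to bondholders rather than shareholders escapes the corporate levy, which is precisely the subsidy described above.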

However, we cannot look only at the incentives created by one portion of the tax code while ignoring what incentives might be created by the rest of the code. The irrelevance proposition discussed above, that debt to equity ratios per se aren’t relevant to a company’s overall value because either way investors own the same cash flow, was dependent on investors actually receiving the same cash flow. Corporate taxes make it cheaper for a company to use their cash flow to service debt rather than pay shareholders.

In addition, we need to pay attention to the tax consequences of what investors receive. Investors will prefer to be compensated in a way that gets counted as a capital gain rather than ordinary income. Interest payments on debt are counted as ordinary income. Gains on equity that go unrealized for a year or more are counted as capital gains and are taxed at a significantly lower rate. Investors will have a preference to be compensated in equity gains to the tune of the difference between their marginal rate and the capital gains rate.
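The size of that preference follows directly from the two rates. The 35% ordinary-income rate and 15% capital gains rate below are assumed values chosen only to make the comparison concrete:

```python
# Hypothetical sketch: after-tax value of 100 of investor compensation.
# Both rates are assumptions for illustration, not any real tax schedule.
INCOME_TAX_PCT = 35     # assumed marginal rate on ordinary income
CAPITAL_GAINS_PCT = 15  # assumed long-term capital gains rate
payout = 100

as_interest = payout * (100 - INCOME_TAX_PCT) // 100       # bond interest
as_equity_gain = payout * (100 - CAPITAL_GAINS_PCT) // 100  # long-term gain

print(as_interest)                   # 65
print(as_equity_gain)                # 85
print(as_equity_gain - as_interest)  # 20, the rate difference on the payout
```

The investor keeps the difference between the two rates, so identical pre-tax compensation is worth materially more when it arrives as a capital gain.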

So, overall, it is in the best interest of companies to finance investment through the reinvestment of profits. The money is then not taxed on the corporate side and investors, instead of seeing the income, see a long term capital gain. But, if there is not sufficient excess revenue inside the company to make the desired investments, outside capital must be sought. In those instances, it makes sense to debt finance, given the subsidy created by tax treatment.

However, just as the cost of debt for individuals is determined by their ability to repay their debt, so too are companies constrained by current and expected cash flows when borrowing. The need for the company to remain solvent becomes a limiting factor for the amount of debt financing that is possible. The increasing cost of additional debt as overall levels rise will be a consideration when a firm chooses how to finance investment. Those increasing costs are driven by particular concerns on the part of bondholders. One such concern is that shareholders of a company with a large debt burden and faced with insolvency will be much more likely to head to the roulette table with the remainder of the company’s assets, taking on far more risk when the bondholders might prefer an orderly liquidation. As a result, borrowing past a certain level is likely to be cost prohibitive for companies in many situations.

For companies with long records and stable cash-flows, the level of leverage that they can reach is likely to be much higher than for newer companies, who may have heaps of earnings growth potential but don’t currently operate a business large enough to raise the amounts they want from the money markets. In such situations, or when a company has reached a high level of leverage and still wants to raise capital, new issues of equity are likely to be competitively priced relative to another bond issue.

And so, from our account of the incentives faced when deciding how to raise capital, we might make the following predictions. Companies wishing to make investments will first do so out of spare revenue. Then they will look to money markets to access as much as possible at a reasonable rate. Finally, especially if there is high expected earnings growth, thereby making their equity particularly attractive while their balance sheet may not be, they will look to the equities markets.

Operation Fast and Furious, the War on Drugs, and Gun Control

Last year, the public became aware of a program by the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) known as Operation Fast and Furious (F&F). Most controversially, as a part of the program, the ATF knowingly allowed suspected arms traffickers to transport thousands of guns to Mexican drug gangs. In December 2010, the program backfired when two of the guns it involved were found at the murder scene of U.S. border patrol agent Brian Terry.

Several months later, after the disclosure of this program, which has led to the deaths of hundreds of innocent people, many gun rights advocates have asserted that the program was secretly designed to increase violence in Mexico, thereby creating a justification for more gun control. Most notably, the National Rifle Association (NRA) published an article contending Operation F&F showed that President Obama has a long-term, subtle plan to substantially enhance gun control in a possible second term.

Yet the evidence fails to conclusively link F&F with a premeditated attempt at gun control policy by the Obama administration. Though as more information becomes available, the operation looks continually worse, and this rather Machiavellian possibility seems more likely.

Assistant Attorney General Lanny Breuer, in his testimony before Congress after the disclosure of F&F, blamed the Second Amendment for F&F. As a cabinet member representing the Obama Administration, Attorney General Eric Holder—who desired in 1995 to “brainwash” people to support gun control and contended that terrorists crashing planes into buildings on 9/11 showed the need for more gun control—also treated F&F as evidence of the need to impose tougher gun regulations. In fact, without Congressional input, the Obama administration strengthened gun control regulations in four states bordering Mexico, and a court subsequently approved the policy. Officials used the consequences of F&F to justify gun control policy even though the secret program had already been disclosed to the public.

This program was never supposed to be public knowledge. In fact, when asked about the program in early 2011, the ATF, on January 25, falsely denied sending guns to Mexico, and on February 4th, Assistant Attorney General Ronald Weich sent Congress a letter falsely denying claims that the government helped arm drug gangs.

After misinforming people regarding the details of the program, the Department of Justice (DOJ) repeatedly denied responsibility for F&F, portraying it as merely a local matter. Despite these repeated assertions, Congressional investigations unearthed information showing that the ATF Director Kenneth Melson was briefed weekly on the operation and that the heads of virtually every law enforcement component of the DOJ, such as the FBI and the Drug Enforcement Agency, may have known of the program.

It became apparent with the discovery of more information that both the Assistant Attorney General and the Attorney General, who both testified that F&F showed the need for more gun control, had prior knowledge of the operations. In the case of the Assistant Attorney General, he eventually admitted to having knowledge of F&F, though he claims that a failure on his part to make obvious connections stopped him from understanding the full scope of the program. In a much more damaging revelation, the Attorney General, who claimed only to have heard of the operation a few weeks before May 3, 2011, was briefed on the program on July 5, 2010—nearly a year earlier—via documents addressed directly to him.

When remarking on these discoveries, Representative Darrell Issa correctly observed that even though only around 10% of the information on F&F has become available to Congress, “we find damning evidence that high-ranking people in Justice knew all along and not only didn’t stop this program, but believed in it.”

If the plan to keep this program a secret had worked, the case for using it to promote gun control—as advocated by members of the ATF while the program remained a secret—would be much stronger. Whether President Obama personally knew about the program or not, the increased violence resulting from the program bolstered his repeated assertions that guns from the United States threaten stability in Mexico and that, therefore, the federal government has a responsibility to do something about it.

In early April, Mexican President Calderon pleaded for President Obama and Congress to tighten gun control, particularly by reinstating the ban on assault weapons that expired in 2004. Given that Obama referred to the expiration of the ban at the time as a “scandal,” the President clearly sympathizes with Calderon’s request. If F&F had remained a secret, President Obama would have a much stronger case in calling for legislation to ban assault weapons on account of the violent drug war in Mexico.

Yet, the drug war itself represents a failed government policy often used to justify gun control. Although President Calderon blames the violence in Mexico largely on American guns, violence has spiked since 2006 in places where the Mexican army and federal police forces have intervened, drastically increasing the local overall murder rate in recent years. In contrast, the nation of Mexico as a whole has a lower murder rate than in previous decades, and the murder rate has continued to decline in line with long-term trends in the regions of Mexico devoid of any drug war. Rather than resulting from the end of the assault-weapons ban, the instability in Mexico is caused predominantly by the drug war, leading even Mexico’s ex-President and former drug war proponent Vicente Fox to join many other Latin American leaders in calling for its end.

With this realization, gun rights advocates who support the war on drugs should reconsider their view on the matter. The war on drugs and its associated violence is used by politicians as a justification for gun control and so poses a threat to Second Amendment rights. Without a drug war, much of the justification for restricting gun rights generally and assault weapons in particular would disappear.

Unfortunately, many gun rights advocates fail to see this connection. When the Obama administration decided to use the ATF last year to deny medical marijuana users the right to purchase guns and ammunition, proponents of Second Amendment rights, with some notable exceptions like Gun Owners of America, mostly accepted the decision without protest. Even the NRA remained silent on this decision by the Obama administration. By failing to speak up, these gun rights activists essentially accepted that the war on drugs represents a legitimate reason to impose gun restrictions. As the drug war in Mexico and in the United States continues, this ‘war on drugs exception’ to the Second Amendment represents a grave threat to the right to bear arms.

Although present information makes it unclear whether President Obama himself was aware of F&F and whether that program was originally created in part to justify further gun control, it remains a fact that the operation intensified the violence associated with the war on drugs in Mexico. Without a war on drugs, there would have been no F&F, and no one would have been able to use the failed program to justify further gun control.

While F&F represents an acute instance of governmental abuse which was eventually used to justify yet more abuse in the form of strengthened gun control, the continued war on drugs represents a far more serious and transparent danger to the right to bear arms. Second Amendment advocates who support it should seriously reconsider their views.