The Colonial Origins of Comparative Development: A Summary

Acemoglu, Johnson and Robinson (AJR) measure the effect of institutions on income differences by exploiting an exogenous source of variation in institutions. They begin by pointing out that the history of colonisation resulted in different institutions being formed: some countries received extractive institutions (whereby the coloniser would simply extract resources without building proper institutions to promote growth and sustainable living) whilst others received inclusive institutions. Which type a colony received depended upon the ability of colonisers to settle: if a country was full of disease then the coloniser would not wish to live there, but would simply take as many resources as possible and then leave. This was the fate of many African colonies, where Europeans were unable to settle due to high levels of diseases to which they had not developed immunity. On the other hand, regions such as the Americas, Australia and New Zealand were much more hospitable to European settlers, who built colonies where they and their descendants would live. AJR believe these institutions then persisted, and their effects can still be felt today. If they are correct, then they can use the mortality rate experienced by Europeans living in a country as an instrument for that country's institutions today.

Diagrammatically, their theory boils down to:

Settler Mortality -> Settlements -> Early Institutions -> Current Institutions -> Current Economic Performance

A simple regression does indeed show that colonies where Europeans faced higher death rates are a lot poorer today than colonies where Europeans faced lower death rates. The authors then estimate the following structural equation, in which institutions will be instrumented:

Current Economic Performance = a(Institutions) + b(Other Controlled-for Factors) + u (omitted variables and random error)

The estimated value of a tells us how important institutions are for current economic performance. The authors measure institutions using the risk of expropriation (the risk that an investment or property will be seized by the government). Because institutions are likely to be correlated with omitted variables, OLS suffers from omitted variable bias: our estimated a is incorrect because of correlation between institutions and other uncontrolled factors. To overcome this the authors use settler mortality as an instrument, estimating the first stage:

Institutions = a + b(Settler Mortality) + u

and then substituting the fitted values from this first stage into the structural equation in place of actual institutions.
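
To make the two-stage logic concrete, below is a minimal numerical sketch of 2SLS on synthetic data. The variable names and all numbers are purely illustrative (this is not AJR's dataset), and the data-generating process deliberately builds measurement error into observed institutions so that, as in the paper, OLS is attenuated while IV recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # matches the size of AJR's base sample of former colonies

# Illustrative data-generating process (NOT AJR's data):
log_mortality = rng.normal(4.6, 1.0, n)                        # instrument
institutions_true = 9.0 - 0.6 * log_mortality + rng.normal(0, 1, n)
log_gdp = 1.0 + 0.9 * institutions_true + rng.normal(0, 0.5, n)
institutions_obs = institutions_true + rng.normal(0, 1, n)     # measured with error

def ols(y, x):
    """Bivariate OLS; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# OLS of income on measured institutions: attenuated by measurement error.
print("OLS slope:  %.2f" % ols(log_gdp, institutions_obs)[1])

# First stage: regress measured institutions on the instrument.
a, b = ols(institutions_obs, log_mortality)
fitted = a + b * log_mortality

# Second stage: regress income on the fitted values (2SLS point estimate;
# proper 2SLS standard errors need a correction not done here).
print("2SLS slope: %.2f" % ols(log_gdp, fitted)[1])
```

The first stage isolates the variation in institutions driven by settler mortality; because that variation is (by assumption) unrelated both to the measurement error and to omitted determinants of income, the second-stage slope is a consistent estimate of the true effect.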

They find that mortality rates faced by settlers over 100 years ago explain around 25% of the variation in current institutions.

Overall, they find that their IV estimate of the effect of institutions (0.94) is greater than their OLS estimate (0.52), which suggests the OLS estimate is biased downward, and thus that measurement error in the institutions variable is more important than reverse causality or omitted variable bias. Acemoglu et al conclude that the results show a “large effect of institutions on economic performance”, with the instrument explaining 25% of the variation in today’s income.

It should be noted that Albouy criticises this use of IV: he argues there are measurement errors in the settler mortality data, which is thin and relies on records of soldiers, bishops and labourers. He points out that of the sample of 64 colonies, only 28 have their own mortality rates, with the rest being assigned the mortality rates of other countries judged to be similar in disease environment.

Economic Growth: Where does it come from?

One of the fundamental questions of economics is: why are some countries rich and others poor? Why do some countries experience rapid growth which allows them to catch up with the economic giants of the world, whilst others are relegated to the bottom and are unable to jump on the growth train? Is it due to luck, geography, culture or institutional factors? This article explains some of the different theories of economic growth, beginning with the Solow (neoclassical) growth model, before criticising that model and suggesting that endogenous growth models have more to tell us about the growth phenomenon. We highlight the pitfalls of each model, but conclude that the main reason incomes differ between countries is institutional differences.

The Solow Model

According to the World Bank, 50% of the world population have only 10% of world income. Norway is 98 times as rich as the Democratic Republic of Congo and 24 times richer than Bangladesh (correcting for PPP). We begin to answer our question of what causes economic growth by using the Solow growth model. This model assumes that the capital stock depends on the level of investment (which is itself determined by the amount of savings in the economy) minus depreciation, and that the other neoclassical assumptions (such as profit-maximising firms) are satisfied. The model then looks at what happens along a balanced growth path (BGP), characterised as the situation in which capital, output, consumption and wages all grow at constant rates and the return to capital is constant. After some mathematical derivations we find that capital grows at the rate n + g, where n is population growth and g is the growth rate of technology. Investment is used to replace depreciated capital, to equip new workers (population growth), and to upgrade capital (technological growth). Per capita capital growth must then be equal to g, which tells us that increases in capital per worker originate only from technological growth. Further derivations tell us that the growth rate of output equals the growth rate of capital, and thus output per capita grows only at the rate g: any change in growth originates from new developments in technology.
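
For reference, the key equations behind these statements can be sketched as follows (standard textbook notation, with k = K/AL denoting capital per effective worker; this is a summary of the usual derivation rather than anything beyond the model described above):

\dot{k} = s k^{\alpha} - (n + g + \delta)k, \qquad k^{*} = \left( \frac{s}{n + g + \delta} \right)^{\frac{1}{1-\alpha}}

At the steady state k*, capital per effective worker is constant, so aggregate capital and output grow at n + g whilst capital and output per worker grow at g.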

Before we summarise the findings of this model we need to distinguish between level and growth effects. Imagine that the balanced-growth-path level of output is indexed at 100. A storm which temporarily destroys capital might push output down to, say, 95, and we then see growth in output as the economy returns to its balanced-growth-path level; this growth is only temporary, and once we return to the BGP there is no further extra growth. Alternatively, we may increase our savings rate so that more is invested, which produces a level effect: the BGP level of output rises to, say, 105, and there is growth whilst we move to this new level, but once it is attained there is no change in per capita growth.

The Solow growth model implies that higher population growth means capital widening (equipping more people with capital) rather than capital deepening, and thus a lower steady-state level of both capital and output per worker; higher population growth should therefore result in lower output per capita. It also predicts that a higher savings rate leads to a higher level of output (and hence faster growth during the transition). Both predictions are verified empirically in cross-country analyses.

In summary, according to this model, the only source of permanent growth is technological advances, but this is something which is left unexplained by the model! The neoclassical model tells us nothing about where technological growth comes from, instead taking it as exogenous!

Empirical Support for the Solow Model

Shortly we will look at how to bring the sources of technological progress inside the model for a richer understanding of growth, but first we consider the empirical success of the Solow growth model. The model does not predict that poorer countries grow faster than richer countries per se (absolute convergence), but rather conditional convergence: countries with similar parameters (and hence similar steady states) converge to similar income levels, so only poor countries with parameters similar to those of rich countries will grow fast. Empirically we see no absolute convergence, whilst there is conditional convergence amongst OECD countries with similar parameters, supporting the model.

Mankiw, Romer and Weil (MRW) conduct a regression analysis to test whether the Solow growth model matches what we witness in reality. They find that about 59% of cross-country income differences can be explained by differences in the investment rate and population growth. An increase in the investment rate (for non-oil countries) raises GDP per working-age person by about 1.42%, whilst an increase in (n+g+δ) reduces it by about 1.97%. The regression produces the correct signs for the coefficients, as we would expect, but it implies an alpha (capital share) of about 0.6, much higher than we observe, suggesting that the Solow growth model isn't entirely consistent with the data. When we take alpha to be 1/3 the neoclassical model predicts a relatively short transition (fast convergence), but the paper finds an observed rate of convergence of only about 2% per year – taking roughly 35 years to cover half of the distance between k(0) and k* – which requires an alpha of 0.75: again too high, suggesting problems with the Solow model.

Noy and Nualsri find no statistical evidence supporting the neoclassical hypothesis that a natural disaster leads to high growth as the economy returns from a deviation below its steady state. Their estimated coefficient for physical capital is positive but statistically insignificant, whilst the coefficient for human capital is negative, suggesting that a negative shock to human capital lowers growth. The authors believe that endogenous growth theory is more compatible with their findings, with Romer's emphasis on R+D activities as a major force behind economic growth appearing "especially relevant". This is due to the emphasis on human capital: when natural disasters threaten human life, the expected return on human capital falls, leading to lower investment in human capital and thus lower long-run growth.

In a similar vein to MRW, Lucas notes that we would expect capital to flow from rich countries to poor countries given differences in the marginal return to capital, but that this is not the case. He concludes that we should add the effects of human capital to the analysis, which greatly reduces the implied differences in the marginal return to capital and explains why we don't see drastic transfers of capital from rich to poor countries.
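
The convergence arithmetic above comes from the standard log-linearisation of the Solow model around its steady state (a textbook result, not specific to these papers): the gap to the steady state closes at the rate

\lambda = (1 - \alpha)(n + g + \delta), \qquad \text{half-life} = \frac{\ln 2}{\lambda}

With α = 1/3 and n + g + δ ≈ 0.08 this gives λ ≈ 0.05, the fast convergence the model predicts; an observed λ of 0.02 (a half-life of roughly 35 years) instead requires α = 1 − 0.02/0.08 = 0.75, the figure cited above.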

We have thus far seen that the neoclassical growth model works qualitatively but not quantitatively: the estimated capital share is too high, the estimated rate of convergence is less than half the rate predicted by the model, and observed interest rate differentials and international capital flows are much lower than the model predicts. Mankiw, Romer and Weil respond by augmenting the neoclassical model with human capital. Increased saving now raises the long-run stock of both physical and human capital, the two being complementary.
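
The augmented specification they estimate is the following (reproduced from Mankiw, Romer and Weil's paper, writing γ for the human-capital share to match the estimates below; s_k and s_h are the investment rates in physical and human capital):

\ln(Y/L) = \ln A(0) + g t + \frac{\alpha}{1 - \alpha - \gamma} \ln s_k + \frac{\gamma}{1 - \alpha - \gamma} \ln s_h - \frac{\alpha + \gamma}{1 - \alpha - \gamma} \ln(n + g + \delta)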

Estimating this equation, the authors find an R² of around 80%, an improvement on the previous 59%. Now a percentage increase (in non-oil countries) in investment raises GDP per working-age person by 6.9%, whilst a percentage increase in (n+g+δ) lowers GDP by 17.3%. The estimated parameters are now α = 0.31 and γ = 0.28, which is consistent with the empirical evidence.

Augmented Solow Growth Model

Unfortunately the story does not end here with us congratulating ourselves on the success of the augmented Solow model (which includes human capital), as there are a number of issues with this study. Firstly, MRW use OLS, which could suffer from endogeneity bias (savings rates and n could depend on Y/L); secondly, A(0) is systematically larger in rich countries, and if it is omitted yet correlated with the regressors we have omitted variable bias; thirdly, Klenow and Rodriguez-Clare show that investment in human capital is mismeasured. There are further, more general issues with the augmented Solow model: growth in technology is still exogenous, and the assumption of perfect competition means there can be no profits (Euler's exhaustion theorem), hence no income to spend on the research and development which can raise growth (Schumpeter), leaving us to question where growth comes from. The results of Mankiw, Romer and Weil also boil down to their definition of human capital: Caselli runs a similar regression but finds that only 20-40% of income differences are explained by factors, with the rest left to TFP (the unexplained residual). Weil uses slightly different data and finds that factor accumulation is responsible for at most 47% of the difference in per capita GDP between countries.
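
Caselli's development-accounting exercise makes this factor/TFP split precise. Writing y = A·f(k, h) and constructing the counterfactual factor-only income y_KH = f(k, h), the success of factors in explaining income differences is gauged by a variance ratio along the lines of

\text{success} = \frac{\operatorname{var}(\ln y_{KH})}{\operatorname{var}(\ln y)}

The 20-40% figure quoted above means this ratio is well below one, leaving the majority of income variation to the residual A.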

Klenow and Rodriguez-Clare update the MRW data and add primary and tertiary schooling; since primary schooling varies less between countries than secondary schooling, their estimates of human capital vary much less across countries than MRW's. Hall and Jones adopt a different methodology from the authors above but reach the similar conclusion that productivity differences are important in explaining international income variation.

Endogenous Growth Theories

We have so far seen that the augmented Solow growth model has its limitations but is not a terrible approximation for growth. However, it attributes growth to technology without telling us where technology comes from. Endogenous growth models bring technology inside the model and thus try to explain where it comes from.

There are two basic ways to deal with the increasing returns to scale that are required to endogenise the accumulation of knowledge: imperfect competition or externalities.

The first generation of endogenous growth models had technological progress as an accidental by-product of investment (learning by doing, externalities or spillovers), whilst the second generation had technology and knowledge consciously developed in the research and development sector by both the public and private sectors.

Technology is knowledge and ideas that improve production; ideas are usually non-rivalrous, whereas many private goods (such as capital goods) are rival. Note also that human capital is rivalrous: if a scientist (or educated person) is working on one project then he cannot be working on another. Technology is usually given as A in our production functions and is an index of the level of technology; it improves as new ideas allow a given bundle of inputs to produce more or better output. This means that production using ideas can exhibit increasing returns to scale, as the use of an idea is not restricted by a finite physical stock: there may be a high fixed cost to discovering a new idea, but the marginal cost of replicating it is zero (the only cost is that of embodying the idea in a rivalrous good). Increasing returns to scale implies that the market will be characterised by imperfect competition. Ideas can be partially excludable; for example, intellectual property introduces a monopoly over a technology, but there are also knowledge spillovers.

Arrow and Romer treat these knowledge spillovers as accidental by-products of economic activity, leading to learning-by-doing models. They separate the production function of individual firms from the aggregate production function and allow for positive externalities from the actions of individual firms/agents to all other firms/agents, without internalising this externality. Depending on the parameters of this model we can either have increasing returns to scale with multiple equilibria (multiple steady states) and no balanced growth path, which makes analysis difficult; we can return to the Solow model; or we can obtain the AK model. The AK model is the simplest form of endogenous growth theory, where Y = AK: graphically we have an upward-sloping sY curve and an upward-sloping δK curve which intersect only at 0, so that apart from this equilibrium we have explosive growth, with the capital stock always growing and growth never stopping. The reason for this explosive growth is that capital accumulation exhibits constant marginal returns rather than the usual diminishing returns. In a simple AK model the policymaker should encourage a higher investment schedule (greater savings) to achieve higher, perpetual growth. In reality such a model is not very likely, because it requires that alpha equals 1, whereas we know that for capital it is about 1/3, and incorporating human capital might possibly raise it to 2/3.
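
A one-line derivation shows where the explosive growth comes from (a sketch under the usual assumptions of a constant savings rate s and depreciation rate δ):

Y = AK, \quad \dot{K} = sY - \delta K \;\Rightarrow\; \frac{\dot{K}}{K} = \frac{\dot{Y}}{Y} = sA - \delta

Growth is therefore constant and positive whenever sA > δ, and a permanently higher s permanently raises the growth rate – exactly the policy implication noted above.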

Perhaps the most established endogenous growth model is Romer’s R+D model which works by incorporating multiple sectors into the analysis.

In the learning-by-doing models technological progress is an unintended by-product of economic activity and knowledge creation is unrewarded. Euler's exhaustion theorem suggests – under a CRS assumption – that there is no profit left over to pay for knowledge creation and technological progress, implying there is no room for a research and development sector in the economy. In Romer's model we have three sectors: a final goods sector, an intermediate goods sector and an R+D sector. The final goods sector employs currently available technology and labour input to produce a homogeneous output good, and is characterised by perfectly competitive markets. The intermediate goods sector buys monopoly rights to the ideas generated by scientists in the R+D sector and produces intermediate goods that are sold as inputs to the final goods sector. The research and development sector generates new ideas with technology and labour.

So this endogenous growth model tells us that the growth rate of the economy is determined within the model, and that the long-run growth rate is increasing in the population growth rate. Hence more people means more ideas, although given that technology crosses borders, one country could reduce its own population growth and still benefit from the externalities of other countries' high population growth. Growth is predicted to be an increasing function of research effort, which is problematic, as research effort has grown tremendously over time whilst growth has been slowing down. The model also tells us that we need positive population growth to sustain growth of Y/L, because of diminishing returns in knowledge production, and that a change in the labour share in R+D has a level effect but no growth effect.
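
These statements follow from the knowledge production function in the semi-endogenous (Jones-style) reading of the R+D model, sketched here in standard notation with L_A the labour employed in research:

\dot{A} = \bar{\delta} L_A^{\lambda} A^{\phi}, \qquad \phi < 1 \;\Rightarrow\; g_A = \frac{\lambda n}{1 - \phi}

With diminishing returns to the existing stock of ideas (φ < 1), sustained growth of A requires a growing research workforce, and a once-off rise in the R+D labour share raises the level of income but not its long-run growth rate.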

These are some good (but not perfect) observations from the model, which is effectively a micro-founded version of the Solow growth model allowing the introduction of a monopolistic sector and rewarding knowledge creation. There are problems in that the rewards to the research sector reflect only monopoly profits and not consumer surplus. Additionally, it is assumed that research workers do not take into account the effect of their behaviour on the rate of discoveries.

The model predicts no conditional convergence, predicts scale effects, and predicts long-run growth effects of policy, all of which are refuted by the empirical evidence. It also focuses on growth rates instead of levels, and explaining levels is the key problem in economic development.

So what causes growth?

Thus far we have seen that the neoclassical growth model tells us that technology causes growth, but doesn't tell us where technology comes from. The endogenous growth models tell us that technology – and therefore growth – comes from having larger populations (more ideas), greater savings rates and, fundamentally, strong property rights so that patents are protected. This implicitly requires strong institutions to promote patents and protect the property rights of investors.

North places particular emphasis on institutions as an important determinant of growth; “Institutions are the underlying determinant of the long run performance of economies”. He defines institutions quite broadly to include laws, constitutions, customs, tradition, religion, constraints and government policies which “shape the interactions of economic actors”.

We can use different measures of institutions to see their effect on income, but we cannot use a simple regression such as Y = αIns + βX' + u, because reverse causality and omitted variables (endogeneity) mean OLS would not give a consistent estimator. A solution is to find an instrumental variable that is correlated with institutions but uncorrelated with the error term (the unobserved component of Y).
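
Formally, in the notation of the regression above, a candidate instrument Z must satisfy two conditions:

\operatorname{Cov}(Z, Ins) \neq 0 \quad \text{(relevance)}, \qquad \operatorname{Cov}(Z, u) = 0 \quad \text{(exogeneity)}

Relevance can be tested directly in the first stage; exogeneity cannot be tested and must be argued for, which is where the historical reasoning below does its work.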

Acemoglu, Johnson and Robinson do this using the mortality rates of soldiers, bishops and sailors stationed in colonies as an instrument. They see this as valid because the mortality rates of these people will have no effect on current GDP except through their impact on institutional development. This will be a relevant instrument so long as it is correlated with measures of current institutions.

They use risk of expropriation (how likely it is that private foreign investments are expropriated by government) as their indicator of current institutions and control for variables such as latitude, continental dummies, legal origin, geographical variables and different languages. They believe that settler mortality determined settlement strategy, which shaped early institutions, which in turn affect current institutions and thus current levels of development.

They find that their IV estimate (0.94) is greater than their OLS estimate (0.52), which suggests downward bias in OLS, and thus that measurement error in the institutions variable is more important than reverse causality or omitted variable bias. Overall Acemoglu et al conclude that the results show a “large effect of institutions on economic performance”, with the instrument explaining 25% of the variation in today’s income.

Hall and Jones believe it is social infrastructure which affects growth. They define social infrastructure as the institutions and policies that determine the economic environment in which individuals accumulate skills and firms accumulate capital and produce. Olson argues that growth differences cannot be fundamentally cultural, because North and South Korea share fundamentally the same culture yet have very different growth rates. The social infrastructure argument gains support from Acemoglu et al as well as from Dell's work on the persistence of institutions and their effects. They discuss the business environment (e.g. the prevalence of bribes and the length of time needed to set up a business) as being embedded within this social infrastructure, which affects the level of growth.

Conclusion

We started by noting the large differences in incomes between countries and then used the neoclassical (Solow) growth model to try to account for such differences. It provided qualitative support but was not particularly successful quantitatively. We introduced human capital, which improved the quantitative predictions but still didn't solve the issue of interest rate differentials and still assumed that growth was driven by exogenous technological progress; there were also issues in how human capital was measured, which affected the empirical outcomes. We then turned to spillover effects and externalities to explain where endogenous growth comes from. This created problems in that we would expect to see scale effects, which are not empirically observed; Romer's R+D model was similarly not particularly good at explaining cross-country differences in income levels and did not explain convergence. Finally, we looked at the limits to growth and incorporated land and non-renewable resources into our neoclassical model to show that growth may not persist forever.

Did small scale firms inhibit Victorian Growth?

Britain’s manufacturing firms have been accused of remaining family-run and small scale in the period 1850-1914, so ignoring the benefits of the large corporation evident in the USA. Discuss whether this represents a form of entrepreneurial failure by the owners of British firms.

Chandler identifies that corporations in America were vertically and horizontally integrated, invested in new technology and produced the latest industrial wares such as electricals, chemicals and automobiles. Britain, according to Elbaum and Lazonick, was characterised by an “atomistic organisation of production”, with many small firms run by families. This is evidenced by the fact that in the 1880s less than 10% of the manufacturing sector was accounted for by the largest 100 firms, whereas the US figure was 22% (Hannah 1983). Although British firms couldn't obtain the economies of scale of their larger American counterparts, their small scale also had many benefits. These benefits and drawbacks are discussed in this essay, and we will find that Britain's firms weren't marked by entrepreneurial failure.

Chandler believed that American firms were more successful than their British counterparts because they were both horizontally and vertically large. Being horizontally large meant they had a big market to sell to with very little competition, which kept prices high and meant the firm could earn supernormal profits; these could be reinvested into new technology, helping the firm grow even larger and reduce its costs further. This contrasts with the British situation, whereby the high degree of competition kept supernormal profits very low, leading to the accusation that British firms didn't invest enough. Vertical integration allowed the firm to bypass the market in the production process and gave it guaranteed access to intermediate goods, thus reducing uncertainty. A downside of this vertical integration was that it required large hierarchies, which some would consider inefficient, as managers had to control production across a large firm with many different functions. Therefore, whilst Chandler argues that American corporations benefitted from economies of scale as a result of being large – keeping prices high (and hence profits, potentially leading to higher investment and greater long-term profit) and costs low – it could be countered that some corporations suffered from diseconomies of scale, with the workforce feeling demotivated in such large firms and communication failures resulting in inefficiencies. Moreover, the lack of competition in the US market would allow inefficient firms to survive, which could itself be seen as an entrepreneurial failure in the US.

Harley rails against Chandler's argument that because British firms weren't large there was some sort of entrepreneurial failure. Drawing on Gerschenkron's ideas, he points out that each country adapts its economy to its institutions and endowments and that there is no single 'correct' way of running a firm. It follows that British firms could be small and American firms large, yet both 'correct', each simply adapting to its different situation. Magee picks this up, saying “British business organisation differed from the American but this does not mean that it was necessarily inferior”.

Harley points out that the reason for vertical integration in the US was to substitute for the prerequisite of good markets. In the UK, historical factors meant that British factor markets were well developed: a firm at the bottom of the supply chain could easily sell its goods at market to a firm producing at the top of the supply chain, and because there were so many firms doing this, faults in the supply chain and monopoly power were less of an issue. Comparatively, the US market system wasn't well developed, owing to the youth of the economy. There weren't many firms producing intermediate goods, so American corporations had to substitute for this prerequisite of a well-developed market, and did so through vertical integration. If we believe this argument then there was no advantage to British firms in being vertically integrated: they already had access to well-developed markets providing intermediate goods, and may have had the additional advantage of purchasing these goods at a lower price than US firms, which faced higher costs given the lack of competition and diseconomies of scale.

A legal framework for horizontal integration didn't exist in the UK, and as a result 'mergers' tended simply to mean firms grouping together to raise prices rather than actually merging and rationalising; the legal framework thus largely prevented horizontal integration during the period concerned. Whilst British firms may not have gained economies of scale directly from horizontal integration, they did gain from external economies of scale, as most firms in an industry were clustered in a geographical area, giving them access to skilled labour suited to their production processes and the ability to learn from other nearby firms.

Furthermore, Magee attempts to dispel the belief that US firms were much larger than their British counterparts. In our introduction we pointed out that in the UK less than 10% of the manufacturing sector was accounted for by the largest 100 firms, whilst the US figure was 22% – a significant difference, but not an enormous one, and it could be argued that there was still plenty of room for US firms to grow even larger. In addition, Schmitz (1995) finds that the largest US firm (US Steel) had a market capitalisation only twice that of the largest UK firm (J&P Coats), which in turn had a market capitalisation twice that of the largest German firm (Krupps). Very few economic historians would argue that Germany suffered from entrepreneurial failure, and yet this nation had smaller firms than both Britain and America. One could contend that the argument that Britain's entrepreneurs failed by remaining small scale is negated by the existence of smaller firms in “successful” Germany. If we change our focus to the size of firms in terms of employment, instead of market capitalisation, we find that between 1906 and 1913 the average manufacturing firm in Britain employed 64 people, compared with 67 in the US and only 14 in Germany (Magee). Magee claims that this evidence could show that “the Chandlerian model does not fit terribly well with British experience”.

There is a belief that high competition in the UK resulted in low supernormal profits, meaning investment was low and Britain didn't adopt US technology, resulting in lower productivity. In reality, the high competition within Britain meant that any innovation which could be profitably implemented in Britain (given its factor endowments and institutional constraints) would have been. For example, in the cotton industry there were claims that Britain should have adopted the ring spindle from America to replace the less automatic mule spindle. But Leunig finds that a quarter of British firms did adopt the ring spindle and, given British constraints, did not make significantly more profit than firms using the traditional mule spindle. If one firm could profitably innovate in Britain then all firms would have to adopt the same technique or be driven out of business, showing that Britain wasn't characterised by a damaging lack of invention: it could simply use foreign inventions if and when profitable. The real reason British firms didn't take on American technology was highlighted by Harley: Britain had access to lots of skilled labour but few natural resources, unlike America, which had access to lots of raw materials but little skilled labour. America therefore substituted capital (using its abundant raw materials) for skilled labour, whilst Britain used its skilled labour instead of relying excessively on raw materials – another example of Harley's substitution for prerequisites in action. There is also the issue of Cantwell's skill lock-in: British labour had become skilled in certain techniques, for example using the mule spindles, and it would have been too expensive to profitably retrain them in a different method.

Looking at who ran the firms, we see that in the US most firms were run by professional managers in a clearly defined hierarchical structure, who were well trained and could delegate roles to unskilled workers. In the UK, firms were largely run by individual entrepreneurs or kept in the family, largely by people with no formal management training. It is argued that the US style was more suitable, as the managers were well trained in profit maximisation and in how best to deal with their labour force, perhaps explaining why the US workforce was much less unionised than Britain's. However, this argument is weak. It is unlikely that well-trained managers (individuals, after all) would be more efficient than the market system which dominated in Britain thanks to high competition. Even if we were to accept that untrained family management is inferior to formal management, it remains the case that British firms could overcome this through the highly competitive nature of the economy, which weeded out inefficient firms. On the issue of unionisation, it is more the case that American firms were able to control their workers because those workers were unskilled. Granted, it helped that the small number of US firms gave them bargaining power against labour, but it was the lack of skills which really prevented unionisation from taking off in America. In Britain, unionisation became strong because workers were highly skilled and hence had labour market power, aided by the vast number of firms and their inability to cooperate to temper union demands. This point does hold weight: high unionisation in Britain may have led to lower productivity if workers resisted technological innovations to keep their jobs – but high unionisation did not come about due to the structure of British firms; it resulted from the skill-set of labour, something out of the hands of entrepreneurs.

Wiener believes that family-led businesses eventually stagnate as they are passed down the generations. He thinks that a firm is established by an entrepreneur who sends his child to public school in order to increase his social standing. The child eventually takes over the business but is anti-industrialist due to his educational background, and thus doesn't innovate within the firm, instead using it as a source of profits to maintain a lavish lifestyle. Consequently the firm doesn't innovate and invest in the latest technology, productivity falls, and foreign rivals overtake Britain. This argument is very subjective and unlikely to be true. Rubinstein actually finds that rich industrialists were more likely to send their children to public school in order to remove them from the inheritance of the firm if they believed the child to be inept, thus preventing poor leadership! Moreover, not many firms lasted until the third generation, so this argument wouldn't be particularly significant even if it were true – only 17% of woollen firms existing in 1875 survived until 1912 (Horrell).

In conclusion, it is unfair to say that British entrepreneurs failed because their firms were small and family-run. Gerschenkron points out that each nation is different and that there isn't a single path for an economy to follow; instead each has to adapt given its institutions and constraints. British firms were seen as being small, which meant they didn't have enough profit to invest in new technology and research and development, but we have seen that the competitive nature of the economy meant that if a technology was profitable, given factor endowments and institutional constraints, it would be adopted, otherwise firms would be driven out of business. Arguments that the family nature of British firms led to entrepreneurial failure also seem flawed; there exist many examples and counterexamples of entrepreneurs failing, but ultimately competition ensured that entrepreneurs were efficient and adopted the best technology for their circumstances. The only area of significant failure was that unions had much more power in Britain than in the US, which had the potential to disrupt the adoption of productivity-enhancing technology, but we have seen that this was more a result of labour being skilled than a failure of British business itself.

Marshallian Industrial Districts and the Rise of the Internet

Recently I attended a Conference for the 40th Anniversary of the Cambridge Journal of Economics where a talk was given on “Industrial Districts, Organisation & Policy”. This article is a summary of some of the discussion from this talk as well as my further thoughts on extending Marshallian Industrial Districts to incorporate the internet.

What is a Marshallian Industrial District?
A Marshallian Industrial District is normally considered a clustering of firms in a similar industry operating from a certain geographic area. Being close to many other firms in the same industry allows a number of benefits, sometimes called benefits of agglomeration or Marshallian atmospheric externalities. These include:

  • Marshall himself saw the benefits of small companies as arising from the division of labour and specialisation, which lead to more efficient use of resources. Smaller companies producing only one good are able to gain specialist experience and knowledge in producing that good, which ought to lead to an increase in output.
  • Clustering of firms in a geographic area means knowledge should flow between firms more easily: through the workforce, through discussions and seminars between managers in coffee-houses or more formal institutions, and along other informal channels. It may seem strange, in a neoclassical framework, that competing firms would discuss and share ideas and techniques, but we can imagine this happening as gossip spreads through the geographic cluster.
  • Customer recognition: clustering means that consumers know where to go if they want a particular good, which can reduce the cost and effort associated with advertising and may increase demand. For example, if a British individual wanted to see a musical in London, they would know to go to the West End, as this is where the clustering of theatres is; such theatres therefore don't need to focus their efforts on telling consumers where to go to see their plays, but only on convincing them to see THEIR play, rather than that of their neighbour.
  • It should be easier to hire a workforce with the correct skills, as labour with the right skills knows where to congregate for a job, reducing asymmetries of information and hopefully shortening unemployment spells for such workers. Furthermore, the sharing of the workforce between firms increases expertise and helps the flow of information in a cluster.
  • Infrastructure and auxiliary firms will be established in the cluster, reducing costs: a clustering of firms in one geographic area makes it more cost-efficient (whether for the private community, or for the public authority responsible for investing tax revenue) to build the infrastructure required for the particular industry, which reduces costs for firms and increases productivity. In addition, auxiliary firms (firms which provide inputs to firms further down the supply chain) will set up near the firms they supply; a clustering of one industry will thus lead to the development of auxiliary firms in the area, reducing costs and making it easier for firms to source inputs.
  • Ottati points out that credit is easier to obtain for firms in the MID, perhaps because they have better access to investors with the knowledge of, and willingness to invest in, that given industry.
  • The ability to accept large orders without fear of being unable to deliver, as these can be passed on to other nearby firms. Consider a textile producer who has been given an order for X amount of clothes, where X is greater than the amount they can produce themselves. Rather than engage in time-costly and expensive investment to fulfil the order (investment which may not be needed long-term), or worse still refuse the order, the firm can simply purchase the part of X it cannot produce from other nearby firms. Without a cluster this may be more difficult, forcing the firm to refuse the order.
  • Kaldor stressed the importance of “joint production between small specialised firms which involves frequent transfer of an unfinished product between numerous specialised firms” which is obviously an important factor in an MID.

Finally, as Konzelmann, Wilkinson and Fovargue-Davies point out, “firms concentrate their initiative and inventiveness on what they do best and establish an environment that improves the overall competitiveness of the locality”.

An MID is thus typically characterised as a geographic area which has a high degree of vertical and/or horizontal specialisation and a reliance on the market mechanism for exchange. In a separate article, I write about MIDs in the Industrial Revolution, and how strong competition explains why the UK didn't have large firms like the USA.

As well as the benefits of MIDs we explored above, there can also be some downsides. Firstly, small firms may be more susceptible to bankruptcy in the event of an economic shock which reduces aggregate demand in the short run, which can lead to hysteresis effects such as permanently lower GDP; larger firms may have more retained earnings and the ability to survive through cross-subsidisation (taking profits from certain sectors to remain present in temporarily loss-making sectors). Secondly, we would only expect an industrial district to form if there were limits to economies of scale: if a firm had increasing returns to scale (it could produce a good more cheaply by producing lots of it) then it wouldn't stay small but would turn into a large firm. Thirdly, there can be congestion effects from a large concentration of firms in one area: if these firms are industrial then traffic may become a serious issue which increases costs and reduces the benefits of agglomeration; air and noise pollution may increase, reducing the desirability of living and working in such areas; and increasing demand for labour and land may lead to higher wages and rents, offsetting the benefits provided by the MID.
How has the internet affected the development of Marshallian Industrial Districts?

The internet has had a major effect on how we live our lives and also on how the economy is shaped. Along with other technological developments and the phenomenon of outsourcing – explored below – it has arguably been responsible for the decline of Marshallian Industrial Districts. This is because the internet and the rise of technology have led to the outsourcing of industry to cheaper industrialising countries, and permitted the expansion of companies into conglomerates which can produce goods internally, rather than relying on a multitude of other companies for input goods.

Of course, it can be counter-argued that outsourcing has led to a split in production processes which means more companies produce inputs which go into the production of a final good; such a question needs to be considered empirically, which is beyond the scope of this article.

More fundamentally, it could be counter-argued that one of the most important MIDs in existence today is Silicon Valley, itself founded on the internet and technological developments. We consider Silicon Valley an MID because it enjoys most of the benefits we examined above, namely the sharing of ideas, the ability to source skilled labour and the necessary infrastructure (in this case fibre-optics and good transport links to aid the commute of the many workers).

The internet, and other developments in transport and technology, have meant that geography is less of an issue nowadays. At first this may seem to hasten the decline of MIDs, but we argue that, on the contrary, it has actually enhanced them: a company can still benefit from Marshallian atmospheric externalities without the congestion costs associated with being confined to a geographic MID. The internet means that ideas and techniques can be transmitted to people across the world, no matter where they are, spreading the externalities typically associated with agglomeration without the need for physical proximity. We may see a rise in cottage industry, with people producing goods and services from their homes, learning how to do so from online websites, without needing to be located near competitor firms. In addition, online forums allow ideas and questions to be posed and answered, again helping small firms to continue production and compete with larger firms.

In a private discussion with Konzelmann and Fovargue-Davies, they argued that the most important aspect of the internet is the opportunities it has opened up for selling. The internet means that small firms are able to advertise and sell their goods online to anybody anywhere in the world, thus increasing demand and making it easier for firms to sell their products without having to invest heavily in a sales division (something we would expect larger firms to be better at). Furthermore, ease of selling via the internet means that it is also easier to source input goods, allowing firms to rely more on the market for their inputs rather than having to integrate vertically (thereby forgoing Marshallian externalities) in order to guarantee the supply chain.

Are Industrial Districts still important?

There are many modern developments which may inhibit the workings of an MID, perhaps the most important being the rise of the internet, discussed above. However, there are other factors, to which we now turn.

We begin by proposing that economies of scale have increased over the last 100 years, since Marshall first proposed the notion of industrial districts. Modern technology and the mass production of goods in state-of-the-art factories mean that we see increasing returns to scale (IRS) for most goods. This might help explain why we see ever-growing companies and oligopolistic markets (i.e. markets occupied by only a small number of large firms). Whilst this may be the case – and a condition of IRS makes the establishment of MIDs more difficult – it isn't universally true for all goods. If we consider certain high-quality, high-value goods – of which only a few are made, either because of low demand or because of the Veblenian* nature of the good – then these will likely be made by small companies which benefit from specialisation and Marshallian atmospheric externalities.

Ottati mentions the decline of local banking as a serious issue for the sustainability of MIDs: with ever-growing banks which focus only on large clients and large cities (becoming London-centric), and which don't attempt to engage with small local clients, there will be a fall in investment in the start-up small firms which could form a Marshallian district. Consider Silicon Valley: without investors prepared to risk their money in risky ventures such an MID wouldn't have developed, and this becomes ever less likely as the dominance of the big banks increases.

The ability of firms to propel themselves further up the supply chain, producing high-quality goods and focusing on technology and R+D, is vital for securing the continuance of Marshallian Industrial Districts. In addition, the internet has allowed MIDs to flourish without the geographic constraints typically felt, meaning that the benefits of MIDs can continue without so many of the costs.

 

*A Veblen good experiences a conspicuous consumption effect, whereby more is sold at a higher price. Consider the example of a Ferrari or an expensive watch: these goods are expensive because the people who wish to purchase them want to signal their wealth and influence to others, and do this by purchasing goods which are expensive for the sake of being expensive (the quality is still very high, but not proportional to the expense of the good).

Was there a Victorian Failure in Manufacturing?

Comparisons of Britain’s labour productivity in manufacturing with that of other industrialised countries, such as the USA and Germany, from 1850-1914 suggest no dramatic decline in this sector during the period. However, labour productivity performance at the whole economy level was poor in comparison to other countries. (a) How can this be explained (b) Does it suggest a failure occurring in the UK economy?

Over the period 1850-1914 there was a decline in Britain's overall labour productivity relative to its competitors: against the US, Britain was more productive in 1870 but was overtaken during the 1890s – US/UK labour productivity in the aggregate economy was 89.8 in 1870, rising to 117.7 in 1910 (Broadberry 2006), demonstrating the ability of foreign nations to overtake Britain on this measure. Zooming in on sectoral labour productivity, however, we see that there wasn't a large change in manufacturing labour productivity: according to Broadberry (2006) the US/UK figures lie in the range 150-193 between 1870 and 1914, so the US had an absolute advantage, but this didn't dramatically widen over the period. According to Habakkuk the reason for this absolute advantage, which was of a magnitude of 2:1 by 1850, was labour scarcity combined with resource abundance in America, which meant that capital was used in place of labour, augmenting labour productivity.

In answering the first question we need to explore how labour productivity is calculated, so we can examine why it changed. Labour productivity in the aggregate economy is the weighted average of labour productivity in agriculture, industry and services, with the weights being the employment share of each sector. Agriculture is considered a relatively low-productivity sector compared with manufacturing and services, so if an economy devotes a large share of its labour to agriculture, we would expect aggregate productivity to be low.
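
In symbols, this is the standard shift-share identity, where s_i is sector i's employment share and LP_i its labour productivity:

LP_{aggregate} = \sum_{i \in \{\text{agr}, \text{ind}, \text{serv}\}} s_i \cdot LP_i

Holding sectoral productivities fixed, moving employment from a low-productivity sector to a high-productivity one mechanically raises the aggregate – the effect traced through the rest of this section.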

This seems to have been the case for the US and Germany: in 1871 Germany had 49.5% of its labour force working in the agricultural sector, and the US even more at 50% (Crafts), compared with Britain's share of 22.2%. This may explain why British aggregate labour productivity was high in 1871 compared to its competitors: Britain had a greater share of resources employed in higher-productivity sectors.

Over time the US and Germany shifted resources from agriculture into manufacturing and services, which, as noted, have higher productivities; mathematically, aggregate labour productivity had to rise as a result. This shift was enabled by high productivity growth in agriculture, which created surplus labour in that sector that could be transferred to industry, where there was a shortage of skilled labour. By 1911 agriculture had declined to 34.5% of labour employment in Germany and to 32% in the US (Crafts); although still higher than the UK's 11.8% share, this was a significant drop from 1871. In the US, employment in industry rose from 24.8% to 31.8% over this period, and in services from 25.2% to 36.2% (Crafts).

It is relevant to point out that Broadberry finds the correlation coefficient with aggregate productivity levels to be 0.98 for services, 0.85 for industry and 0.65 for agriculture. He shows that by 1910 US labour productivity in industry was 93.2% greater than Britain's, whilst in services it was 7.4% greater. This is further evidence that if the US and Germany were seeing greater employment in manufacturing and services, at the expense of agriculture, they would also see higher aggregate labour productivity.

Moreover, whilst Germany and the US were moving workers from agriculture into manufacturing and services, Britain was doing the same but moving workers more towards services, which had lower productivity than manufacturing. Over the period 1870-1910, 10% of the labour force moved out of agriculture, with around 7% of this going to services and only 2% to manufacturing (Crafts). This doesn't explain a decline in productivity, but does show why productivity may not have been increasing as rapidly as elsewhere.

We have already seen that a portion of the lead that competitors had in labour productivity growth was due to their sectoral shifting of resources – something over which Britain had no control – and to the British move from agriculture towards the service sector rather than manufacturing. Broadberry finds that Britain was also overtaken by its competitors in service-sector labour productivity, which, coupled with the competitors' shift in sectoral employment, could explain low UK aggregate labour productivity despite fairly consistent manufacturing productivity. In 1870 US/UK labour productivity in services stood at 85.9, rising to 107.4 by 1910 (Broadberry); whilst this doesn't seem like a dramatic relative decline, it is accentuated by the shift of US and German resources towards this sector. The reason given for this decline in relative service productivity is that the UK operated a low-scale, customised, high-margin, network-orientated production process, whereas the US was developing a high-scale, standardised, low-margin, hierarchical production process with higher productivity.

Turning to whether this fall in relative aggregate labour productivity was due to British failure, we will first examine why the UK continued to operate customised services compared with the US's standardised production, and then look at manufacturing to see if Britain failed there.

America developed new technology which facilitated the industrialisation of services, allowing an increase in productivity through selling a standardised output at low cost. This required a hierarchical organisational structure and strong management to conduct business across that vast nation and to facilitate the selling of a standard product to the masses. This industrialisation of services was slow in Britain because she had lower levels of management education (focusing instead on skills) and because her many small firms operated on networks of trust rather than commanding economies of scale as large firms could. It might be argued that this was a British failure: she didn't invest enough in the education of management, nor in technologies such as the telegraph, typewriter and calculating machines which allowed higher productivity in the service sector. In reality, the lack of education was not the fault of the economy, but potentially of the government.

We have seen that the UK was evolving to expand its service sector, which explains why productivity wasn't growing quickly; this reflects the fact that Britain had higher incomes and wanted to raise its standard of living by enjoying more services such as education. Whilst this would appear to reduce aggregate labour productivity (had workers shifted towards higher-productivity manufacturing instead, aggregate productivity gains would have been greater), it can't be seen as a failure if it increased the living standards of the population. Therefore, if this factor were responsible for the low relative aggregate labour productivity, it wouldn't amount to a failure of the UK economy.

In manufacturing we know that the US had a labour productivity ratio over Britain of about 2:1 throughout this period. Germany's position was much weaker: the Germany/UK productivity ratio was below 1 before the 1890s, after which Germany overtook Britain. Whilst there wasn't a huge change in the US figure, it is vital to understand why Britain lagged so far behind in this area if we are to judge whether there was overall failure. The short answer is that of Rothbarth and Habakkuk: Britain, and to an extent Germany, had lower manufacturing labour productivity than the US because America used more capital, owing to differing factor prices which ultimately reflected scarcity.

Ames and Rosenberg point out that the British comparative advantage was in labour intensive commodities, as a result of lots of skilled labour but fewer natural resources, whilst the US advantage lay in capital intensive commodities because of an abundance of natural resources and land but a lack of skilled labour.

We can see this on an isocost diagram (adapted from Leunig): the US was at a point where it paid to use a lot of capital and little labour, whilst Britain was at a point where it used a lot of labour but little capital.

[Figure: isocost diagram comparing US and UK factor choices in manufacturing, adapted from Leunig]
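
To make the factor-price logic concrete, here is a minimal sketch in Python (my illustration, not Leunig's) of the cost-minimising capital–labour ratio under an assumed Cobb-Douglas technology; the wage and rental figures are purely illustrative.

    alpha = 0.5  # capital's share in an assumed Cobb-Douglas technology Y = K^alpha * L^(1-alpha)

    def capital_labour_ratio(w, r):
        # Cost minimisation equates the ratio of marginal products to the
        # ratio of factor prices, giving K/L = (alpha/(1-alpha)) * (w/r).
        return (alpha / (1 - alpha)) * (w / r)

    print(capital_labour_ratio(w=10, r=4))  # dear labour, cheap capital ("US"): 2.5
    print(capital_labour_ratio(w=5, r=8))   # cheap skilled labour, dear capital ("Britain"): 0.625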

Building on this point, the US had a further advantage in that it had access to a larger market (one increasingly shut off to Britain through the use of trade barriers) which was more standardised. In Britain, by contrast, demand was more customised due to class identities, making mass production difficult. In the US this standardised mass market meant that goods could be produced with cheap unskilled labour using special-purpose machinery, which needed long production runs to be feasible. This benefitted the US because labour, in particular skilled labour, was scarce and thus expensive, whilst an abundance of natural resources made its capital goods a lot cheaper. In Britain, on the other hand, there was a large supply of skilled labour, following on from the Industrial Revolution and Britain's extensive apprenticeship system (2.5% of the labour force were apprentices compared with the US figure of 0.3% – Broadberry), but natural resources were more expensive (especially as the cheap, easily accessible coal had already been depleted), and so Britain maintained a production process reliant on skilled labour using only general-purpose technology. This suited her well because the fragmented demand market meant there wasn't a large enough market to justify investment in special-purpose technology.

We can draw on an identity from macroeconomics to show that a more capital-intensive production process results in higher labour productivity. Starting from Y = K^α · L^(1−α) and dividing both sides by labour, we obtain output per worker as a function of capital per worker:

Y/L = K^α · L^(1−α) / L = (K/L)^α

We can see that if capital per worker, K/L, is higher then, ceteris paribus, output per worker will also be higher, thereby demonstrating that the capital-intensive nature of the US economy gave it higher labour productivity.
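
A quick numerical check of this identity (a sketch in which alpha is assumed to be 0.3 purely for illustration):

    alpha = 0.3  # capital's share, assumed for illustration

    def output_per_worker(K, L):
        Y = K**alpha * L**(1 - alpha)   # Y = K^alpha * L^(1-alpha)
        return Y / L                    # equals (K/L)**alpha

    print(output_per_worker(100, 50))   # k = 2 -> ~1.23
    print(output_per_worker(200, 50))   # k = 4 -> ~1.52: more capital per worker, higher output per worker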

McCloskey and Sandberg sum this point up well: British firms chose more skilled-labour-intensive production methods because it paid to do so, and so there wasn't, in this respect, a failure of the UK economy. This is supported by the fact that Germany also had low labour productivity in manufacturing relative to the US, despite adopting new technological processes, because it had factor endowments similar to Britain's.

However, Chandler disagrees with the view that Britain didn't fail, arguing that US manufacturing success stemmed from investment in production, marketing and management. Had British firms invested in marketing, they might have been able to extend their market and encourage the purchase of standardised products, which would have made new capital-intensive machinery feasible and increased labour productivity. This argument is flawed because Britain wouldn't have been able to invest in either production or marketing due to the atomised nature of industry (Lazonick and Elbaum), which led to high competition, low or zero supernormal profits, and thus little profit with which to invest.

In conclusion, it is difficult to say that the British economy was responsible for the fall in aggregate labour productivity relative to competitors. Firstly, it was inevitable that competitors would catch up with and overtake Britain – as Clapham points out, “half a continent is likely in course of time to raise more coal and make more steel than a small island” – and Britain had no control over the shift of US and German labour away from agriculture into more productive sectors. Secondly, the reason Britain, like Germany, didn't adopt American technology or production processes was that it didn't pay: skilled labour was cheap and natural resources expensive in Europe, which meant it made more sense to use less capital, resulting in lower levels of productivity. Leunig showed this in the textile industry: America adopted the more productive ring spindle because it had greater access to high-quality cotton, whereas Britain didn't and so stuck with the mule spindle, which required lower-quality cotton but larger amounts of the skilled labour Britain had access to. Britain was therefore simply operating within the constraints it was endowed with. It was also limited in that, having led the first industrial revolution, it had path-dependent institutions which were hard to adapt to the new economic circumstances – such as the move towards motor vehicles, chemicals and electrical goods – given the size of the traditional industrial sector.

Brief: Cannibalisation

Cannibalisation occurs when a company reduces the sales of one of its products by introducing a similar, competing product in the same market.
Igami and Yang study this phenomenon in relation to burger outlets, pointing out that the entry of new outlets harms the profitability of existing stores (cannibalisation) but that this occurs so as to preempt the threat of rivals' entry: i.e. a firm may have an incentive to cannibalise its own market share if it thinks this will keep rivals out (by saturating the market and depriving them of store locations).
One of their main findings is that “shops of the same chains compete more intensely with each other than with shops of different chains, which implies cannibalization is one of the most important considerations for the firms’ entry decisions”. A policy conclusion of this would be that a firm should invest in multiple brands such that it can monopolise a geographic area without suffering the effects of cannibalisation.
Thomasden's estimates suggest that only shops within about 0.5 miles are considered close substitutes, despite the car-obsessed nature of society today.

Kitamura and Shinkai find that cannibalisation won't occur unless the goods in the market are either extremely differentiated or extremely similar in terms of quality. Suppose we have two firms each producing two goods in the same market: a low-quality good (A) and a high-quality good (B), where the high-quality good imposes higher production costs but also gives higher utility to the consumer. If there is a large difference in quality between goods A and B then consumers prefer good B (they derive higher utility from it) and so both firms have a greater incentive to supply more of this good, driving product A out of the market. Conversely, when there is little difference between the two goods, the firms have an incentive to supply good A (as it is cheaper to produce), thereby driving good B out of the market.

Strategic Pricing

This article provides an overview of pricing strategy. We begin by explaining the simple Cournot and Bertrand games, which are game-theoretic analyses of a firm's pricing strategy. We then explain what strategic complements and substitutes are, before looking at a paper by Fudenberg and Tirole entitled “The Fat-Cat Effect, the Puppy-Dog Ploy, and the Lean and Hungry Look”.

Cournot Game
In this scenario we have (at least) two firms who compete on the amount of output they produce, choosing quantities simultaneously and independently whilst taking the output decisions of the other firms as given. In making this decision firm 1 needs to contemplate what firm 2 will do: if both firms produce a large quantity then price will fall and so profits will fall. In other words, both firms know that a lower overall quantity is desirable to keep prices and profits high. However, it isn't a Nash equilibrium for both firms to produce low quantities: if Firm 1 expects Firm 2 to produce a low output, it has an incentive to increase its own output, knowing that it can sell more goods without seeing a large change in the price, thereby capturing more of the market and making greater profits.
But Firm 2 realises this and hence also decides to produce a large quantity. The result is that both firms produce a large quantity and the price is low.

The outcome of this is that prices will be above the perfectly competitive outcome, but lower than the monopoly outcome and as the number of firms tends to infinity the price tends to the marginal cost (perfectly competitive outcome). Output will be higher than under monopoly but not as high as with perfect competition.
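
To illustrate, here is a minimal sketch of the symmetric Cournot equilibrium under linear inverse demand P = a − bQ and constant marginal cost c; the closed-form expressions are standard, but all parameter values below are assumptions for illustration.

    # Symmetric Cournot equilibrium with n firms, linear inverse demand
    # P = a - b*Q and constant marginal cost c. Closed form:
    # q_i = (a - c) / (b*(n + 1)),  P = (a + n*c) / (n + 1).

    def cournot(n, a=100.0, b=1.0, c=20.0):
        q_i = (a - c) / (b * (n + 1))   # each firm's equilibrium quantity
        price = a - b * n * q_i         # market price at total output n*q_i
        return q_i, price, (price - c) * q_i

    for n in (1, 2, 5, 100):
        q, p, profit = cournot(n)
        print(f"n={n:>3}  q_i={q:6.2f}  price={p:6.2f}  profit={profit:8.2f}")
    # With n = 1 we recover the monopoly outcome (price 60); as n grows the
    # price falls towards marginal cost (20), the perfectly competitive outcome.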

We point out that this model makes the following assumptions:

  • All firms are producing a homogenous (same) good
  • Firms have market power such that each firm’s output decision affects the good’s price
  • Firms do not cooperate
  • The firms are rational profit maximisers

A disadvantage of this model is that it assumes firms can easily alter quantity; this may not be the case for a manufacturer, for which changing output can be difficult and/or costly.

Bertrand Game
The Bertrand game has similar assumptions to Cournot, in that all firms produce a homogenous good and are rational profit maximisers, but now they set prices simultaneously rather than quantities, and consumers purchase from the firm with the lower price. We assume that if firms set the same price then demand is split equally.

The outcome of such a game is that prices must equal the marginal cost, and hence prices and output are the same as in the perfectly competitive model. We can examine why this must be the case by considering a firm’s potential pricing strategies:
A: Charge the same price as competitors where p>MC
B: Charge the same price as competitors where p=MC
C: Charge the same price as competitors where p<MC
D: Charge a price below competitors where p<MC
E: Charge a price above competitors where p>MC

It is easy to see that C and D cannot be equilibria because charging a price below marginal cost will bankrupt a firm. Furthermore, E isn't an equilibrium because all consumers will flock to the other firm, and our firm's market share will be zero; under E a profit-maximising firm would therefore seek to match its competitor's price in order to make a profit. Of course, if the competitor's price already equals MC then profits will still be zero, and the firm may decide not to alter its price because either way it can't make a profit; but the overall outcome in this case is still p=MC. Finally, A cannot be an equilibrium because each firm has an incentive to price slightly below its competitor in order to take the whole market and make a profit; this process continues until p=MC.

The above analysis has assumed continuous prices. If we take the more realistic assumption of integer pricing (i.e. each price must be to the nearest penny) then the equilibrium is a price one penny above marginal cost. At this price neither firm can undercut the other whilst increasing profit (by undercutting to MC a firm would make zero profit), and neither firm can raise its price without losing its market share. Hence both firms prefer to stick at p=MC+0.01 and make strictly positive profits.
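
The penny-pricing logic can be sketched as an alternating best-response process; this is an illustration with hypothetical numbers, assuming a fixed market size and that a firm undercuts whenever doing so is at least as profitable as matching.

    MC = 100        # marginal cost, in pence
    DEMAND = 1000   # units demanded (assumed fixed for simplicity)

    def profit(p, rival):
        if p > rival:
            return 0                               # all consumers buy from the rival
        share = DEMAND if p < rival else DEMAND / 2
        return (p - MC) * share

    def best_response(rival):
        # Undercut by one penny if at least as profitable as matching the rival.
        return rival - 1 if profit(rival - 1, rival) >= profit(rival, rival) else rival

    p1 = p2 = 150   # both start well above marginal cost
    while True:
        new_p1 = best_response(p2)
        new_p2 = best_response(new_p1)
        if (new_p1, new_p2) == (p1, p2):
            break
        p1, p2 = new_p1, new_p2

    print(p1, p2)   # 101 101: one penny above marginal cost, as argued above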

A disadvantage of this model is that it ignores capacity constraints: it assumes a firm can produce as much as it wants, and can therefore set whatever price it likes and still supply the whole market. In reality, a firm may be limited in the amount it can produce. The Edgeworth paradox tells us that with capacity constraints there may not exist any pure strategy Nash equilibrium.

Stackelberg Game
As an interesting aside, the Stackelberg game is one in which one firm moves first and the follower firms then move sequentially. Such a situation may arise when there is a first-mover advantage: if there is an advantage to moving first, such as the hope of capturing a market, then we can study the scenario as a Stackelberg game. The firms choose an output, so we can compare the Stackelberg game with the Cournot model.
We solve such a model by using backward induction to find the subgame perfect Nash equilibrium (SPNE): we start by considering the follower's best response function – how the follower will respond once it has observed the leader's quantity. The leader then picks a quantity to maximise its payoff, anticipating this predicted response; the follower observes this output and responds accordingly. The outcome is that there is an advantage to moving first: the leader firm makes greater than Cournot-level profits.
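
As a sketch of this backward-induction solution, consider linear inverse demand P = a − b(q1 + q2) with a common marginal cost c; the parameter values are illustrative assumptions.

    a, b, c = 100.0, 1.0, 20.0   # illustrative demand and cost parameters

    def follower_br(q1):
        # Follower maximises (a - b*(q1 + q2) - c)*q2 given the observed q1.
        return (a - c - b * q1) / (2 * b)

    # Substituting the follower's reaction into the leader's profit and
    # maximising gives the leader q1 = (a - c) / (2b).
    q1 = (a - c) / (2 * b)
    q2 = follower_br(q1)
    price = a - b * (q1 + q2)

    leader_profit = (price - c) * q1
    cournot_profit = b * ((a - c) / (3 * b)) ** 2   # per-firm Cournot profit, for comparison

    print(q1, q2, price)                  # 40.0 20.0 40.0
    print(leader_profit, cournot_profit)  # 800.0 vs ~711.1: the first mover earns more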

We are assuming that after the leader has selected its equilibrium quantity, the follower does not deviate from equilibrium by choosing some non-optimal quantity. By doing so it would hurt itself (and also the leader), and therefore – at least in a single game – would violate our assumption of rationality and profit maximisation. In a one-shot game there is no advantage to the follower adopting such a strategy: it might announce to the leader that unless the leader chooses a Cournot-level quantity, it will produce a much larger quantity than its best response in order to lower the leader's profit (hurting itself in the process). However, this threat is not credible and so the leader would disregard it. It isn't credible because once the leader has chosen a quantity above the Cournot level (as we would expect), it makes no sense for the follower to carry out the punishment: it has absolutely no incentive to do so in a single game.

On the other hand, in a repeated game, there may be an advantage to pursuing a punishment strategy and such a threat could be credible.

Without the assumption of perfect information – that the follower observes the quantity of the leader – the game reduces to the Cournot version.

So far we have learnt that when firms compete on quantity, we expect competitive prices when the number of firms is large (i.e. perfectly competitive prices prevail). When firms compete on price, we expect the price to be pushed down to marginal cost (the perfectly competitive equilibrium), with output the same as under perfect competition. Neither the Bertrand model nor the Cournot model can be considered “better”; we choose which model to use based on the situation we face. Remember also that with a large number of firms the Bertrand and Cournot results coincide.
So how will a firm react to a competitor's pricing strategy? Will it increase its own price and therefore its profit, or will it maintain its price and try to capture more of the market? The outcome depends on which strategy maximises profits, and for that we would need individual profit functions; however, we can predict what will happen depending on whether the market is characterised by strategic complements or strategic substitutes.
The Fat-Cat Effect, the Puppy-Dog Ploy, and the Lean and Hungry Look

We want to consider how our competitors will react to us changing our prices, and this depends on whether the market is characterised by strategic complements or strategic substitutes, terms coined by Bulow, Geanakoplos, and Klemperer. If the market exhibits strategic complements then the competitor will respond in kind; if it exhibits strategic substitutes, the competitor firm will acquiesce.

A strategic complements market is typically associated with the Bertrand game: price cuts by one firm are matched by competitors. A strategic substitutes market is associated with the Cournot game: if I capture a larger share of the market then my competitor's share goes down.

The following taxonomy comes from Fudenberg and Tirole’s paper and is based on the notion of a market being strategic complements, or strategic substitutes:
Puppy Dog Ploy: your competitor will fight back if you fight, and so you don't commit to playing tough
Top Dog: you play tough, expecting your competitor not to react
Fat Cat Effect: you don't play tough because you know your competitor will also play easy
Lean and Hungry Look: you look tough because you know that if you played soft your competitor would play tough

If the market is characterised as strategic substitutes then a firm’s strategy can either be top dog or the lean and hungry look. Conversely, if the market is characterised as strategic complements then a firm’s strategy can be the puppy dog ploy or the fat cat effect.

The Fat Cat Effect occurs in a strategic complements market and can lead to the counter-intuitive result that if the incumbent firm in an industry increases its price, both it AND its competitors benefit, as all see profits rise. This is because firms in the industry follow each other's actions: when the incumbent raises its prices, it sees its profits rise without losing much market share, and the second firm acknowledges this (realising it has nothing to gain by keeping prices low) and so raises its prices too. In the advertising market this can mean that greater investment in advertising doesn't deter entry, and that incumbent firms should under-invest in advertising if they want to deter new entry (Schmalensee, and Fudenberg and Tirole). This makes sense when we consider that advertising increases overheads and signals that there is profit to be made in such markets.

The puppy dog ploy arises because, in a strategic complements market, an incumbent which plays tough and cuts prices invites an aggressive response from its competitor, who cuts its own prices to try to undercut the incumbent; anticipating this, the firm commits to staying soft. Where such price competition does break out, it means that in the long term there are fewer profits to invest in technology, and so technological innovation in that industry falls.

Under a Top Dog strategy, the firm increases production to try to crowd out the competitor. This increased production leads to greater quantity supplied and hence lower prices (the law of demand), but for the firm increasing production the higher sales volume results in higher profits even as prices fall. This assumes that the competitor can't respond quickly by increasing its own sales volume, which would likely result in a price war where both firms retain their existing market shares but see lower prices and therefore lower profits. Such a strategy may therefore be wiser when the firm knows its competitor can't increase production, perhaps due to limitations in factory size (which would limit production in the short run until capacity could be expanded).

Bulow, Geanakoplos, and Klemperer furthermore point out that when a firm operating in one market expands into another, this can have strategic effects on the first market, as marginal costs could increase (if there are diseconomies of scale or scope).

Standards of Living during the Industrial Revolution

The debate about living standards in the Industrial Revolution has recently focused on anthropometric measures, such as height and mortality, and linked these to the ability to work more intensively. Describe how these factors may be related. Discuss what the anthropometric evidence reveals about living standards in this period.

Anthropometric measures shed new light on the debate, showing whether people became healthier as a result of the industrial revolution. If they did, they would have been able to work more intensively, needing fewer days off work due to fatigue or illness. Schultz believes there is a positive relationship between height and productivity, because height is a measure of nutritional status, and better-fed, healthier people could work harder.

Height is determined by net calories consumed, and changes in stature can reveal changes in food availability, the content of diet, temperature, the nature of work and the prevalence of disease. Mortality is the number of deaths occurring; an increase could mean more disease, people dying from hunger, or worse living conditions due to dangerous work or poor sanitation.

If height falls we can infer either that food consumption has fallen – which may lead to higher mortality as people starve – or that more energy is being expended fighting disease, which is also likely to increase mortality. Many diseases common at the time, such as diarrhoea and tuberculosis, were sensitive to nutrition; if nutrition was low, mortality would rise as people died from such diseases. This is evidenced at Millbank prison, where prisoners' diets were cut from 3,500 calories a day to 2,644, leading to scurvy and dysentery.

The fall in height could also be attributed to the body spending more energy keeping warm, suggesting that people didn't have enough clothes or heating, and thus that some would succumb to the cold. Allen believes that consumers substituted clothing for food so they could keep warm and reduce their calorific expenditure, potentially reducing mortality.

The Barker hypothesis proposes that poor nutrition during pregnancy leads to poor long-term health and an increased chance of chronic disease later in life. Shorter women are likely to have received inadequate calories, which, if the hypothesis is true, would result in higher mortality in future generations. Meredith and Oxley find that infant mortality hit a low between 1800-10 despite high population growth and a lack of food. They suggest that mortality fell because these children's mothers would have been born 20-40 years earlier, when calorie intake was at its peak, perhaps providing some evidence for the hypothesis.

Floud’s military height data show that height rose between 1780-1820, declined between 1820-50, and then improved after 1860. This would imply that standards of living were broadly improving, or at least not worsening, except in the thirty years between 1820-50. However, revised data from Komlos show a fall over the entire period 1760-1850: average height began at 171.1cm in 1760 but by 1850 was down to 164.7cm. This is more likely to be accurate because Komlos takes into account the fact that the military was selective about height, creating an inherent bias in the data. Humphries concurs on the basis of her convict prisoner data. A large reason for this fall in height was more expensive food and changing diets which were less nutritious.

Oxley and Horrell find that in 1815 those born in London and the surrounding areas tended to be much shorter than other Englishmen, suggesting that poor urban conditions in London resulted in stunting and poor nutrition. This may be because food was expensive in inner cities: it had to be transported large distances, and demand was high due to the concentration of people and the lack of farmland in the immediate vicinity. Furthermore, much of the food was adulterated, which spread disease (e.g. cholera found in watered-down milk) and exacerbated nutritional deficiency. Since the availability and quality of food deteriorated, this must have had a negative influence on standards of living as a whole.

Height and weight data show that men received the most food in the household, meaning they consumed more calories and thus had a relatively better BMI than women, whose BMI fell upon marriage (Meredith). Eden, a contemporary who explored people's diets, concluded that “bread and water are almost the only diet of labourers’ wives and children”, demonstrating that the living standards of women and children were deteriorating: they received less food, which would have affected their height and health and likely contributed to higher mortality and susceptibility to disease.

Humphries finds that when children or women contributed to household income they received a better diet: women and children either went hungry or had to work in terrible conditions to earn a few more calories. Even this was difficult, as Horrell and Humphries find that employment opportunities for females were declining. Additionally, Humphries found that child employment in industry was high, leaving many stunted and in poor health. As late as 1851 the census shows that 36% of 10-14 year olds were working; standards of living can't have risen for this group, as working left them disabled, stunted and often without an education.

The Anthropometric Committee of 1883 found that 14-year-old boys from industrial schools were nearly 7” shorter and 25lbs lighter than those from public schools, implying high inequality: the rich could afford more and better food and endured less toil, whilst the working class had less food and had to work harder and fight off more disease. This height difference along class lines is also apparent in Meredith's findings that members of the upper-class Royal Society averaged 70”, whereas working-class Hertfordshire criminals averaged 65.5”.

Despite the height data implying a fall in living standards, average life expectancy at birth rose between 1760 and 1850, with Szreter and Mooney finding that this rise also occurred in urban cities due to improving conditions. Average life expectancy in provincial cities rose from 35 in 1820 to 42 in the 1890s; in England and Wales as a whole it was higher, starting at 40 in 1800 and rising to 46 by 1890. This trend is corroborated by data from Wrigley and Schofield, painting a different picture from the height data and implying that conditions improved over the period 1760-1850 as people lived longer. The rise in urban life expectancy at birth was due to the enlargement of the suburbs, which had better, less crowded living conditions, resulting in fewer deaths and hence longer lives on average, suggesting improvements in standards of living.

The death rate fell due to improvements in healthcare and the discovery of vaccinations to prevent diseases like smallpox. Between 1783-1807, 19% of Glaswegians who died before the age of ten did so as a result of smallpox; vaccination reduced this rate to 4% by 1807-12 (Szreter and Mooney), demonstrating that there were improvements in certain fields during the industrial revolution which must have raised standards of living.

Mortality was higher in the cities than in rural areas due to overcrowding, poor sanitation and high geographical mobility enabling the spread of disease. It could be argued that standards of living in urban areas were rising, but only from a poor start, and that people who had migrated from rural areas would likely have seen a fall in their quality of living through poorer food, more disease, a higher death rate and the disamenities associated with urban life. Therefore, whilst it could be argued that standards of living rose during the industrial revolution, as evidenced by people living longer on average, it could equally be argued that the prolonging of life came at the expense of a poorer quality of life: children worked dangerous jobs at young ages and forwent an education, food was scarce, and people lived amongst the squalor and disease of the growing urban conurbations. Furthermore, even though life expectancy rose, mortality didn't fall dramatically – “not until the 1870s and 1880s did urban mortality truly recede and children’s heights revive” (Szreter and Mooney). In fact, Wrigley and Schofield believe that mortality rose between 1820-50 before falling. Again, this backs up the belief that standards of living didn't rise during the first half of the nineteenth century and may even have fallen.

In short, the anthropometric data consistently show that standards of living deteriorated during the period 1780-1850, as demonstrated by falling stature. This resulted from a lack of food and poor nutrition, as well as harder toil by adult men and children. Mortality confirms this view, rising during the beginning of the nineteenth century; life expectancy rose, but from a low base, and was only an average. If conditions on the whole are good but there is a short period of deterioration, teenagers will have a growth spurt and end up at or near their full potential height (Oxley); the fact that the numbers stunted increased over the period shows that this wasn't the result of brief episodes of decline but of persistent hardship and deteriorating standards of living. There was also increased inequality in standards of living depending on where someone lived (urban vs rural), what part of the country they lived in, whether they were a man, woman or child, and their class. Thus it is fair to conclude, on current data, that living standards for the greater part of the population were poorer as a result of the Industrial Revolution up to at least 1850.

International Trade and Economic Growth

Does international trade increase economic growth? In this context, what are the trade policies that have been followed by developing countries?

Standard textbook economic theory tells us that international trade benefits both parties in the trade, based on the gains from comparative advantage as laid out by David Ricardo. However, recent research into New Trade Theory suggests that trade may not always be beneficial, and there are examples when it could inhibit growth. This essay will examine when this could be the case and then relate this to the example of developing countries.

The Ricardian story goes that countries have comparative advantages in producing certain goods, irrespective of their absolute advantage. Due to this comparative advantage, countries can specialise in producing these goods and then trade internationally, so that both nations end up able to consume more than they could have without trade. This implicitly assumes that the costs of trade – such as exchange rate risk and transport costs – are negligible. Whilst this may be true of transport costs, which have fallen dramatically since the 1980s as a result of container-ship advances, it may not be true of exchange rates, which have fluctuated more since the 1980s as most countries adopted floating exchange rate regimes, in contrast to the earlier fixed-rate policies of the Gold Standard and then the Bretton Woods accord. Ricardo himself used the example of Portugal and England producing wine and cloth. He believed that Portugal had an absolute advantage in the production of both goods (it could produce more of each than England), but that England had a comparative advantage in producing cloth, meaning it could do so at a lower opportunity cost than Portugal.

Streeten highlights that comparative advantage and international competitiveness might not be mutually consistent. Consider Economy A, which produces raw materials and industrial goods and has a comparative advantage in raw materials over Economy B (which produces the same goods but has its comparative advantage in industrial goods). The removal of trade barriers would cause Economy A to export raw materials in exchange for industrial goods from Economy B. The raw materials industry in Economy A would therefore expand – including its inefficient firms – whilst its industrial goods firms would shrink, including the efficient ones. Only if we were to assume (arbitrarily) that all industrial firms in Economy A were inefficient and all raw materials firms efficient would competitiveness and comparative advantage be consistent.

If we accept the Ricardian story, we then need to examine where comparative advantage comes from. The Heckscher–Ohlin model holds that comparative advantage comes from countries having different factor prices. For example, Portugal may have been able to produce grapes cheaply – due to exogenous conditions such as the climate – giving it a comparative advantage in the production of wine over England. Studies have cast doubt on this theory, showing that a significant proportion of US imports are in capital-intensive industries, despite the US itself being a capital-intensive economy; the Heckscher-Ohlin model would instead imply that the US ought to import labour-intensive goods such as raw materials or cheap manufactures.

Standard economic theory tells us that trade can cause economic growth because companies face international competition and must therefore eliminate Leibenstein's X-inefficiency by cutting costs. This reduces inefficiencies and means the economy uses its resources more efficiently. Furthermore, some firms benefit from economies of scale: by selling to a larger market they can move further down their average cost curve, which, where production exhibits increasing returns to scale, results in lower costs, meaning lower prices for consumers or potentially higher supernormal profits because of the natural monopoly situation that can arise under increasing returns to scale.

Stephen Redding would also argue that it is possible for a country to manufacture its comparative advantage through state intervention, citing the examples of South Korea and the other Asian Tigers. Whilst this may cast doubt on laissez-faire policy, it doesn't necessarily detract from the argument for free trade. In fact it may even support it, suggesting that a government should steer the economy towards a comparative advantage in a good or service neglected by other economies, from which it could trade successfully and increase overall consumption. Others contend that the government is inefficient and lacks the knowledge to steer the economy into a comparative advantage successfully, and fear that it would instead be influenced by pressure groups seeking to increase their economic rent.

Some development economists also worry that a developing country opening its borders to foreign competition would be unable to compete, and so would be at a disadvantage and lose out on economic growth. This is the infant industry protection argument, which suggests that a country should maintain trade barriers until its industry has grown sufficiently to compete internationally. However, there is no guarantee that an industry enjoying supernormal profits arising from the lack of competition would have an incentive to reduce X-inefficiencies and invest in new technology; it may instead maintain its supernormal profits in the belief that, so long as it remains inefficient (and would therefore go out of business if exposed to international competition, resulting in mass unemployment), the government will not relax trade barriers. To overcome this problem the government would be well advised to make credible promises to reduce trade barriers at a certain date, inducing firms to increase efficiency and ensure they can compete on the international stage; however, this faces a credibility problem. Furthermore, Rodrik proposes that any country which liberalises trade has to be seen to be doing so of its own accord, due to an internal commitment, as opposed to external pressure from creditor institutions like the World Bank. If a government liberalised merely to access World Bank funds, private agents may not respond, for fear that the government will reverse these liberalisation policies once it has overcome the debt crisis that sent it to the World Bank in the first place. Rodrik sees this as a particular problem during the 1980s, when many countries had balance of payments problems and needed support from the World Bank, given on the condition that liberalisation policies were implemented. This made it particularly difficult for governments which genuinely wanted to implement these policies – and may have chosen that moment because it is easier to implement radical reforms during troubled times than when the economy is sound and pressure groups have influence – because private agents couldn't tell whether the commitment was credible. To overcome this problem, Rodrik identifies that countries like Turkey had to implement even more radical proposals than the World Bank required – for example devaluing the currency further than necessary – in order to send an effective signal to entrepreneurs and business.

Dollar and Kraay study what they call the post-1980 globalisers, which include developing countries such as India, China, Argentina, Mexico, Malaysia and Thailand. They select these countries because they had both reduced trade barriers and seen an increase in trade volumes as a proportion of GDP. Dollar and Kraay were looking to see whether trade policies increased growth or inhibited it. They were therefore wary of countries which saw an increase in trade volumes for non-policy reasons (Rwanda was noted as such an example: it had seen a large increase in trade volumes, but this was suspected to be due to the cessation of civil war rather than policy-induced change), and they measured trade in proportional terms so as not to miss countries which were geographically disadvantaged and hence predisposed to trade less than other countries. They also looked at reductions in trade barriers to see which countries were inducing increased trade through policy, although they noted that this method has its drawbacks, both because of issues with weighting tariffs and because a country may have increased non-tariff barriers, which are difficult to identify. Despite these issues, they found that countries which liberalised trade during the 1980s experienced higher growth relative both to other developing countries and to developed countries, meaning they experienced absolute convergence, whereas the other developing countries saw lower growth than developed countries and thus saw the gap between them widen.

The post-1980 globalisers saw economic growth per annum rise from 2.9% in the 1970s to 3.5% in the 1980s, reaching 5% in the 1990s, whilst the corresponding figures for other developing countries were 3.3%, 0.8% and 1.4%. However, this increase in growth among the globalisers cannot be attributed solely to trade policy: many other changes were occurring at the time, for example the introduction of property rights in China, which is also likely to have raised growth. Furthermore, they identified that there was no increase in inequality arising from globalisation, meaning the poorest 20% didn't see their share of income fall: if anything it rose slightly.

Based on this evidence it is hard not to conclude that international trade leads to economic growth; however, the theory behind it hasn't been satisfactorily proven, and we must be cautious in attributing all economic growth to trade liberalisation when many other factors were also in play.

What is the WACC?

The weighted average cost of capital (WACC) is simply an average of the costs of the two types of capital: debt and equity. It tells us the return investors require to finance a project. Therefore, if we were to offer a return lower than the WACC we would find no investors, whilst a return greater than the WACC would lead to excess demand for our project.

To calculate the theoretically correct rate of return on a project we would compare it with existing returns on projects with similar risk characteristics, and then set the return in line with these projects to ensure that we can attract finance.

WACC is often used by regulators in the pricing of controlled industries such as energy and telecoms. These regulators calculate an allowed WACC intended to be high enough to encourage investment, but not so high that it allows the excessive monopoly profits these regulators are established to prevent. If the WACC is set too high it could lead to overinvestment (which may be considered good in the sense that it should lead to lower costs in the long run, but could be unsustainable) or supernormal profits at the expense of consumers; if set too low, we will see under-investment and potentially under-provision.

Obviously this assumes that regulators have sufficient information about the variables involved in such calculations, and that they don't suffer from regulatory capture (the situation whereby the regulator is influenced too heavily by its regulatees and ends up capitulating to the whims of such firms, which in this case could result in a higher WACC and higher prices for consumers). Finally, we also have to assume that there are no political influences on the setting of the WACC: for example, regulators may base their decisions on final price increases, such that any price increase faced by consumers is politically feasible.

We can formally calculate the WACC as:

WACC = (Gearing * Cost of Debt) + ((1-Gearing)*Cost of Equity)

The cost of debt itself can be calculated as an average (or a weighted average) of existing debt and new debt. This makes sense; the cost of a given level of debt for a firm depends upon the interest rate it has to pay out on existing debt, and the interest rate it has to pay on new debt (either because the firm is expanding/investing or because it is rolling debt over).

The cost of equity can be calculated through a variety of methods, one being the Capital Asset Pricing Model (CAPM), not fully discussed here (though see the sketch after this list), which requires knowledge of inputs such as the:

  • risk-free rate – the interest rate which occurs given no risk, this is usually the rate on government debt, because a government can simply print money and should therefore not need to default on its debt
  • equity premium – this is the difference between the return on bonds and on equity; it exists because bondholders are the first to receive a payment in the event of a firm going bankrupt, and therefore face less risk than equity holders who are the last to receive payment.
  • asset beta – this indicates whether the investment is more or less volatile than the market as a whole.
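
A minimal sketch of the CAPM calculation built from the inputs above; the figures are illustrative assumptions, and the beta is treated as an equity beta for simplicity.

    def capm_cost_of_equity(risk_free, beta, equity_premium):
        # Required return = risk-free rate plus beta times the equity premium.
        return risk_free + beta * equity_premium

    print(capm_cost_of_equity(risk_free=0.03, beta=1.2, equity_premium=0.05))  # 0.09 -> 9%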

The costs above are assumed to be pre-tax for debt and post-tax for equity; if this is not the case then we need additional information on tax rates to adjust our results and obtain the real vanilla WACC above. In the capital asset pricing model, beta risk is the only kind of risk for which investors should receive an expected return higher than the risk-free rate of interest.

The level of gearing is the amount of debt a firm has in relation to its overall capital structure. So if we let capital = amount of debt + amount of equity then the level of gearing can be calculated as debt/capital.

It then makes sense that all our WACC equation is doing is adding up the cost of debt and the cost of equity, each weighted by its share of capital (gearing and 1 − gearing respectively); we could do a similar exercise where we let gearing represent the proportion of equity, and just adjust our formula accordingly.
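
Putting the pieces together, here is a minimal sketch of the WACC formula above; the debt, equity and cost figures are illustrative assumptions.

    def wacc(debt, equity, cost_of_debt, cost_of_equity):
        gearing = debt / (debt + equity)   # proportion of debt in the capital structure
        return gearing * cost_of_debt + (1 - gearing) * cost_of_equity

    # e.g. 40m of debt at 5% and 60m of equity at 9%:
    print(wacc(40e6, 60e6, 0.05, 0.09))   # 0.074 -> a WACC of 7.4%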